IRC log for #gluster, 2014-01-21

All times shown according to UTC.

Time Nick Message
00:08 mattappe_ joined #gluster
00:10 zapotah joined #gluster
00:10 zapotah joined #gluster
00:12 purpleidea semiosis: haha, i can kill JMWbot if you like.
00:12 purpleidea it's johnmark who can remove the messages :P
00:13 purpleidea @chanstats
00:13 semiosis purpleidea: johnmark hasn't
00:14 semiosis i just dont like that every time he comes around i get pinged by the bot
00:15 leochill joined #gluster
00:15 diegows joined #gluster
00:15 JoeJulian Could the JMWbot just PM him instead?
00:19 semiosis actually, i could just /ignore the bot, that would solve my problem
00:19 semiosis woio
00:19 semiosis woo*
00:19 JoeJulian True
00:19 semiosis my first /ignore ever
00:22 semiosis purpleidea: when is your hangout?
00:22 semiosis JoeJulian: you gotta do one of these hangouts too!
00:22 purpleidea semiosis: wed
00:22 JoeJulian Yeah, probably...
00:22 purpleidea semiosis: actually it pm's him privately the things that are asked of it privately. and publicly the public things
00:22 semiosis purpleidea: time?
00:23 purpleidea semiosis: 2pm est
00:23 semiosis cool
00:23 JoeJulian I should demonstrate how to hang out on IRC, help a few people, and make bad puns that nobody gets.
00:23 purpleidea semiosis: have any specific things you'd like me to cover?
00:25 purpleidea JoeJulian: do you have a backup of all the data added to your bot somewhere?
00:25 JoeJulian yep
00:25 purpleidea JoeJulian: okay sweet... i hope it's not on a gluster share :P
00:26 JoeJulian Heh, it's image backups since it's hosted at rackspace.
00:26 purpleidea JoeJulian: pretend i just helped you solve some gluster n00b problem... say something that some user in channel says at the end of a convo.
00:26 JoeJulian Finally... sleep time.
00:27 purpleidea ,,(next)
00:27 glusterbot Another satisfied customer... NEXT!
00:27 purpleidea ah! like it?
00:27 JoeJulian @next
00:27 glusterbot JoeJulian: Error: You must be registered to use this command. If you are already registered, you must either identify (using the identify command) or add a hostmask matching your current hostmask (using the "hostmask add" command).
00:27 JoeJulian interesting.
00:27 purpleidea (idea stolen from #git)
00:27 purpleidea yeah i have no idea why the @ doesn't work, i guess there's another command
00:28 JoeJulian @help next
00:28 glusterbot JoeJulian: (next takes no arguments) -- Retrieves your next unread note, if any.
00:28 JoeJulian That's why
00:30 semiosis lol
00:30 JoeJulian @++
00:30 glusterbot JoeJulian: Another satisfied customer... NEXT!
00:30 semiosis purpleidea: cover how you solved master selection
00:31 purpleidea semiosis: ah yes, the question you asked at linuxcon. feel free to bring it up again!
00:31 semiosis will do
00:31 purpleidea semiosis: also: puppet-gluster now integrates with my puppet-keepalived module, so the admin doesn't have to configure this at all... it's _really_ automatic
00:31 semiosis neat
00:32 semiosis it was hard to find the even on the G+ page, but i found it & added it to my calendar
00:32 semiosis s/even /event /
00:32 glusterbot What semiosis meant to say was: it was hard to find the event on the G+ page, but i found it & added it to my calendar
00:33 purpleidea semiosis: cools haha, i almost forgot about it, but then johnmark sent out a reminder "announce" mail :P
00:41 * semiosis afk
00:41 semiosis laters
00:54 mattapperson joined #gluster
00:58 mattapperson joined #gluster
01:00 TrDS left #gluster
01:16 mattappe_ joined #gluster
01:18 mattapperson joined #gluster
01:20 robo joined #gluster
01:22 mattapp__ joined #gluster
01:43 chirino joined #gluster
01:44 shyam joined #gluster
01:44 bala joined #gluster
01:52 harish joined #gluster
02:11 shyam joined #gluster
02:30 Oneiroi joined #gluster
02:36 TheDingy_ joined #gluster
02:37 raghug joined #gluster
02:40 overclk joined #gluster
02:45 harish joined #gluster
02:56 bharata-rao joined #gluster
03:23 mattappe_ joined #gluster
03:24 mattapperson joined #gluster
03:29 rastar joined #gluster
03:39 shubhendu joined #gluster
03:40 mojorison joined #gluster
03:42 kshlm joined #gluster
03:46 RameshN joined #gluster
03:52 sprachgenerator joined #gluster
03:54 kanagaraj joined #gluster
03:58 mohankumar joined #gluster
03:59 itisravi joined #gluster
04:03 mattappe_ joined #gluster
04:03 mattapp__ joined #gluster
04:10 rastar joined #gluster
04:14 SpeeR joined #gluster
04:16 dusmant joined #gluster
04:33 mattappe_ joined #gluster
04:33 mattappe_ joined #gluster
04:34 ngoswami joined #gluster
04:38 kdhananjay joined #gluster
04:38 ppai joined #gluster
04:40 mattappe_ joined #gluster
04:40 rjoseph joined #gluster
04:40 mohankumar joined #gluster
04:43 shyam joined #gluster
04:58 MiteshShah joined #gluster
05:00 shylesh joined #gluster
05:00 CheRi joined #gluster
05:01 CheRi joined #gluster
05:01 bala joined #gluster
05:09 nshaikh joined #gluster
05:17 prasanth joined #gluster
05:20 sahina joined #gluster
05:20 satheesh1 joined #gluster
05:22 ndarshan joined #gluster
05:29 KORG joined #gluster
05:30 psharma joined #gluster
05:30 KORG|2 joined #gluster
05:31 satheesh1 joined #gluster
05:33 satheesh2 joined #gluster
05:41 cfeller joined #gluster
05:43 ricky-ti1 joined #gluster
05:49 vpshastry joined #gluster
05:54 davinder joined #gluster
05:57 vimal joined #gluster
05:57 vpshastry left #gluster
06:05 raghu joined #gluster
06:06 jporterfield joined #gluster
06:11 hagarth joined #gluster
06:12 smellis hey guys, I am seeing a lot of entries in the output of volume heal info on my dist-repl volume with a lot of vms on it.
06:13 sticky_afk joined #gluster
06:13 smellis my smaller setups, with 4 vms and two replicated bricks, don't show anything unless one of the bricks went away for a while (restart for updates)
06:13 stickyboy joined #gluster
06:13 smellis I'm curious if this is a problem or not?
06:13 smellis running 3.4 btw
06:13 smellis my smaller setups are 3.3
06:14 smellis my testing of 3.4 with smaller loads didn't show this problem
06:14 lalatenduM joined #gluster
06:17 saurabh joined #gluster
06:21 blook joined #gluster
06:26 mohankumar joined #gluster
06:32 KORG|2 joined #gluster
06:48 mohankumar__ joined #gluster
06:49 ndarshan joined #gluster
06:49 sahina joined #gluster
06:51 shubhendu joined #gluster
06:51 blook joined #gluster
06:56 kanagaraj joined #gluster
06:57 RameshN joined #gluster
07:00 bala joined #gluster
07:06 kdhananjay joined #gluster
07:08 armiller left #gluster
07:12 ngoswami joined #gluster
07:15 satheesh2 joined #gluster
07:18 DV joined #gluster
07:20 jtux joined #gluster
07:24 bala joined #gluster
07:26 rastar joined #gluster
07:33 JoeJulian smellis: Probably not a problem. It's normal for busy systems to catch files in a transient state so they'll show up on the heal list.
07:40 ekuric joined #gluster
07:40 mattappe_ joined #gluster
07:52 ndarshan joined #gluster
07:53 kanagaraj joined #gluster
07:53 shubhendu joined #gluster
07:55 bala joined #gluster
07:56 dneary joined #gluster
07:57 eseyman joined #gluster
08:03 s2r2_ joined #gluster
08:21 RameshN joined #gluster
08:22 sahina joined #gluster
08:25 dusmant joined #gluster
08:26 franc joined #gluster
08:26 franc joined #gluster
08:41 hagarth joined #gluster
08:47 vpshastry joined #gluster
08:51 harish joined #gluster
08:52 andreask joined #gluster
08:55 kdhananjay joined #gluster
08:59 Shri joined #gluster
09:00 meghanam joined #gluster
09:00 meghanam_ joined #gluster
09:01 blook joined #gluster
09:05 mgebbe joined #gluster
09:07 JoeJulian purpleidea: Don't you ever sleep? ;)
09:14 eseyman joined #gluster
09:16 bharata_ joined #gluster
09:17 mohankumar joined #gluster
09:20 davinder2 joined #gluster
09:21 purpleidea JoeJulian: no :(
09:22 purpleidea JoeJulian: you neither? You're reading the ml?
09:30 JoeJulian More noticing when it changes as opposed to reading...
09:31 purpleidea fair enough. how's it going?
09:31 JoeJulian Getting close enough to my goal that I can't sleep...
09:31 purpleidea what's the goal?
09:31 rjoseph1 joined #gluster
09:32 aravindavk joined #gluster
09:34 JoeJulian I had a machine get fried when the CoLo's power was screwed up by APC while doing some maintenance. We replaced it but I'm redoing its configuration using openstack havana. Very nice tool, but it still has some bugs to work around.
09:37 purpleidea ah interesting... i haven't played with much openstack yet, i guess that's on some todo list somewhere...
09:38 samppah :O
09:38 JoeJulian I think I've got it all stable now, so now I'm creating a puppet module to install freeswitch since I've always previously done it by hand.
09:39 JoeJulian Only have 3 more VMs to build and we'll be back off rackspace finally.
09:40 purpleidea cool! never played with freeswitch but it's cool stuff
09:41 kshlm purpleidea: I'll retry everything this evening and let you know of my results.
09:41 JoeJulian It's nice. Works flawlessly and makes the cost of 100 lines pretty damned cheap.
09:42 JoeJulian Especially since most of our company's phone calls were between locations. No cost for that now.
09:43 purpleidea JoeJulian: oh sweet... you mean most calls are between your X offices? do you have IP phones everywhere?
09:43 JoeJulian yes
09:43 JoeJulian and yes
09:43 bharata__ joined #gluster
09:43 purpleidea sweet! you running asterisk at all?
09:45 JoeJulian I did, but there were parts of it that just bugged me. FS was built by one of the original developers of * who wanted to rewrite it to be multithreaded.
09:46 JoeJulian rats... I was hoping puppet would be smart enough to do a groupinstall from the @development short name.
09:46 purpleidea didn't know that... i played with asterisk a long long time ago, and i remember having trouble. but i was worse at things too
09:46 purpleidea JoeJulian: it does...
09:47 JoeJulian change from absent to present failed: Could not find package @development
09:47 purpleidea JoeJulian: i think you might like: https://github.com/purpleidea/puppet-yum/blob/master/manifests/init.pp#L107
09:47 glusterbot Title: puppet-yum/manifests/init.pp at master · purpleidea/puppet-yum · GitHub (at github.com)
09:49 JoeJulian Yeah, that'll save me some typing.
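For context, the failure above looks like the stock yum package provider not handling @group names; on the command line the equivalent would be something like the following (the group name/id is distro-dependent, so "Development Tools" here is an assumption):
    yum -y groupinstall "Development Tools"
    # or the @group shorthand that the package type chokes on:
    yum -y install @development
The puppet-yum link above wraps roughly this behind a manifest so groups can be declared like ordinary resources.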
09:49 JoeJulian Well, i'm finally losing concentration so I guess I'll go sleep for a few hours.
09:49 JoeJulian ttfn
09:49 purpleidea JoeJulian: okay, sleep well!
09:49 purpleidea ,,(next)
09:49 glusterbot Another satisfied customer... NEXT!
09:49 purpleidea :P
09:52 KORG|2 joined #gluster
09:55 kanagaraj_ joined #gluster
09:59 nshaikh joined #gluster
10:00 calum_ joined #gluster
10:02 KORG|2 joined #gluster
10:09 getup- joined #gluster
10:14 flrichar joined #gluster
10:17 kanagaraj joined #gluster
10:20 ells joined #gluster
10:22 ells joined #gluster
10:23 tryggvil joined #gluster
10:24 khushildep joined #gluster
10:25 mohankumar__ joined #gluster
10:27 dusmant joined #gluster
10:28 vsa joined #gluster
10:30 kdhananjay joined #gluster
10:30 vsa Hi all! Gluster 3.4.2, 2 nodes , split-brain . When i do 'heal full', a lot of errors, like "no gfid present skipping" for many files. How I can fix it?
10:31 F^nor joined #gluster
10:36 jporterfield joined #gluster
10:39 flrichar joined #gluster
10:40 KORG|2 joined #gluster
10:41 vsa Hi all! Gluster 3.4.2, 2 nodes , split-brain . When i do 'heal full', a lot of errors, like "no gfid present skipping" for many files. How I can fix it?
10:43 KORG|2 joined #gluster
10:44 KORG|2 joined #gluster
10:44 KORG|2 joined #gluster
10:45 aravindavk joined #gluster
10:49 dusmant joined #gluster
10:54 KORG|2 joined #gluster
10:54 vsa Hi all! Gluster 3.4.2, 2 nodes , split-brain . When i do 'heal full', a lot of errors, like "no gfid present skipping" for many files. How I can fix it?
10:57 KORG|2 joined #gluster
11:01 calum_ joined #gluster
11:02 dusmant joined #gluster
11:02 flrichar joined #gluster
11:04 KORG|2 joined #gluster
11:07 purpleidea kshlm: sorry for ignoring your earlier, somehow i didn't see your message until now. thanks for testing and ping me anytime.
11:07 purpleidea s/your/you/
11:07 glusterbot What purpleidea meant to say was: kshlm: sorry for ignoring you earlier, somehow i didn't see your message until now. thanks for testing and ping me anytime.
11:17 KORG|2 joined #gluster
11:20 KORG|2 joined #gluster
11:20 ababu joined #gluster
11:24 satheesh1 joined #gluster
11:24 KORG|2 joined #gluster
11:24 diegows joined #gluster
11:30 aravindavk joined #gluster
11:31 KORG|2 joined #gluster
11:34 edward2 joined #gluster
11:43 kkeithley1 joined #gluster
11:46 ndarshan joined #gluster
11:50 hagarth joined #gluster
11:51 kkeithley1 joined #gluster
11:56 rjoseph joined #gluster
11:58 Shri joined #gluster
11:59 s2r2_ joined #gluster
12:04 satheesh joined #gluster
12:07 CheRi joined #gluster
12:08 dusmant joined #gluster
12:12 itisravi joined #gluster
12:26 tryggvil joined #gluster
12:27 kanagaraj joined #gluster
12:29 jporterfield joined #gluster
12:38 ira joined #gluster
12:39 vsa Hi all! Gluster 3.4.2 , how i can fix split-brain, if in "heal info split-brain"  i see "#2014-01-21 10:20:07 / "        without names 603 times ?
12:48 shyam joined #gluster
12:49 satheesh joined #gluster
12:54 plarsen joined #gluster
13:01 mattappe_ joined #gluster
13:03 chirino joined #gluster
13:03 Cenbe joined #gluster
13:04 mattapperson joined #gluster
13:05 mattappe_ joined #gluster
13:06 Cenbe joined #gluster
13:07 Cenbe joined #gluster
13:08 mattapperson joined #gluster
13:14 kdhananjay joined #gluster
13:15 T0aD joined #gluster
13:19 rjoseph joined #gluster
13:22 dusmant joined #gluster
13:22 shyam joined #gluster
13:24 mtanner_ joined #gluster
13:27 rwheeler joined #gluster
13:29 sroy joined #gluster
13:31 P0w3r3d joined #gluster
13:32 sticky_afk joined #gluster
13:32 stickyboy joined #gluster
13:33 ababu joined #gluster
13:33 mattappe_ joined #gluster
13:34 mattappe_ joined #gluster
13:44 B21956 joined #gluster
13:49 bala joined #gluster
13:53 qdk joined #gluster
13:56 japuzzo joined #gluster
13:57 andreask joined #gluster
14:03 sinatributos joined #gluster
14:03 sinatributos Hi.
14:03 glusterbot sinatributos: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
14:03 zapotah joined #gluster
14:05 sinatributos I want to build a distributed-replicated volume with 2 bricks on 2 servers (initially). I will add 4 bricks from 2 other servers later to the volume. Is it possible to do this?
14:08 mattapperson joined #gluster
14:09 kshlm joined #gluster
14:10 purpleidea sinatributos: yes
14:11 zapotah joined #gluster
14:11 zapotah joined #gluster
14:12 purpleidea why don't you try prototyping the setup first? You can use ,,(vagrant)
14:12 glusterbot (#1) Part 1 @ https://ttboj.wordpress.com/2013/12/09/vagrant-on-fedora-with-libvirt/, or (#2) Part 2 @ https://ttboj.wordpress.com/2013/12/21/vagrant-vsftp-and-other-tricks/, or (#3) Part 3 @ https://ttboj.wordpress.com/2014/01/02/vagrant-clustered-ssh-and-screen/, or (#4) Part 4 @ https://ttboj.wordpress.com/2014/01/08/automatically-deploying-glusterfs-with-puppet-gluster-vagrant/,
14:12 glusterbot or (#5) https://ttboj.wordpress.com/2014/01/16/testing-glusterfs-during-glusterfest/
14:13 davinder joined #gluster
14:14 jskinner_ joined #gluster
14:16 rwheeler joined #gluster
14:17 zapotah joined #gluster
14:17 zapotah joined #gluster
14:17 dbruhn joined #gluster
14:18 ngoswami joined #gluster
14:19 sinatributos purpleidea: thanks. I'm now setting up 2 VM's to prototype but I'm afraid my computer will be too slow and I do not have access to the real HW yet.
14:21 purpleidea sinatributos: the vm's i'm currently using for glusterfs have 512mb ram.
14:21 purpleidea keep in mind that i'm not interested in testing performance with this setup, but functionality.
14:22 purpleidea 512mb should be sufficient for you to learn how to add/remove hosts as long as you have a cpu with vt extensions
14:22 sinatributos purpleidea: Thanks, I'm building the testing scenario now. I will probably come back with more questions :-)
14:27 itisravi joined #gluster
14:29 ells joined #gluster
14:30 zapotah joined #gluster
14:32 purpleidea sinatributos: okay, good luck!
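For anyone following along, the setup sinatributos describes could be prototyped with something like this (host names and brick paths are placeholders; with replica 2, bricks have to be added in multiples of two):
    gluster volume create myvol replica 2 server1:/bricks/b1 server2:/bricks/b1
    gluster volume start myvol
    # later, once the two extra servers exist:
    gluster peer probe server3
    gluster peer probe server4
    gluster volume add-brick myvol server3:/bricks/b1 server4:/bricks/b1 server3:/bricks/b2 server4:/bricks/b2
    gluster volume rebalance myvol start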
14:34 purpleidea Can anyone test if running a -o remount from ro to rw works on a gluster fuse mount on 3.5.0beta1 ? I might have hit an anomaly of my setup, or maybe this doesn't work anymore. Note that the mount command showed the change to 'rw' but it still didn't work writing files until i unmounted and mounted again.
14:35 glusterbot New news from newglusterbugs: [Bug 1056085] logs flooded with quota enabled <https://bugzilla.redhat.com/show_bug.cgi?id=1056085>
14:35 bennyturns joined #gluster
14:35 abyss^ I have some data already on my disk. I installed gluster in replicate mode and created a gluster volume with a brick where the data is lying, but gluster doesn't see this data. How can I make gluster use the existing data?
14:36 * Peanut waves at purpleidea, thanks for your quick reply.
14:37 Peanut And no, I'm afraid I can't test your mounting question at the moment.
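For anyone who wants to reproduce purpleidea's remount question, a rough sketch (server, volume and mountpoint names are placeholders):
    mount -t glusterfs server1:/myvol /mnt/test -o ro
    mount -o remount,rw /mnt/test   # mount output shows rw afterwards
    touch /mnt/test/probe           # purpleidea saw writes still fail here until a full umount/mount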
14:38 vimal joined #gluster
14:40 abyss^ when I try rebalance I get: Volume saas_bookshelf is not a distribute volume or contains only 1 brick. But I have two bricks
14:41 pravka joined #gluster
14:41 dbruhn abyss^, a rebalance will only rebalance a distributed volume, the replicated volume should have the same data on both
14:42 dbruhn I am not sure what impact a rebalance has on a striped volume, but a straight two brick replication volume can't rebalance
14:44 psyl0n joined #gluster
14:44 lalatenduM johnmark, ping
14:45 dbruhn abyss^, can you move the data out of the brick directory and copy it back in via the mount?
14:46 abyss^ dbruhn: yes, of course you are right, but how can I set up gluster on existing data...
14:46 Copez joined #gluster
14:46 Copez Hi all
14:47 Copez Could someone help me out with a design issue :)
14:47 abyss^ dbruhn: no I can't because I have to prepare data earlier. It's long story:)
14:48 abyss^ dbruhn: I've read that it's possible to set up gluster on existing data...
14:48 abyss^ so I would try
14:48 dbruhn abyss^: understood, I've not done it, are any of the files showing up through the mount?
14:49 abyss^ dbruhn: no:(
14:52 mattappe_ joined #gluster
14:53 JoeJulian ~splitbrain | vsa
14:53 glusterbot vsa: To heal split-brain in 3.3+, see http://joejulian.name/blog/fixing-split-brain-with-glusterfs-33/ .
14:54 vimal joined #gluster
14:54 abyss^ JoeJulian: it's not for me?
14:54 dbruhn abyss^: I think if you can get the extended attributes setup correctly on the existing data, it should replicate it on a self heal to the second brick, and show up in the file system. But I could be wrong
14:55 lalatenduM glusterbot,
14:56 Peanut JoeJulian: my bricks are producing an awful lot of logfiles (/var/log/glusterfs/bricks/export-brick0-sdb1.log) - how can I turn that off again? We turned that on when looking at my kvm live-migration issue.
14:56 JoeJulian abyss^: Creating a volume with data pre-loaded on one of the bricks is technically an unsupported anomaly that works. When creating your volume, however, the pre-loaded brick has to be the leftmost: ie. gluster volume create foo replica 2 server1:/data/preloaded server2:/data/empty
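Spelled out, the sequence JoeJulian describes would look roughly like this (paths are examples, and as he says it's an unsupported-but-working trick, so try it on throwaway VMs first):
    gluster volume create foo replica 2 server1:/data/preloaded server2:/data/empty
    gluster volume start foo
    gluster volume heal foo full    # pushes the pre-loaded files over to the empty replica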
14:56 Peanut And I'm pretty sure I turned it off again, but 'df' shows otherwise ;-)
14:57 Peanut And how do I 'SIGHUP' it or something, so it starts on a new logfile?
14:58 JoeJulian Peanut: Unfortunately, last time I checked, "gluster volume reset..." didn't actually reset the log levels to default. Set them to INFO and then reset again if you like nice tidy "volume info" output like I do.
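In command form, that set-then-reset dance would look roughly like this (volume name is a placeholder; the brick logs in question are governed by the brick log level option):
    gluster volume set myvol diagnostics.brick-log-level INFO
    gluster volume reset myvol diagnostics.brick-log-level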
14:58 JoeJulian We use copytruncate with our logrotate scripts. It's much easier.
14:59 * Peanut currently has the root partition at 100%, which I'm trying to fix without crashing things..
14:59 badone joined #gluster
14:59 JoeJulian >/var/log/glusterfs/bricks/export-brick0-sdb1.log
14:59 Peanut I already did an rm :-(
15:00 JoeJulian volume log rotate <VOLNAME>
15:01 JoeJulian I thought I HUP did it too, but I'm not positive.
15:01 jbrooks joined #gluster
15:01 JoeJulian s/I HUP/a HUP/
15:01 glusterbot What JoeJulian meant to say was: I thought a HUP did it too, but I'm not positive.
15:01 Peanut Phew, 1.8G available, the logrotate worked - writing that one down.
15:02 Peanut oooh, neat, it does it on both gluster halves!
15:02 dbruhn JoeJulian, do you know if anyone has submitted that change for the copy-truncate to the project?
15:02 theron joined #gluster
15:02 jobewan joined #gluster
15:02 dbruhn Since the standard log rotate installed doesn't rotate those pesky problem logs
15:02 JoeJulian Not that I'm aware of.
15:03 kaptk2 joined #gluster
15:03 johnmilton joined #gluster
15:03 JoeJulian I manage all logs with puppet for every module I write. That way I'm aways sure the logs are being rotated the way I want them and they're all logged to logstash.
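A minimal sketch of the copytruncate approach JoeJulian mentions, as a logrotate snippet (paths, schedule and retention here are only examples):
    /var/log/glusterfs/*.log /var/log/glusterfs/bricks/*.log {
        weekly
        rotate 4
        compress
        missingok
        copytruncate
    }
copytruncate avoids having to signal the gluster processes, at the cost of possibly losing the few lines written between the copy and the truncate.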
15:05 glusterbot New news from newglusterbugs: [Bug 1056085] logs flooded with invalid argument errors with quota enabled <https://bugzilla.redhat.com/show_bug.cgi?id=1056085>
15:05 vimal joined #gluster
15:06 JoeJulian Interesting bug
15:07 Peanut Is there a way to see which log-level I'm currently at? I don't see a 'gluster volume get' in the manpage?
15:07 dbruhn gluster volume info
15:07 dbruhn will output the log level if it's different than the default
15:07 drscream left #gluster
15:08 JoeJulian dbruhn: not necessarily.
15:08 Peanut dbruhn: Seems that it doesn't, sorry.
15:08 JoeJulian dbruhn: If you change the log level, then reset the log level, the log level will not be displayed but it will not reset back to the default.
15:08 JoeJulian someone should file a bug report on that...
15:08 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
15:11 dbruhn Really, never knew.
15:12 Technicool joined #gluster
15:13 JoeJulian bug 1056147
15:13 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1056147 unspecified, unspecified, ---, kaushal, NEW , volume reset of loglevels does not reset the log levels to defaults
15:13 SteveCooling joined #gluster
15:14 Peanut JoeJulian: that was quick :-) Maybe add that gluster volume info does think it gets reset?
15:14 badone joined #gluster
15:15 SteveCooling hi guys, when issuing "gluster volume heal MYVOL info" i get some heal-info entries that are only "<gfid:e4dea3cc-ba52-4724-926a-55d4878e75f4>" and not a file name. what are those?
15:15 JoeJulian Peanut: Yeah, I mentioned that in the description.
15:15 JoeJulian SteveCooling: http://joejulian.name/blog/what-is-this-new-glusterfs-directory-in-33/
15:15 glusterbot Title: What is this new .glusterfs directory in 3.3? (at joejulian.name)
15:17 lalatenduM JoeJulian, I think seen this bug earlier , not remember the bug number though
15:19 JoeJulian SteveCooling: Combine that info with the way gluster handles marking files for self-heal. It adds a link to the gfid file in .glusterfs/indices/xattrop for files that need healed. Until it has the inode in its dict, that gfid file cannot be resolved to a filename.
15:19 abyss^ JoeJulian: sorry, but I don't quite get it ;) So is it possible to do something with the existing data or not? As dbruhn said, I thought the data is not visible because of the extended attributes on those files...
15:19 JoeJulian One way to resolve that to a filename is to do a heal...full.
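Besides a full heal, a gfid entry can usually be resolved by hand on a brick, because for regular files the .glusterfs entry is a hard link to the real file (for directories it's a symlink, so this trick doesn't apply). The brick path below is a placeholder:
    brick=/bricks/brick1
    gfid=e4dea3cc-ba52-4724-926a-55d4878e75f4
    # .glusterfs shards by the first four hex characters of the gfid: e4/de/<gfid>
    find $brick -samefile $brick/.glusterfs/e4/de/$gfid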
15:20 JoeJulian abyss^: Where's your existing data?
15:20 JoeJulian ~pasteinfo | abyss^
15:20 glusterbot abyss^: Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
15:21 samsamm joined #gluster
15:21 JoeJulian lalatenduM: It didn't show up in my search. /shrug
15:22 abyss^ JoeJulian: on ext4 partition. http://ur1.ca/gh0ix - output
15:22 glusterbot Title: #70334 Fedora Project Pastebin (at ur1.ca)
15:23 lalatenduM JoeJulian, actually few volume set reset does not work as expected, log reset might be one of them :)
15:23 JoeJulian abyss^: lol.. that's not quite what I was asking... :D Is it on glusterfs-migrate1 or glusterfs-migrate2?
15:24 abyss^ JoeJulian: on glusterfs-migrate1 , sorry;)
15:24 JoeJulian lalatenduM: Ah, I see. I wasn't searching generic enough.
15:24 wushudoin joined #gluster
15:24 abyss^ JoeJulian: I had that issue, so I had to set up new gluster servers: http://www.gluster.org/pipermail/gluster-users/2013-December/038264.html because I couldn't solve this issue...
15:24 lalatenduM JoeJulian, no prob :) it should be fine
15:24 glusterbot Title: [Gluster-users] Error after crash of Virtual Machine during migration (at www.gluster.org)
15:25 JoeJulian abyss^: Well that usually works. Try "gluster volume heal saas_bookshelf full"
15:26 abyss^ JoeJulian: nothing happen
15:27 chirino joined #gluster
15:27 abyss^ JoeJulian: if it's possible to repair the issue from the mailing list, that would be much better than this, could you look at it?
15:29 JoeJulian abyss^: Using the ,,(glossary) when you refer to the "first gluster" are you referring to a "server"?
15:29 glusterbot abyss^: A "server" hosts "bricks" (ie. server1:/foo) which belong to a "volume"  which is accessed from a "client"  . The "master" geosynchronizes a "volume" to a "slave" (ie. remote1:/data/foo).
15:30 sinatributos NEW Q: In slide 14 of this slide-show hosted at the gluster website (http://www.gluster.org/community/documentation/images/0/0c/Glusterfs_for_sysadmins-justin_clift.odp) it says that gluster should be built over bricks of at most 16TB. I have not seen this limitation in the admin guide. Is this an old requirement?
15:30 JoeJulian Never heard of it.
15:30 ndk joined #gluster
15:31 JoeJulian jclift: ?
15:31 abyss^ JoeJulian: you want from to clarify that maillist problem?
15:31 jclift JoeJulian: ?
15:31 abyss^ s/you want from/you want from me/
15:31 glusterbot What abyss^ meant to say was: JoeJulian: you want from me to clarify that maillist problem?
15:32 jclift Oh, mailing list problem?  Are details in IRC history?
15:32 * jclift starts looking back
15:32 JoeJulian No, jclift, two lines up from my question mark. :D
15:32 jclift Ahhh, got it.
15:32 abyss^ I am lost;)
15:32 jclift sinatributos: Ahhh, I'm pretty sure I copied that from another guys presentation, to save time.
15:33 JoeJulian lol
15:33 abyss^ I do not know what going on now;)
15:33 jclift So, I'm not authorative on it.
15:33 * jclift looks around for the GlusterFS limits doc
15:33 jclift Pretty sure there was one around somewhere
15:33 JoeJulian abyss^: In IRC there's often more than one conversation happening at once. Generally you can just follow the ones that start with your name.
15:34 abyss^ JoeJulian: Yes, I try but I am lost;p Which answer is to me and not to me;p
15:35 abyss^ JoeJulian: Can you help with the issue that I pasted from the mailing list? I don't wanna move data from the old gluster to the new one... A better way is to just repair that problem, I think :)
15:35 glusterbot New news from newglusterbugs: [Bug 1056147] volume reset of loglevels does not reset the log levels to defaults <https://bugzilla.redhat.com/show_bug.cgi?id=1056147>
15:35 mattappe_ joined #gluster
15:35 sinatributos jclift: hehehe ok. I've seen gluster installation over bigger bricks so I guess this does not apply. Thanks!!
15:35 jclift sinatributos: No worries. :)
15:36 sinatributos jclift: And thanks for the tutorial as well, really nice.
15:36 jclift Any time.
15:36 sinatributos :)
15:36 * jclift should get around to doing more tutorial stuff soon.  Need to finish up some other bits first tho.
15:37 abyss^ JoeJulian: if it's impossible to repair, then is it possible to make a new gluster on the existing data? I'd like to do smth like: rsync the data from gluster server 1 to another disk, then set up new gluster servers on that data and reconnect the client to the new gluster servers...
15:38 JoeJulian abyss^: I'm replying to your email.
15:38 mattapperson joined #gluster
15:38 abyss^ JoeJulian: oh thank you. It's a production environment and everybody is looking at me because I introduced glusterfs to the company;)
15:39 abyss^ JoeJulian: if you have a time we can try to solve the issue here (on irc).
15:40 dbruhn jclift, could that 16TB limitation come from the XFS limitation on 32 bit hosts
15:42 jclift dbruhn: Possible?
15:42 jclift Possible. I mean.
15:43 dbruhn That's the only thing I could think would drive a requirement like that from earlier limitations
15:43 saurabh joined #gluster
15:44 JoeJulian abyss^: Sure, I'll continue to help you here, just the reply seemed better suited to inline comments on your original email.
15:44 abyss^ JoeJulian: I modified some files here: /var/lib/glusterd/vols/sa_bookshelf/ to make glusterfs work after that VM's crash. It's better because I can do heal info etc after I do command replace-brick, but I still can't abort replace-brick...
15:46 badone joined #gluster
15:47 rotbeard joined #gluster
15:47 abyss^ now I can't abort this because I get: storage-1-saas, is not a friend. I migrated glusterfs servers this way: http://gluster.org/community/documentation/index.php/Gluster_3.4:_Brick_Restoration_-_Replace_Crashed_Server
15:48 glusterbot Title: Gluster 3.4: Brick Restoration - Replace Crashed Server - GlusterDocumentation (at gluster.org)
15:48 abyss^ (because migrate via gluster way didn't work after crash)
15:50 JoeJulian So probe storage-1-saas from the good server.
15:51 abyss^ JoeJulian: I haven't got any lines in info file about rebalance or replace
15:51 JoeJulian hrm... I wonder if there are ,,(extended attributes) on the brick to mark that then.
15:51 glusterbot (#1) To read the extended attributes on the server: getfattr -m .  -d -e hex {filename}, or (#2) For more information on how GlusterFS uses extended attributes, see this article: http://hekafs.org/index.php/2011/04/glusterfs-extended-attributes/
15:51 benjamin__ joined #gluster
15:53 abyss^ JoeJulian: storage-1-saas doesn't exist. It was the glusterfs server where the data had to be migrated, but when the Virtual Machine crashed I couldn't proceed with the migration because of the errors I described on the mailing list :( So should I set up an arbitrary glusterfs server called storage-1-saas, add it to the peers and abort?
15:55 abyss^ JoeJulian: I had some lines about replace-brick in this file: sa_bookshelf.storage-gfs-3-prd.ydp-shared.vol I removed them all and change some lines that referenced to replace-brick
15:55 JoeJulian No
15:55 JoeJulian The .vol files are created based on the info file.
15:55 JoeJulian Have you tried stopping and starting the volume?
15:56 abyss^ OK. But I did it (about one month ago to make  glusterfs work)
15:56 JoeJulian Also fpaste that info file.
15:57 abyss^ JoeJulian: I moved glusterfs that way: http://gluster.org/community/documentation/index.php/Gluster_3.4:_Brick_Restoration_-_Replace_Crashed_Server So it should refresh the volume info, or not? I can't do that now, because glusterfs is working now, I can't just turn it off...
15:57 glusterbot Title: Gluster 3.4: Brick Restoration - Replace Crashed Server - GlusterDocumentation (at gluster.org)
15:58 nshaikh joined #gluster
15:58 abyss^ but it moved with this problem. I hoped that if I moved the glusterfs servers to another place that way, it would clear up the issue:)
16:00 JoeJulian 1. check the extended attributes of each brick and look for clues. Use the command glusterbot gave you earlier. If nothing seems obvious to you, fpaste the results of the getfattr query. 2. fpaste the info file unless you're sure you don't want a more experienced user to take a look at it.
16:01 zapotah joined #gluster
16:01 jag3773 joined #gluster
16:01 abyss^ JoeJulian: the files: http://ur1.ca/gh0qc
16:01 glusterbot Title: #70354 Fedora Project Pastebin (at ur1.ca)
16:02 bala1 joined #gluster
16:02 mohankumar__ joined #gluster
16:03 bugs_ joined #gluster
16:06 abyss^ JoeJulian: probably I have to read about extended attrs because I don't get how they would help me solve the issue with replace-brick:)
16:08 abyss^ all I want is just I would like to cancel the replace-brick command;) Now if I do command: gluster volume rebalance sa_bookshelf fix-layout start
16:08 abyss^ I get ofcourse: Rebalance failed as replace brick is in progress on volume sa_bookshelf
16:08 JoeJulian Sure, there's an article written about some of the ,,(extended attributes) in the following factoid. That's not all of them and gluster uses xattrs extensively for many things. I'm writing a puppet module for something completely unrelated or I'd spin up some VMs and kill one of them in the middle of a replace-brick to see what it looks like. Unfortunately, my boss would rather I get my work done.
16:08 glusterbot (#1) To read the extended attributes on the server: getfattr -m .  -d -e hex {filename}, or (#2) For more information on how GlusterFS uses extended attributes, see this article: http://hekafs.org/index.php/2011/04/glusterfs-extended-attributes/
16:09 JoeJulian Plus, if I do all the work, how are you going to learn anything? :D
16:10 purpleidea Peanut: no worries!
16:10 Peanut purpleidea: I've got lots of worries, just see the mailing list ;-)
16:12 purpleidea :P
16:12 abyss^ JoeJulian: I learned a lot while trying to solve the issue ;p Now I just want to fix it;) OK, I'm going to read about that, but you should remember my english isn't so good, so I hope I understand the article and advance a little with that problem;)
16:13 daMaestro joined #gluster
16:13 japuzzo_ joined #gluster
16:13 JoeJulian If it were me, and I was under the gun, I would run the command listed by glusterbot (#1) on the brick root and see what I can see. The xattrs are mostly self-evident.
16:15 hagarth joined #gluster
16:15 abyss^ JoeJulian: I did that on one file, getfattr -m . -d -e hex filename, and there are some chars and figures there but their meaning is a mystery to me;)
16:16 ira joined #gluster
16:16 sprachgenerator joined #gluster
16:16 ira joined #gluster
16:16 JoeJulian fpaste it.
16:17 abyss^ JoeJulian: http://fpaste.org/70367/03210431/
16:17 glusterbot Title: #70367 Fedora Project Pastebin (at fpaste.org)
16:17 abyss^ on first gluster server on first brick
16:17 abyss^ (/ydp/shared
16:17 abyss^ )
16:18 JoeJulian Is that the one you were trying to replace?
16:18 neofob joined #gluster
16:18 sroy__ joined #gluster
16:19 abyss^ JoeJulian: yes
16:19 JoeJulian There's nothing that looks remotely like there's a replace-brick going on in anything you've shared. My only possible assumption without further testing is that this is something in memory that would be cleared by restarting the volume.
16:20 JoeJulian If you'd like to wait 8 hours (when I finish my work day) I can try to duplicate your issue.
16:20 abyss^ I would do that, maybe at that time nobody will notice the interruption if I stop and start the volume;)
16:20 JoeJulian It is interesting though...
16:21 abyss^ JoeJulian: yes, the same bug was earlier but in version 3.3.0 and I read that it would be repaired
16:22 JoeJulian wait...
16:22 JoeJulian I think my coffee's finally kicking in...
16:22 abyss^ JoeJulian: In my country, in eight hours it will be 3 a.m. :)
16:22 JoeJulian You did a replace-brick on a replicated volume.
16:22 abyss^ yes
16:23 abyss^ then VM crashed during that
16:23 JoeJulian It thinks it's still replacing the brick...
16:23 abyss^ yes
16:23 abyss^ even that I remove the all lines in /var/lib/glusterd/vols/sa_bookshelf concern replace-brick
16:23 JoeJulian Did you try: gluster volume replace-brick sa_bookshelf storage-gfs-3-prd:/ydp/shared storage-1-saas:/ydp/shared commit force
16:23 abyss^ *removed
16:23 LoudNoises joined #gluster
16:24 abyss^ JoeJulian: I get: storage-1-saas, is not a friend
16:24 JoeJulian hmm
16:24 JoeJulian Sure. Create a dummy storage-1-saas and befriend it.
16:25 abyss^ OK, just add to peer do replace-brick abort force, and remove from peer?
16:25 JoeJulian No, not abort, commit.
16:25 abyss^ ok commit
16:26 abyss^ but if I do commit this isn't start to copy the data?
16:26 JoeJulian It should work unless it fails since you've modified it's state files.
16:26 JoeJulian It's a replicated volume. The commit says you're done with the process and just makes it final.
16:27 abyss^ JoeJulian: yes, I had to modify it because without those modifications I couldn't do anything (heal, replace-brick, nothing at all)
16:27 JoeJulian Since it's replicated, if you failed to copy any files from the source, a heal...full will fix it.
16:27 JoeJulian Not arguing, I'm just forecasting possibilities.
16:28 abyss^ yes, I just write why I did that ;)
16:29 keytab joined #gluster
16:29 abyss^ JoeJulian: ok, let's see, sorry, as I said my english is not so good... And I don't get it... I don't want to replace any data or any brick, and if I add a new gluster server called storage-1-saas and do commit force, it won't move the data and the glusterfs server to that server?
16:31 cfeller joined #gluster
16:33 JoeJulian Right. "commit force" will not move data. Your new brick will have replaced the old one. Since those bricks are replicated, the replica that still has the data will be able to populate the new brick by doing a "gluster volume heal $vol full".
16:34 badone joined #gluster
16:34 tdasilva joined #gluster
16:37 JoeJulian In other words, storage-gfs-3-prd:-ydp-shared and storage-gfs-4-prd:-ydp-shared have the same data. If you "gluster volume replace-brick sa_bookshelf storage-gfs-3-prd:-ydp-shared storage-foo:/bar commit force" (without even issuing the start command ever) then storage-foo:/bar will be an empty replacement for storage-gfs-3-prd:-ydp-shared. Since it's a replicated volume, however, the client will not see any difference. The files on storage-gfs-4-prd:-ydp-shared will be replicated to storage-foo:/bar as they're accessed. Issuing "gluster volume heal sa_bookshelf full" will cause the entire contents of storage-gfs-4-prd:-ydp-shared to be healed to storage-foo:/bar
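Pulled together, the recovery JoeJulian is suggesting amounts to roughly this, run from a good server (the dummy storage-1-saas peer only needs glusterd running, and as he notes the replace can be reversed with another commit force if it turns out wrong):
    gluster peer probe storage-1-saas
    gluster volume replace-brick sa_bookshelf storage-gfs-3-prd:/ydp/shared storage-1-saas:/ydp/shared commit force
    gluster volume heal sa_bookshelf full    # repopulate the new brick from the surviving replica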
16:37 abyss^ JoeJulian: what do you mean "new brick"? I don't wanna make a new brick, all things should stay as they are now. Brick1: storage-gfs-3-prd:/ydp/shared is OK, I don't wanna: Brick1: storage-1-saas:/ydp/shared ;) Please forgive me for the stupid questions, but it's production;) and I want to make sure you understand me (even with my english;))
16:39 SpeeR joined #gluster
16:39 abyss^ JoeJulian: yes, exactly. And I don't want the data on a new brick on a new server. I just want to cancel this, because I moved the glusterfs servers that way (unfortunately with this issue;)): http://gluster.org/community/documentation/index.php/Gluster_3.4:_Brick_Restoration_-_Replace_Crashed_Server
16:39 glusterbot Title: Gluster 3.4: Brick Restoration - Replace Crashed Server - GlusterDocumentation (at gluster.org)
16:39 JoeJulian And yes. I know exactly what you want done. If "commit force" is successful, your volume will be back to normal. If it's not successful it will remain in its current state.
16:40 abyss^ JoeJulian: OK:) Thank you. And thank you for your patience :)
16:40 micu joined #gluster
16:42 JoeJulian If I'm wrong... you can just "gluster volume replace-brick sa_bookshelf storage-1-saas:/ydp/shared storage-gfs-3-prd:/ydp/shared commit force" to get it back.
16:42 abyss^ OK:)
16:42 JoeJulian I wonder if that can hit the path or prefix notice
16:44 B21956 joined #gluster
16:45 abyss^ Sorry, but I don't understand what you're wondering about;) Could you clarify? OK, I'm going to do that now: I'll quickly set up a glusterfs server called storage-1-saas, then add it to the peers and do commit force. BTW: why can't I do abort force?;)
16:45 purpleidea kshlm: new puppet-gluster+vagrant with lots of new goodness :) night!
16:45 purpleidea (pushed to git master)
16:46 JoeJulian You could try abort force, but I've never seen it be successful.
16:46 abyss^ ;)
16:46 abyss^ my hands are shaking;)
16:47 benjamin__ joined #gluster
16:47 JoeJulian purpleidea: bed time?!?! It's work time now...
16:47 abyss^ hehe
16:48 purpleidea JoeJulian: you didn't sleep enough :P now i didn't.
16:48 JoeJulian I got my 3 hours...
16:49 mohankumar__ joined #gluster
16:49 JoeJulian Yes, abyss^, you're taking advice from a guy who only got 3 hours sleep and is running on espresso. Feel confident.
16:49 abyss^ ;)
16:49 abyss^ But I've been reading your blog and I have more confidence;)
16:50 abyss^ but it's production so that's why my hands are shaking;)
16:50 JoeJulian Yeah, I've figured out a thing or two.
16:50 calum_ joined #gluster
16:50 vpshastry joined #gluster
16:50 vpshastry left #gluster
16:54 abyss^ JoeJulian: gluster volume replace-brick sa_bookshelf storage-gfs-3-prd:/ydp/shared storage-1-saas:/ydp/shared abort
16:54 zerick joined #gluster
16:54 abyss^ storage-1-saas, is not connected at the moment
16:54 abyss^ OK
16:54 abyss^ so I tried: # gluster volume replace-brick sa_bookshelf storage-gfs-3-prd:/ydp/shared storage-1-saas:/ydp/shared commit abort
16:54 abyss^ brick: storage-gfs-3-prd:/ydp/shared does not exist in volume: sa_bookshelf
16:54 abyss^ ;)
16:55 Alex I'm curious - what sort of caches does gluster maintain? I suppose, more accurately, I'm concerned about the lack of read performance when I've got a gluster pair and most CPU utilisation seems to hit one of the bricks only
16:55 purpleidea JoeJulian: hey joe, i meant to ask you... can you point me to somewhere that explains how gluster decides _where_ to read from? what i mean is, does it just randomly pick one of the replicas, or... ?
16:56 abyss^ JoeJulian: now: gluster volume replace-brick sa_bookshelf storage-gfs-3-prd:/ydp/shared storage-1-saas:/ydp/shared abort
16:56 abyss^ brick: storage-gfs-3-prd:/ydp/shared does not exist in volume: sa_bookshelf
16:56 abyss^ even with abort...
16:57 mohankumar__ joined #gluster
16:57 pkoro joined #gluster
16:57 JoeJulian Alex, purpleidea: last time I looked (which was two years ago) it sends the lookup() request sequentially to each server in the subvolume starting with the first. It does this in as close to parallel as you can using a serial network connection. The first server to respond to that lookup() will serve that fd until it's closed.
16:59 Alex JoeJulian: Oh, that's interesting. Currently I've got ailing clients on one node not the other - I wonder if that's FD ownership
16:59 JoeJulian Alex, purpleidea: In practice this means that as long as the first server in the list isn't overloaded to the point of being slowed down, it'll usually win. Is this a problem? Perhaps, but once it's overloaded enough, the next server will pick up the slack so I'm not sure.
16:59 purpleidea JoeJulian: interesting... will need to do more research! i've actually really gotta crash. night!
17:00 JoeJulian purpleidea: good morning. Try to not let the birds chirping keep you awake.
17:00 Alex JoeJulian: I have two boxes, both which run nginx fronting most of the gluster volume, and currently nginx is "so busy" it's not accepting conns
17:00 purpleidea naw i can sleep through all of that :)
17:01 abyss^ JoeJulian: any ideas? I still have the brick: Brick1: storage-gfs-3-prd:/ydp/shared
17:01 JoeJulian Alex: Have you seen my philosophies on caching on my blog? ,,(php)
17:01 glusterbot Alex: (#1) php calls the stat() system call for every include. This triggers a self-heal check which makes most php software slow as they include hundreds of small files. See http://joejulian.name/blog/optimizing-web-performance-with-glusterfs/ for details., or (#2) It could also be worth mounting fuse with glusterfs --attribute-timeout=HIGH --entry-timeout=HIGH
17:01 glusterbot --negative-timeout=HIGH --fopen-keep-cache
17:02 JoeJulian abyss^: abort force?
17:02 Alex JoeJulian: I'll have a read - but for ref, this is the origin for a CDN - so it's just large file xfers which "should" be cached -- will double check
17:02 abyss^ the same
17:03 abyss^ JoeJulian: the same, and additionally abort force doesn't work (force command with abort)
17:03 abyss^ I did smth like this: gluster volume replace-brick sa_bookshelf storage-gfs-3-prd:/ydp/shared storage-1-saas:/ydp/shared abort force and get: Usage: volume replace-brick <VOLNAME> <BRICK> <NEW-BRICK> {start|pause|abort|status|commit [force]}
17:03 JoeJulian Alex: Any chance of misses?
17:04 JoeJulian abyss^: I didn't think that would work, but it was worth a try...
17:04 Alex JoeJulian: Reasonable - think thundering herd after a set of misses, but it's affecting performance more than I would expect.. just looking through now to get an idea of the amount of requests.
17:04 abyss^ JoeJulian: ;)
17:04 s2r2_ joined #gluster
17:05 abyss^ JoeJulian: in the info file I have that brick, but whatever I try with /ydp/shared on storage-gfs-3 I get this error:/
17:06 iksik_ joined #gluster
17:07 ndk` joined #gluster
17:07 JoeJulian Alex: gluster server and nginx on the same machine? How big's your volume?
17:07 tryggvil_ joined #gluster
17:07 psyl0n_ joined #gluster
17:08 eclectic_ joined #gluster
17:08 d-fence_ joined #gluster
17:09 sprachgenerator joined #gluster
17:09 ells joined #gluster
17:09 Krikke joined #gluster
17:09 Alex JoeJulian: Yes, and ~3TB used
17:09 lanning_ joined #gluster
17:09 Cenbe joined #gluster
17:09 jag3773 joined #gluster
17:09 iksik joined #gluster
17:09 kaptk2 joined #gluster
17:09 JoeJulian Alex: Sorry, how big as in how many bricks... how about just ,,(pasteinfo)?
17:09 glusterbot Alex: Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
17:09 mikedep333 joined #gluster
17:10 jporterfield joined #gluster
17:10 jclift joined #gluster
17:10 yosafbridge joined #gluster
17:10 mohankumar__ joined #gluster
17:10 Alex JoeJulian: Just two bricks - https://gist.github.com/78f2a09ae20a8e040a81
17:10 glusterbot Title: gist:78f2a09ae20a8e040a81 (at gist.github.com)
17:11 Alex NB: most of our disk accesses are "local" - think nginx hitting root /shared/[...]
17:11 Alex sorry, I said two bricks - I meant two peers. D'oh :)
17:12 JoeJulian Heh, I got it. :D
17:12 JoeJulian abyss^: I'm stuck for ideas that don't involve interrupting your users. Sorry.
17:13 abyss^ JoeJulian: I detach from peer and I back to first issue;)
17:14 xavih joined #gluster
17:14 JoeJulian Alex, purpleidea: Ok, there has been a change in that time. See ,,(undocumented options) and search for cluster.read-hash-mode and cluster.choose-local.
17:14 glusterbot purpleidea: Undocumented options for 3.4: http://www.gluster.org/community/documentation/index.php/Documenting_the_undocumented
17:14 glusterbot Undocumented options for 3.4: http://www.gluster.org/community/documentation/index.php/Documenting_the_undocumented
17:15 * Alex reads
17:15 JoeJulian Alex: The latter may be what you're looking for.
17:15 sinatributos Hi. Can anyone confirm if the debian wheezy packages for version 3.4.1 include rdma IB support or if I have to compile from source to use this feature?
17:15 Alex It sounds like it, yeah.
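For reference, both knobs from that page are ordinary volume options, set like any other (volume name is a placeholder, and the exact value semantics are worth checking against the page above before relying on them):
    gluster volume set myvol cluster.choose-local on
    gluster volume set myvol cluster.read-hash-mode 2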
17:15 sinatributos Sorry, I mean 3.4.2
17:15 JoeJulian semiosis: ^^
17:16 abyss^ JoeJulian: Ok. If you get around to doing that test with a crashing VM, please write your conclusions here and I'll read them in the morning; now I have to go to my family;) They are worried about where I am;) Thank you a lot and bye! :)
17:16 JoeJulian sinatributos: I'd be surprised if it's not. It's the standard tarball.
17:18 andreask joined #gluster
17:22 semiosis sinatributos: should be included afaik
17:25 sinatributos JoeJulian: semiosis: thanks.
17:29 jskinner_ joined #gluster
17:37 Mo__ joined #gluster
17:51 sroy_ joined #gluster
17:55 andreask joined #gluster
17:56 hagarth joined #gluster
18:00 jskinner_ joined #gluster
18:16 kshlm joined #gluster
18:27 jskinner_ joined #gluster
18:27 SpeeR joined #gluster
18:29 SpeeR joined #gluster
18:31 jbrooks joined #gluster
18:32 jbrooks left #gluster
18:32 jbrooks joined #gluster
18:33 khushildep joined #gluster
18:36 jbrooks left #gluster
18:36 jbrooks joined #gluster
18:38 ira joined #gluster
18:39 ira joined #gluster
18:43 failshell joined #gluster
18:44 failshell let's say I have a volume named test in 2 different clusters, can I replicate it from A to B and also from B to A? to have a multi-master setup?
18:45 jbrooks joined #gluster
18:46 zapotah joined #gluster
18:46 zapotah joined #gluster
18:46 semiosis failshell: maybe, but not using geo-replication
18:46 semiosis afaik
18:47 failshell yeah i had to use force to setup the second (B to A), but it fails to start
18:48 semiosis ha
18:48 semiosis reminds me of the radiohead song.... "you can force it but it will not come..."
18:48 failshell http://blog.bittorrent.com/2013/09/10/sync-hacks-how-to-use-bittorrent-sync-as-geo-replication-for-storage/
18:48 failshell just stumbled on this
18:49 semiosis depending on your goals there are many ways to do this kinda thing
18:49 semiosis afk
18:56 plarsen joined #gluster
18:57 SpeeR joined #gluster
18:59 SpeeR_ joined #gluster
19:04 lpabon joined #gluster
19:05 muhh joined #gluster
19:10 SpeeR joined #gluster
19:26 tryggvil joined #gluster
19:38 davinder joined #gluster
19:46 zapotah joined #gluster
19:52 RedShift joined #gluster
19:56 gdubreui joined #gluster
20:02 TrDS joined #gluster
20:03 SpeeR joined #gluster
20:05 rwheeler joined #gluster
20:09 TrDS joined #gluster
20:10 jag3773 joined #gluster
20:16 kshlm joined #gluster
20:24 s2r2_ joined #gluster
20:31 SpeeR joined #gluster
20:33 flrichar joined #gluster
20:34 kshlm joined #gluster
20:39 MacWinner joined #gluster
20:45 chirino joined #gluster
21:08 glusterbot New news from newglusterbugs: [Bug 1056276] Self-Heal Daemon is consuming excessive CPU <https://bugzilla.redhat.com/show_bug.cgi?id=1056276>
21:12 jag3773 joined #gluster
21:16 badone joined #gluster
21:22 gdubreui joined #gluster
21:26 y4m4 joined #gluster
21:27 y4m4 joined #gluster
21:33 y4m4 joined #gluster
21:35 pravka joined #gluster
22:04 sulky joined #gluster
22:07 eshy_ joined #gluster
22:09 eshy joined #gluster
22:25 ninkotech_ joined #gluster
22:30 khushildep joined #gluster
22:35 psyl0n joined #gluster
22:38 LoudNois_ joined #gluster
22:38 ninkotech__ joined #gluster
22:47 purpleidea JoeJulian: interesting, thanks! Would be good to find out more about those two options!
22:48 tdasilva left #gluster
22:49 XpineX joined #gluster
22:58 cyberbootje hi all, i'm having trouble with my gluster setup
22:59 semiosis hello
22:59 glusterbot semiosis: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
22:59 semiosis cyberbootje: ^
22:59 semiosis you know how it goes
22:59 jskinner_ joined #gluster
23:00 cyberbootje i have a replica 2 setup with clients connected that are VM hosts, for some reason a couple of weeks ago i got a R/O on storage 1, i rebooted, checkdisked and let gluster heal everything... all was fine without VM's going down, but now i got a kernel stuck on storage 2 and for some reason this time gluster did not fail over seamlessly....
23:01 cyberbootje all VM's went down, etc.. big mess
23:10 cyberbootje as if storage 2 became the main storage after the switch(fail of storage 1) and then the switch from 2 back to 1 did not go smooth like before...
23:10 JoeJulian Hmm, that kind of stuff was supposed to be addressed in 3.4.
23:10 JoeJulian ... and there is no "main storage". :P
23:10 cyberbootje oh boy
23:11 cyberbootje i know, just a matter of speaking:P
23:11 sprachgenerator_ joined #gluster
23:11 cyberbootje i'm running 3.3.1
23:11 JoeJulian Though if the kernel is stuck, I could see that being a problem for a userspace application to recover from.
23:13 cyberbootje JoeJulian: how come? if it is stuck then it's not working right?
23:13 cyberbootje then the other storage should take over
23:14 JoeJulian Sure, after the 42 second ping-timeout
23:14 cyberbootje oh boy
23:15 JoeJulian btw... I have a blog article about preventing the R/O from happening due to a ping-timeout.
23:18 cyberbootje where?
23:18 JoeJulian http://joejulian.name
23:18 glusterbot Title: JoeJulian.name (at joejulian.name)
23:20 tryggvil joined #gluster
23:22 cyberbootje if i change the timeout, how low should i go?
23:22 JoeJulian 42 seconds, in my opinion.
23:23 cyberbootje i cannot control every VM to allow "errors=continue"
23:24 JoeJulian @lucky how long does ext4 have to block for it to go read only
23:24 glusterbot JoeJulian: https://www.kernel.org/doc/Documentation/filesystems/ext4.txt
23:25 cyberbootje there are still vm's using ext3 by the way
23:25 JoeJulian same thing
23:26 JoeJulian ext2 is the same as well. The only difference is their journaling.
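The usual guest-side workaround here is to tell the filesystem not to flip to read-only when it sees an error, which is exactly the errors=continue trade-off being discussed (the device name is a placeholder):
    tune2fs -e continue /dev/vda1   # ext2/3/4: keep going instead of remounting read-only
    # or persistently, via the guest's fstab mount options:
    # /dev/vda1  /  ext4  defaults,errors=continue  0  1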
23:26 cyberbootje what harm can it cause if i set the timeout to 2 sec ?
23:29 JoeJulian If you have a 2 second network disconnection (like moving a connection to another switch, a BGP convergence, STP reroute) will cause a brick (or bricks) to be disconnected.
23:30 cyberbootje ok, but that's fine
23:30 cyberbootje it's all locally connected
23:30 cyberbootje that's not an issue
23:31 JoeJulian btw... when it happens I'm going to point you at this date in the channel logs. ;)
23:31 cyberbootje well
23:32 cyberbootje if it happens, then what.... it should move the connection to the other storage, right?
23:32 cyberbootje i don't see the problem...
23:34 JoeJulian It's always connected to both. What will happen is that it will drop the connection along with all the file descriptors and locks associated with that connection. (if the vm host's connection is the one with the temporary issue it'll lose ALL the bricks instead of waiting to gracefully continue an established connection). When that connection comes back, it will reestablish those. This usually causes quite a bit of cpu usage and overall slowness.
23:34 jag3773 joined #gluster
23:35 JoeJulian If that excessive cpu usage happens to make a transaction last longer than 2 seconds, it'll drop the connection and do it all over again.
23:35 cyberbootje creating a loop
23:37 sinatributos joined #gluster
23:38 JoeJulian I like to plan for worst case scenarios. Like the guy who thought it would be okay to unplug the network for just a second... drops the cord... on Friday at 4:50... after an all-nighter installing new _____... and months since you changed that setting and forgot all about it.
23:39 JoeJulian .... and I have at least a dozen ways of making that scenario worse.
23:39 JoeJulian most of which has actually happened to me at one time or another.
23:44 cyberbootje well
23:45 JoeJulian I'm not saying under no circumstances should you ever change it, just adding fuel to your planning process.
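For reference, the timeout being debated is an ordinary volume option; changing it is one command (volume name and value are only placeholders to show the shape of the change, not a recommendation):
    gluster volume set myvol network.ping-timeout 10
    # default is 42 seconds; see the trade-offs JoeJulian lays out above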
23:46 cyberbootje this is what i got on the storage: task glusterfsd:7883 blocked for more than 120 seconds.  over and over again
23:46 JoeJulian eww
23:47 gdubreui joined #gluster
23:47 cyberbootje then: soft lockup - CPU#1 stuck for 22s! [scsi_eh_1:472]
23:48 cyberbootje and a bunch of other kernel stuck/bug errors
23:49 JoeJulian scsi driver fault it would appear.
23:49 ninkotech__ joined #gluster
23:51 cyberbootje weird
23:51 cyberbootje we use SAS
23:52 cyberbootje not scsi
23:52 JoeJulian same driver base
23:53 ninkotech__ joined #gluster
23:54 cyberbootje so no bug in gluster?
23:54 JoeJulian I wouldn't guarantee there isn't, but that sure sounds like a hardware problem.
23:56 cyberbootje any known issues with gluster - Ubuntu server ?
23:57 JoeJulian Not that I'm aware of. Double check bugzilla to be sure though.
23:57 cyberbootje thx
23:58 cyberbootje earlier you said that there was an issue in versions before 3.4? The failover ?
23:58 JoeJulian There's a feature that's been added in 3.4 to detect timeouts that happen when a hard drive fails instead of just sitting there forever waiting for it.
23:59 cyberbootje so i don't have to update?
