
IRC log for #gluster, 2015-01-28


All times shown according to UTC.

Time Nick Message
00:05 T3 joined #gluster
00:06 gildub joined #gluster
00:44 jobewan joined #gluster
00:49 bala joined #gluster
00:51 PeterA joined #gluster
00:52 PeterA1 joined #gluster
00:52 PeterA1 joined #gluster
00:56 Gill joined #gluster
00:57 T3 joined #gluster
01:08 fubada hi, is DNS-RR a valid way to mount gluster shares?
01:08 fubada I have 4 hosts bound to gluster.domain.com using dns-rr
01:21 huleboer joined #gluster
01:37 and` joined #gluster
01:37 and` joined #gluster
01:45 _Bryan_ joined #gluster
01:47 T3 joined #gluster
01:55 LordFolken to replace a brick on a disperse volume, is this the correct syntax
01:55 LordFolken volume replace-brick datapoint DarkStar:/glusterfs DarkChild:/glusterfs commit force
02:06 hagarth joined #gluster
02:14 haomaiwa_ joined #gluster
02:18 RameshN joined #gluster
02:19 T3 joined #gluster
02:21 bharata-rao joined #gluster
02:22 nangthang joined #gluster
02:25 harish joined #gluster
02:39 verdurin joined #gluster
02:48 ilbot3 joined #gluster
02:48 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:50 soumya_ joined #gluster
02:56 MacWinner joined #gluster
03:08 spandit joined #gluster
03:31 lalatenduM joined #gluster
03:45 kanagaraj joined #gluster
03:46 itisravi joined #gluster
03:53 nbalacha joined #gluster
03:56 hchiramm joined #gluster
03:56 bala joined #gluster
03:58 shubhendu joined #gluster
03:59 RameshN joined #gluster
04:11 ira joined #gluster
04:12 atinmu joined #gluster
04:14 rafi joined #gluster
04:15 nishanth joined #gluster
04:23 LordFolken so I executed the above command:
04:23 LordFolken gluster> volume replace-brick datapoint DarkStar:/glusterfs DarkChild:/glusterfs commit force
04:23 LordFolken volume replace-brick: success: replace-brick commit successful
04:23 LordFolken however
04:24 LordFolken root@DarkStar:~# ls -al /mnt
04:24 LordFolken ls: cannot access /mnt: Input/output error
04:29 Manikandan joined #gluster
04:31 LordFolken only a test system so nothing important on the volume yet
04:31 LordFolken it created /glusterfs on DarkChild but there is no data in it
04:37 ppai joined #gluster
04:41 anoopcs joined #gluster
04:53 meghanam joined #gluster
04:54 sakshi joined #gluster
04:57 fubada purpleidea: hi! curious to know if you had the chance to work on the vrrp folder purging in your awesome puppet module?
05:01 jiffin joined #gluster
05:02 gem joined #gluster
05:03 smohan joined #gluster
05:08 anil joined #gluster
05:10 deepakcs joined #gluster
05:13 ndarshan joined #gluster
05:16 kshlm joined #gluster
05:23 sakshi joined #gluster
05:25 sadbox joined #gluster
05:29 T3 joined #gluster
05:33 kdhananjay joined #gluster
05:35 hagarth joined #gluster
05:37 pp joined #gluster
05:43 nbalacha joined #gluster
05:43 nshaikh joined #gluster
05:51 ramteid joined #gluster
05:52 sadbox joined #gluster
05:55 dusmant joined #gluster
06:00 soumya_ joined #gluster
06:06 nangthang joined #gluster
06:21 karnan joined #gluster
06:27 lalatenduM joined #gluster
06:30 nbalacha joined #gluster
06:32 rjoseph|afk joined #gluster
06:32 dusmant joined #gluster
06:32 atalur joined #gluster
06:35 jobewan joined #gluster
06:35 glusterbot News from newglusterbugs: [Bug 1151696] mount.glusterfs fails due to race condition in `stat` call <https://bugzilla.redhat.com/show_bug.cgi?id=1151696>
06:50 nbalacha joined #gluster
06:52 aravindavk joined #gluster
06:52 kumar joined #gluster
06:55 edualbus joined #gluster
06:57 rjoseph|afk joined #gluster
07:01 gildub joined #gluster
07:05 anrao joined #gluster
07:09 andreask joined #gluster
07:14 klaas joined #gluster
07:21 nshaikh joined #gluster
07:22 jtux joined #gluster
07:25 rjoseph|afk joined #gluster
07:25 andreask joined #gluster
07:31 atalur joined #gluster
07:34 mbukatov joined #gluster
07:44 tanuck joined #gluster
07:47 purpleidea fubada: awesome (re: fact)
07:48 purpleidea fubada: i did not, sorry, i'll hopefully be hacking on it this week, but i'm a bit busy because i've got some big conference presentations coming up, and i've got to be ready for those first. keep bugging me though :)
08:10 kovshenin joined #gluster
08:11 nbalacha joined #gluster
08:19 deniszh joined #gluster
08:19 deniszh left #gluster
08:20 geaaru joined #gluster
08:21 deniszh joined #gluster
08:28 kumar joined #gluster
08:29 jvandewege_ joined #gluster
08:34 lalatenduM joined #gluster
08:34 jvandewege joined #gluster
08:36 LordFolken anybody got any suggestions as to what I did wrong
08:37 jvandewege_ joined #gluster
08:39 khanku joined #gluster
08:39 jvandewege_ joined #gluster
08:40 gothos fubada: DNS RR is quite valid :)
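For a native FUSE mount, the hostname given at mount time is only used to fetch the volume file; the client then connects to the bricks listed in it, so a round-robin name works fine. A minimal sketch with hypothetical names (volume myvol, a second node node2.domain.com); backupvolfile-server is optional and, depending on the client version, may be spelled backup-volfile-servers:

    mount -t glusterfs gluster.domain.com:/myvol /mnt/myvol
    # or in /etc/fstab:
    gluster.domain.com:/myvol  /mnt/myvol  glusterfs  defaults,_netdev,backupvolfile-server=node2.domain.com  0 0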
08:43 fsimonce joined #gluster
08:44 jvandewege joined #gluster
08:57 harish joined #gluster
08:59 xavih LordFolken: there is a bug that caused this problem. It's not merged yet, but if you can try a patch, this should solve the problem: http://review.gluster.org/9407/
08:59 xavih LordFolken: it's also highly recommended to upgrade to 3.6.2 for dispersed volumes because it solves important problems
09:05 atalur joined #gluster
09:06 atalur joined #gluster
09:06 LordFolken xavih: cheers
09:06 liquidat joined #gluster
09:07 LordFolken I'm using glusterfs from the ubuntu ppa which is 6.1
09:07 LordFolken hmm
09:07 LordFolken sorry 3.6.1
09:09 LordFolken just read it'll be updated 'soonish' thanks again
09:15 jvandewege_ joined #gluster
09:16 tanuck_ joined #gluster
09:17 jvandewege joined #gluster
09:22 kovsheni_ joined #gluster
09:22 jvandewege__ joined #gluster
09:27 jvandewege joined #gluster
09:29 anrao joined #gluster
09:29 nangthang joined #gluster
09:30 RameshN joined #gluster
09:34 TvL2386 joined #gluster
09:45 Norky joined #gluster
09:47 overclk joined #gluster
09:47 maveric_amitc_ joined #gluster
09:49 _nixpanic joined #gluster
09:49 _nixpanic joined #gluster
09:52 jvandewege_ joined #gluster
09:53 Fen1 joined #gluster
09:55 jvandewege joined #gluster
09:56 nangthang joined #gluster
09:56 T0aD joined #gluster
10:01 eightyeight joined #gluster
10:03 owlbot` joined #gluster
10:08 anrao joined #gluster
10:10 shaunm joined #gluster
10:12 khanku joined #gluster
10:25 Slashman joined #gluster
10:25 rjoseph|afk joined #gluster
10:26 xrsa joined #gluster
10:27 ppai joined #gluster
10:30 [Enrico] joined #gluster
10:34 jvandewege joined #gluster
10:40 shubhendu joined #gluster
10:49 Norky joined #gluster
10:50 harish joined #gluster
10:55 DV joined #gluster
11:05 glua joined #gluster
11:06 glua hi everyone
11:06 glua is it possible to set up glusterfs without a separate partition?
11:06 glusterbot News from resolvedglusterbugs: [Bug 1049727] Dist-geo-rep : volume won't be able to stop untill the geo-rep session is deleted. <https://bugzilla.redhat.com/show_bug.cgi?id=1049727>
11:08 haomaiwa_ joined #gluster
11:08 ctria joined #gluster
11:09 maveric_amitc_ joined #gluster
11:09 rjoseph|afk joined #gluster
11:13 purpleidea glusterbot: yep
11:13 glusterbot purpleidea: I do not know about 'yep', but I do know about these similar topics: 'yum'
11:13 purpleidea glua: yep
11:13 purpleidea glusterbot: sorry :P
11:14 purpleidea glua: any folder will do, as long as it supports xattrs. xfs is recommended. ext4, btrfs should probably also work though.
11:15 glua purpleidea: can i limit the size of the volume if the folder is on the root partition of the server?
11:16 purpleidea glua: good question. probably not directly with gluster afaik, but i could be wrong. check the quota options maybe, but what you really want is more of a filesystem level quota. try that instead.
11:16 purpleidea glua: alternatively you could make a fake mount with a backing file, but it's probably a waste of performance
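One way to get the filesystem-level quota suggested above, assuming an XFS root mounted with project quotas enabled (prjquota) and a hypothetical brick directory /export/brick1 capped at 50G; names and sizes are illustrative only:

    echo "42:/export/brick1" >> /etc/projects      # project id -> directory
    echo "brick1:42"         >> /etc/projid        # project name -> id
    xfs_quota -x -c 'project -s brick1' /          # tag the tree with the project
    xfs_quota -x -c 'limit -p bhard=50g brick1' /  # hard block limit for the project
    xfs_quota -x -c 'report -p' /                  # verify usage against the limit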
11:17 glua purpleidea: i have to server which data folder is synced with csync2, but this setup is to slow because the files change more often then expected, so i'd like to create a mirrored gluster volume on these server and mount them locally, will this work?
11:18 shubhendu joined #gluster
11:18 glua s/to/two
11:18 purpleidea "i have to server" ?
11:23 glua purpleidea: "two servers", sry ;)
11:25 purpleidea glua: it's not entirely clear what you're describing, sorry
11:25 purpleidea oh wait
11:25 purpleidea i think i get it
11:25 purpleidea glua: yes you can mount a gluster volume on more than one computer to use as a shared filesystem
11:25 purpleidea of course :)
11:25 purpleidea unless you are asking a different question
11:27 glua purpleidea: i just wanted to be sure, that the root partition doesn't get hurt, if i create the gluster volume on it, without using a dedicated partition for the volume
11:27 purpleidea dont mount the volume on /
11:28 purpleidea glua: you need to experiment in a safe testing environment, and i think this will answer most of your questions. come back after you've tried gluster :)
11:29 anrao joined #gluster
11:30 glua purpleidea: the gluster volume will be mounted on /var/www, but the volume would be created on /export/volume1 (folder within / partition) so client and server runs on the same machine
11:31 purpleidea just remember not to try and access files directly from the volume backing store (in your case this looks like /export/volume1)
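A sketch of the layout being described, with hypothetical hostnames (web1, web2) and volume name (wwwvol); gluster normally refuses to create a brick on the root filesystem, so 'force' is needed, and all access then goes through the mounted volume rather than /export/volume1 itself:

    gluster peer probe web2
    gluster volume create wwwvol replica 2 web1:/export/volume1 web2:/export/volume1 force
    gluster volume start wwwvol
    mount -t glusterfs localhost:/wwwvol /var/www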
11:32 glua purpleidea: i think you answered my question, thanks :) btw. i already did some testing with gluster, but that was some years ago and back then i always used a separate partition, which is not possible in my current setup
11:32 purpleidea yw! gl
11:32 purpleidea @next
11:32 glusterbot purpleidea: Error: You must be registered to use this command. If you are already registered, you must either identify (using the identify command) or add a hostmask matching your current hostmask (using the "hostmask add" command).
11:32 purpleidea ,,(next)
11:32 glusterbot Another satisfied customer... NEXT!
11:32 purpleidea glusterbot: sometimes you annoy me
11:37 ricky-ti1 joined #gluster
11:37 glusterbot News from newglusterbugs: [Bug 1186714] libgfapi: Add support to set lkowners in the lock requests sent to the server <https://bugzilla.redhat.com/show_bug.cgi?id=1186714>
11:37 glusterbot News from newglusterbugs: [Bug 1186713] syncop: Support to set and pass lkowner to GlusterFS server <https://bugzilla.redhat.com/show_bug.cgi?id=1186713>
11:40 nangthang joined #gluster
11:42 side_control joined #gluster
11:46 rjoseph|afk joined #gluster
11:47 anil joined #gluster
11:48 anrao joined #gluster
11:58 LebedevRI joined #gluster
11:58 Norky joined #gluster
11:58 dusmant joined #gluster
12:01 Norky joined #gluster
12:02 jdarcy joined #gluster
12:02 nbalacha joined #gluster
12:03 soumya__ joined #gluster
12:04 Norky joined #gluster
12:05 kkeithley1 joined #gluster
12:05 meghanam_ joined #gluster
12:07 meghanam_ joined #gluster
12:08 kanagaraj joined #gluster
12:11 itisravi_ joined #gluster
12:12 nbalacha joined #gluster
12:14 itisravi_ joined #gluster
12:14 shubhendu joined #gluster
12:14 Norky joined #gluster
12:15 T3 joined #gluster
12:19 tanuck joined #gluster
12:20 timbyr_ joined #gluster
12:26 Norky joined #gluster
12:27 rjoseph|afk joined #gluster
12:31 Norky joined #gluster
12:35 T3 joined #gluster
12:37 bene2 joined #gluster
12:39 Norky joined #gluster
12:39 ira joined #gluster
12:42 Norky joined #gluster
12:45 badone joined #gluster
12:48 vikumar joined #gluster
12:50 lalatenduM joined #gluster
12:51 Folken__ joined #gluster
12:54 B21956 joined #gluster
12:54 Fen1 joined #gluster
12:55 John_HPC joined #gluster
12:56 Norky joined #gluster
12:56 anoopcs joined #gluster
12:57 doekia joined #gluster
12:59 anoopcs joined #gluster
13:12 fattaneh joined #gluster
13:13 fattaneh left #gluster
13:18 hagarth joined #gluster
13:18 rjoseph|afk joined #gluster
13:19 aravindavk joined #gluster
13:24 John_HPC any news on the following bug? I still seem to have duplicate directories. Do i need to do another rebalance fix-layout?
13:24 John_HPC bug 1163161
13:24 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1163161 high, high, 3.6.2, skoduri, POST , With afrv2 + ext4, lookups on directories with large offsets could result in duplicate/missing entries
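For reference, re-running the layout fix is done through the rebalance command; a sketch with VOLNAME as a placeholder:

    gluster volume rebalance VOLNAME fix-layout start
    gluster volume rebalance VOLNAME status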
13:25 bennyturns joined #gluster
13:40 anil joined #gluster
13:58 badone_ joined #gluster
14:02 wkf joined #gluster
14:06 jmarley joined #gluster
14:08 Norky joined #gluster
14:08 rjoseph|afk joined #gluster
14:14 Norky_ joined #gluster
14:17 badone_ joined #gluster
14:22 jriano joined #gluster
14:23 DV joined #gluster
14:24 maveric_amitc_ joined #gluster
14:27 jriano joined #gluster
14:34 nbalacha joined #gluster
14:34 jvandewege_ joined #gluster
14:35 meghanam_ joined #gluster
14:37 virusuy joined #gluster
14:37 virusuy joined #gluster
14:37 glusterbot News from resolvedglusterbugs: [Bug 1158088] Quota utilization not correctly reported for dispersed volumes <https://bugzilla.redhat.com/show_bug.cgi?id=1158088>
14:37 glusterbot News from resolvedglusterbugs: [Bug 1159484] ls -alR can not heal the disperse volume <https://bugzilla.redhat.com/show_bug.cgi?id=1159484>
14:37 glusterbot News from resolvedglusterbugs: [Bug 1159498] when replace one brick on disperse volume, ls sometimes goes wrong <https://bugzilla.redhat.com/show_bug.cgi?id=1159498>
14:37 glusterbot News from resolvedglusterbugs: [Bug 1161066] A disperse 2 x (2 + 1) = 6 volume, kill two glusterfsd program, ls  mountpoint abnormal. <https://bugzilla.redhat.com/show_bug.cgi?id=1161066>
14:37 glusterbot News from resolvedglusterbugs: [Bug 1161885] Possible file corruption on dispersed volumes <https://bugzilla.redhat.com/show_bug.cgi?id=1161885>
14:37 glusterbot News from resolvedglusterbugs: [Bug 1170515] Change licensing of disperse to dual LGPLv3/GPLv2 <https://bugzilla.redhat.com/show_bug.cgi?id=1170515>
14:37 glusterbot News from resolvedglusterbugs: [Bug 1170954] Fix mutex problems reported by coverity scan <https://bugzilla.redhat.com/show_bug.cgi?id=1170954>
14:37 glusterbot News from resolvedglusterbugs: [Bug 1170959] EC_MAX_NODES is defined incorrectly <https://bugzilla.redhat.com/show_bug.cgi?id=1170959>
14:47 polychrise joined #gluster
14:52 dgandhi joined #gluster
14:53 dgandhi joined #gluster
14:53 [Enrico] joined #gluster
14:57 shubhendu joined #gluster
14:58 theron joined #gluster
14:59 theron joined #gluster
15:00 bene2 joined #gluster
15:07 ekuric joined #gluster
15:17 wushudoin joined #gluster
15:20 Manikandan joined #gluster
15:22 Pupeno joined #gluster
15:28 hagarth joined #gluster
15:32 elico joined #gluster
15:33 plarsen joined #gluster
15:33 plarsen joined #gluster
15:38 chirino joined #gluster
15:44 roost_ joined #gluster
15:46 Fen1 Hello everyone ! :) When you have 2 glusterfs servers (replica 2) and 1 client, and for some reason you change a file in the volume, not from the client side but from the server side, i get strange behaviour. is there something to avoid things like this ?
15:53 badone__ joined #gluster
15:53 JoeJulian Fen1: You don't change files "server-side". If you need access to your volume on a server, mount a client.
15:54 Fen1 JoeJulian : Of course i know that, but i want to know what glusterfs will do if there is a corruption server-side :)
15:56 JoeJulian It will happily serve corrupted data.
15:58 Fen1 ok, then we entirely rely on filesystem and hardware resiliency, distributed to multiple nodes.
15:59 semiosis JoeJulian: today is the last day to submit a talk for DevNation. DO IT!
16:01 lalatenduM joined #gluster
16:01 JoeJulian I don't have any prepared, nor any commitment from anyone to have anything paid for.
16:02 bala joined #gluster
16:02 T3 joined #gluster
16:03 semiosis they'll cover a night in the hotel
16:05 dusmant joined #gluster
16:06 John_HPC joined #gluster
16:07 T3 joined #gluster
16:16 chirino joined #gluster
16:26 bala joined #gluster
16:27 bene2 joined #gluster
16:31 Norky joined #gluster
16:37 andreask left #gluster
16:39 atinmu joined #gluster
16:41 nangthang joined #gluster
16:47 jobewan joined #gluster
16:51 bala joined #gluster
16:52 bennyturns joined #gluster
16:54 kanagaraj joined #gluster
16:54 gem joined #gluster
16:57 deniszh left #gluster
16:58 neofob joined #gluster
17:03 hagarth joined #gluster
17:06 deniszh joined #gluster
17:06 tetreis joined #gluster
17:08 tetreis would you guys give me tips on getting facts that prove I have a connection issue between the gluster node servers? How do you guys validate your network is healthy?
17:08 glusterbot News from newglusterbugs: [Bug 1184626] Community Repo RPMs don't include attr package as a dependency <https://bugzilla.redhat.com/show_bug.cgi?id=1184626>
17:09 JoeJulian I don't prove it. I just blame networking when something breaks and make them prove it. ;)
17:09 JoeJulian But really... you can actually telnet to the gluster ports to check for firewalls.
17:11 JoeJulian For performance, or anything more complicated, wireshark.
17:12 John_HPC you can also use tools like pingpath
17:12 John_HPC traceroute
17:12 John_HPC err tracepath, not ping path
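A few concrete checks along these lines: glusterd listens on TCP 24007 and each brick's port is shown in volume status (49152 and up on recent releases); hostnames and volume name are placeholders:

    gluster volume status myvol      # lists the TCP port of every brick
    telnet gluster2 24007            # can we reach glusterd on the peer?
    nc -zv gluster2 49152            # probe a brick port reported by 'volume status'
    tracepath gluster2               # path and MTU problems between nodes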
17:14 lpabon joined #gluster
17:14 tetreis blaming networking guys is a nice strategy haha
17:15 bala joined #gluster
17:15 ricky-ticky1 joined #gluster
17:17 MacWinner joined #gluster
17:21 nishanth joined #gluster
17:28 calisto joined #gluster
17:38 t0ma joined #gluster
17:39 t0ma hello
17:39 glusterbot t0ma: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
17:40 lpabon joined #gluster
17:40 PeterA joined #gluster
17:40 t0ma I want to have multiple shares with quotas, should I create multiple lvs and have one brick per lv? Or should I just create one large lv with one large brick on top that has quota on subdirectories?
17:41 t0ma For example, I want one share for the blog-sites and one for the CMS systems storage and so on. How would I configure gluster for this?
17:44 PeterA i am doing quota on subdirs
17:45 PeterA but i've been getting a quota mismatch issue, which i've been told will be fixed soon in 3.5
17:46 t0ma ok, i read that when mounting the share from a client you see the total space of the whole brick and not the quota you have?
17:49 JoeJulian t0ma: I used lvm and carved out bricks for multiple volumes.
17:50 t0ma JoeJulian: ok did it work ok?
17:50 JoeJulian Worked great.
17:51 JoeJulian And if I needed to grow my bricks, it was as easy as allocating more extents and extending the filesystem.
17:54 t0ma JoeJulian: ok that was the approach I was thinking about
17:54 JoeJulian With the issues PeterA's been having, that's still the way I would do it.
17:56 t0ma ok
17:56 Rapture joined #gluster
17:56 t0ma you have to grow the brick in some way after growing the lv?
17:57 t0ma or the brick will just use all the space available?
17:57 JoeJulian yes. xfs_growfs, or the tool for whatever filesystem you use.
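A minimal sketch of that grow step, assuming an LVM-backed XFS brick at a hypothetical /dev/vg0/brick1 mounted on /bricks/brick1:

    lvextend -L +50G /dev/vg0/brick1   # allocate more extents to the LV
    xfs_growfs /bricks/brick1          # grow XFS to fill the new space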
17:57 PeterA ya…with the way i do subdir and nfs export subdirs…i can increase or decrease quota to control amount of storage, but Joe is correct on still issues on quota....
17:58 t0ma when is 3.5 due?
17:58 JoeJulian @latest
17:58 glusterbot JoeJulian: The latest version is available at http://download.gluster.org/pub/gluster/glusterfs/LATEST/ . There is a .repo file for yum or see @ppa for ubuntu.
17:58 PeterA 3.5.2 is up
17:58 PeterA but still having the quota issue
17:58 t0ma oh ok
17:58 PeterA devs told me last night that new patches will fix the quota mismatch issue
17:58 t0ma maybe i'll run some tests in vagrant boxes
17:59 gem joined #gluster
17:59 PeterA https://bugzilla.redhat.com/show_bug.cgi?id=917901
17:59 glusterbot Bug 917901: urgent, high, ---, vmallika, ASSIGNED , Mismatch in calculation for quota directory
17:59 t0ma is there any benefit to running subdirs with quotas once the quota bug gets fixed? other than it being fewer commands to change a quota?
18:00 PeterA much easier to shrink the quota?
18:00 t0ma yea but other than that there is none i guess
18:00 PeterA much easier to provision a share?
18:01 t0ma if you have one large brick and want to add more disk to the pool you need to add the same size if you want to do replication
18:01 JoeJulian reduced management complexity
18:01 JoeJulian You should, yes.
18:02 t0ma ok guys, thanks a lot for the information!
18:06 jmarley joined #gluster
18:11 neofob joined #gluster
18:27 Gill joined #gluster
18:33 t0ma left #gluster
18:39 chirino joined #gluster
18:40 ws2k3_ joined #gluster
18:46 Pupeno_ joined #gluster
18:47 ws2k3_ joined #gluster
18:50 John_HPC JoeJulian: I brought this up earlier; I think I am affected by bug 1163161. I tried a fix-layout and it didn't work. I did upgrade to 3.6.2-1 today and started a fix-layout, but that will take about 20 days to finish.
18:50 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1163161 high, high, 3.6.2, skoduri, POST , With afrv2 + ext4, lookups on directories with large offsets could result in duplicate/missing entries
18:50 John_HPC Is there anything else I should be doing? Here is what my logs look like: http://paste.ubuntu.com/9903927/   http://paste.ubuntu.com/9903930/
18:56 JoeJulian looking...
18:56 Rapture do I need to be concerned about this log entry "No data available occurred while creating symlinks"
18:56 Rapture I'm seeing it repeated over and over every few seconds
18:57 JoeJulian The first one with the  "Fix layout failed for" series, what log does that come from? I would guess it's a rebalance log.
18:58 JoeJulian The second one got reformatted funny. Is that glustershd?
18:58 JoeJulian or is that rebalance as well?
18:59 John_HPC rebalance
18:59 John_HPC i got millions of those
19:00 John_HPC here is a bit more: http://paste.ubuntu.com/9922269/
19:02 JoeJulian That does look odd. Does "/mdc/Ft/Schu4/Chemicals/PrimaryScreen/20120315a/20120315a/201105415/2012-03-27/1668/H23_descriptors" exist on all your bricks? Check the ,,(extended attributed) to ensure they all have the same gfid. Also make sure they're all actually directories and not files or symlinks or something.
19:02 glusterbot I do not know about 'extended attributed', but I do know about these similar topics: 'extended attribute', 'extended attributes'
19:02 JoeJulian @extended attributes
19:02 glusterbot JoeJulian: (#1) To read the extended attributes on the server: getfattr -m .  -d -e hex {filename}, or (#2) For more information on how GlusterFS uses extended attributes, see this article: http://hekafs.org/index.php/2011/04/glusterfs-extended-attributes/
19:03 JoeJulian (that would be on the bricks, of course)
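As a worked example of that check, run against the brick path on every brick server (not the client mount); the directory here is a placeholder:

    getfattr -m . -d -e hex /export/brick1/path/to/dir
    # trusted.gfid must be identical on every brick;
    # trusted.glusterfs.dht is expected to differ per brick, since it holds that brick's hash range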
19:08 edong23 joined #gluster
19:17 redbeard joined #gluster
19:22 John_HPC In the case of that example, its a folder that exists on every brick
19:22 John_HPC trusted.gfid is the same
19:23 John_HPC trusted.glusterfs.dht is different per server pair
19:23 John_HPC trusted.glusterfs.dht is different per brick
19:24 John_HPC on the same server
19:25 ekuric left #gluster
19:52 John_HPC OK
19:53 John_HPC further looking I did miss something
19:53 John_HPC the files have different dates
19:53 Pupeno joined #gluster
19:53 Pupeno joined #gluster
19:53 John_HPC but same size and sum
19:53 John_HPC trusted.afr doesn't show any different
19:53 John_HPC like they need to be healed, but gluster doesn't recognize they need to be healed
19:54 JoeJulian files?
19:54 John_HPC Yes
19:54 JoeJulian I thought we were looking at the directories.
19:55 John_HPC the directories as well are showing different dates
19:57 John_HPC that is between gluster01 and gluster02 servers; where 02 is a replicate of 01
19:57 John_HPC almost looks like I need to force gluster02 to heal from gluster01
20:01 JoeJulian Odd.
20:01 John_HPC what would be the correct way to have gluster02 rebuild its data?
20:01 Pupeno_ joined #gluster
20:02 JoeJulian The cli method is with "heal...full", but you can test to see of that would work by picking a directory and just doing "stat *" from the client.
20:02 JoeJulian Check the client logs from there and look for self-heal log entries.
20:03 JoeJulian If successful, it should say so and your files should once again match.
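A sketch of both options, with VOLNAME and the mount path as placeholders; stat-ing files through a client mount triggers a self-heal check on each one:

    gluster volume heal VOLNAME full     # server-side: crawl and heal everything
    gluster volume heal VOLNAME info     # list entries still needing heal
    find /mnt/VOLNAME/some/dir -exec stat {} \; > /dev/null   # client-side, per directory tree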
20:10 John_HPC stat appears to put the current date/time on both replicates
20:15 ghenry_ joined #gluster
20:19 virusuy hi gents
20:20 roost joined #gluster
20:22 Pupeno joined #gluster
20:24 PeterA1 joined #gluster
20:31 bene2 joined #gluster
20:42 John_HPC JoeJulian: thanks for your help.
20:43 lpabon joined #gluster
20:44 deniszh joined #gluster
20:54 Pupeno_ joined #gluster
20:57 ghenry_ joined #gluster
21:00 ghenry joined #gluster
21:04 Pupeno_ joined #gluster
21:06 PeterA joined #gluster
21:06 PeterA "/dev/fuser" does not exist :(
21:07 PeterA on a RHEL glusterfs client
21:07 PeterA How do i create it?
21:09 raatti joined #gluster
21:11 PeterA when i install glusterfs-fuse-3.5.2-1.el5.x86_64.rpm on RHEL5, /dev/fuse did not get created
21:12 JoeJulian /dev/fuse comes from the fuse kernel module
21:12 PeterA oh....
21:12 PeterA from RHEL 5?
21:12 JoeJulian from the kernel
21:13 PeterA hmm
21:13 JoeJulian I don't remember if those old kernels had it built in or not.
21:13 PeterA libconfuse.x86_64                        2.5-4                  installed
21:13 JoeJulian Try, "modprobe fuse"
21:13 PeterA # modprobe fuse
21:14 PeterA FATAL: Module fuse not found.
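A quick way to see whether a kernel has fuse at all (built in versus a loadable module) — a generic sketch, nothing gluster-specific:

    grep -i CONFIG_FUSE_FS /boot/config-$(uname -r)   # =y built in, =m module, absent means no fuse
    modprobe fuse && lsmod | grep fuse
    ls -l /dev/fuse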
21:14 JoeJulian Is that the latest kernel?
21:14 PeterA # uname -a
21:14 PeterA Linux jobserverprod001.bo2.shopzilla.sea 2.6.18-8.el5 #1 SMP Fri Jan 26 14:15:14 EST 2007 x86_64 x86_64 x86_64 GNU/Linux
21:15 JoeJulian I haven't used el5 in like 6 years and I wouldn't remember kernel versions anyway.
21:15 PeterA i think that's latest for RHEL5
21:16 semiosis wow 2007!
21:17 semiosis time to upgrade
21:17 PeterA ya i know
21:23 JoeJulian PeterA: yum install fuse
21:23 PeterA ihttp://pastie.org/9869298
21:23 PeterA http://pastie.org/9869298
21:24 JoeJulian Odd. It's in the rhel 5 source rpms.
21:24 JoeJulian Contact Red Hat.
21:25 PeterA how come i didn't have to do that on centos?
21:25 krueckel joined #gluster
21:25 * JoeJulian shrugs
21:25 T3 Does anyone here monitor and automatically fix split-brains? In my scenario, with our rules, I can think of a way of doing that. Just asking if someone is already doing it, or if there would be anything to be worried about.
21:25 JoeJulian Because it's in the source rpms for RHEL?
21:26 JoeJulian T3: I think it's a much safer idea to prevent split-brain. The devs are working on rule-based selection, though.
21:27 JoeJulian PeterA: Does the .sea TLD refer to my neighborhood, Seattle?
21:27 T3 JoeJulian: completely agree. I'm working with network guys. Having a healthy network is enough?
21:27 T3 monitoring would be good anyways.
21:28 JoeJulian I totally agree with monitoring. Split-brain should be so seldom that it's more expensive to develop around it than it is to address it when it happens.
21:29 semiosis by definition, split brain is the case when it *can't* be resolved automatically
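On the prevention side, the usual knobs are the quorum options; a sketch for a replicated volume, with VOLNAME as a placeholder (server quorum only makes sense with three or more peers):

    gluster volume set VOLNAME cluster.quorum-type auto           # client side: writes need a majority of replicas
    gluster volume set VOLNAME cluster.server-quorum-type server  # server side: bricks stop if glusterd loses quorum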
21:30 T3 cool
21:31 T3 thank you, guys
21:33 JoeJulian semiosis, T3: If you guys want to add your input, Pranith seems interested in this again. http://www.gluster.org/pipermail/gluster-users/2015-January/020376.html
21:34 semiosis NSR
21:34 semiosis http://blog.gluster.org/2014/04/new-style-replication/
21:35 semiosis as jdarcy says at the end, "That's the plan to (almost completely) eliminate the split-brain problems that have been the bane of our users' existence, while also adding flexibility and improving performance in most cases."
21:37 T3 will read and reply
21:37 T3 thanks for being a welcoming community :P
21:46 MacWinner joined #gluster
21:59 sage_ joined #gluster
22:00 DJClean joined #gluster
22:02 DJClean joined #gluster
22:07 dgandhi joined #gluster
22:10 neofob joined #gluster
22:24 plarsen joined #gluster
22:25 neofob left #gluster
22:30 siel joined #gluster
22:38 cfeller what is the status of glusterfs-nagios? it seems that the repo here: http://download.gluster.org/pub/gluster/glusterfs-nagios/ hasn't been updated since creation.  There are a few bugs that I ran into while trying to use the packages from there.  Ultimately I tracked down the bugzillas, pulled the srpms from RHS (most recent srpms are from December), and rebuilt them, and now the plugin works great.
22:39 cfeller just kind of surprised that the patches weren't pushed back upstream (or if they were, they don't seem to have been posted and/or packaged into the repo).
22:39 calum_ joined #gluster
22:42 JoeJulian Check the ,,(forge)
22:42 glusterbot http://forge.gluster.org
22:42 JoeJulian But yes, if the changes are not upstream, someone needs to get that done over there. Their mantra is upstream first.
22:49 cfeller hmm... glusterfs-nagios doesn't seem to be a project on the forge. (There is an older nagios project on there, but it hasn't been active in over a year.)
22:54 JoeJulian cfeller: Is it this one? https://forge.gluster.org/nagios-monitoring/nagios-monitoring
22:55 cfeller That is the older one that looks like it hasn't been active in over a year.
22:59 ws2k3_ joined #gluster
23:03 JoeJulian cfeller: Well, what I would do is email Sahina Bose  <sabose@redhat.com> and ask him where the upstream repo is.
23:03 cfeller I don't know if this is useful at all, but the packages in that glusterfs-nagios repo I mentioned above appear to have been built on Red Hat servers, unlike most of the other packages, which have a Fedora signature: http://ur1.ca/jkhhj  I don't know if that gives any clues as to where the package lives.
23:03 JoeJulian Yeah, it's not in the fedora repo, nor is it built on koji.
23:04 cfeller Yup, koji was the first place I checked.
23:04 cfeller I'll email him and let you know what he says and/or if we can get the repo updated (did Sahina create the repo?).
23:04 JoeJulian No, Bala created it. But Sahina's been the most active.
23:05 JoeJulian According to the changelog
23:06 siel joined #gluster
23:14 lanning joined #gluster
23:16 siel joined #gluster
23:20 jmarley joined #gluster
23:21 Gill joined #gluster
23:22 deniszh joined #gluster
23:24 gildub joined #gluster
23:31 wkf joined #gluster
23:34 LordFolken joined #gluster
23:41 atoponce joined #gluster
23:42 atoponce so, i seem to have gotten myself into a split-brain with 3.2 on debian stable. not sure how to get out of it
23:42 atoponce this doesn't seem to create the mountpoints as documented: https://github.com/joejulian/glusterfs-splitbrain
23:43 atoponce looking at http://blog.gluster.org/2012/06/healing-split-brain/, it links to a Github repo, but 'healer.py' does not work as documented
23:44 atoponce returns 'RuntimeError "text outside volume definition"'
23:49 atoponce hmm. i'm using zfs underneath, and i have good snapshots
23:49 atoponce maybe i should just delete the data from the client mount, and restore the image from a good snapshot on one of the client mount points
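For the record, the usual manual fix is to pick the replica you trust, delete the bad copy from the other brick, and let a client lookup pull the good copy back; on 3.3+ the matching gfid hard link under .glusterfs must be removed too, while 3.2 has no .glusterfs directory. Paths here are hypothetical:

    rm /export/brick1/path/to/file      # on the brick holding the copy you do NOT trust
    stat /mnt/volume/path/to/file       # from a client mount, to trigger the heal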
23:51 Pupeno joined #gluster
23:53 atoponce yeah. i think this will work
23:58 atoponce yay
23:58 atoponce zfs saves me yet again
23:59 atoponce <3
