
IRC log for #gluster, 2015-01-21


All times shown according to UTC.

Time Nick Message
00:08 MugginsM joined #gluster
00:31 calisto joined #gluster
00:59 calisto joined #gluster
01:06 plarsen joined #gluster
01:09 psi_ joined #gluster
01:18 wkf joined #gluster
01:19 Rapture joined #gluster
01:19 coredump joined #gluster
01:35 bala joined #gluster
01:48 _Bryan_ joined #gluster
01:55 bala joined #gluster
02:00 nangthang joined #gluster
02:17 julim joined #gluster
02:25 RameshN joined #gluster
02:31 bharata-rao joined #gluster
02:45 hagarth joined #gluster
02:56 nbalacha joined #gluster
03:14 badone joined #gluster
03:28 gildub joined #gluster
03:28 coredump joined #gluster
03:34 rjoseph joined #gluster
03:44 frankS2 joined #gluster
03:51 fandi joined #gluster
03:52 bala joined #gluster
03:53 RameshN joined #gluster
03:55 smohan joined #gluster
04:04 suman_d joined #gluster
04:09 clane joined #gluster
04:10 shubhendu joined #gluster
04:10 frankS2 joined #gluster
04:16 Manikandan joined #gluster
04:26 diegows joined #gluster
04:26 meghanam joined #gluster
04:36 anoopcs joined #gluster
04:38 harish joined #gluster
04:40 spandit joined #gluster
04:41 nbalacha joined #gluster
04:43 sakshi joined #gluster
04:51 anil joined #gluster
04:51 nbalacha joined #gluster
04:54 anoopcs joined #gluster
04:56 rafi joined #gluster
05:00 nishanth joined #gluster
05:06 gem joined #gluster
05:07 lalatenduM joined #gluster
05:10 anoopcs left #gluster
05:12 soumya joined #gluster
05:13 ndarshan joined #gluster
05:18 anoopcs joined #gluster
05:20 nbalachandran_ joined #gluster
05:21 anoopcs joined #gluster
05:22 atinmu joined #gluster
05:24 anoopcs joined #gluster
05:26 kanagaraj joined #gluster
05:29 hagarth joined #gluster
05:31 dusmant joined #gluster
05:33 anoopcs joined #gluster
05:34 kdhananjay joined #gluster
05:37 bala joined #gluster
05:43 flu_ joined #gluster
05:48 nshaikh joined #gluster
05:49 pp joined #gluster
05:50 raghu joined #gluster
05:50 jiffin joined #gluster
05:56 tom[] joined #gluster
06:01 glusterbot News from resolvedglusterbugs: [Bug 1182514] Force add-brick lead to glusterfsd core dump <https://bugzilla.redhat.com/show_bug.cgi?id=1182514>
06:02 ramteid joined #gluster
06:03 ndarshan joined #gluster
06:13 aravindavk joined #gluster
06:19 kshlm joined #gluster
06:22 kumar joined #gluster
06:26 MacWinner joined #gluster
06:34 dusmant joined #gluster
06:37 hagarth joined #gluster
06:49 mbukatov joined #gluster
06:53 nbalachandran_ joined #gluster
06:56 thangnn_ joined #gluster
07:08 purpleidea joined #gluster
07:08 purpleidea joined #gluster
07:13 deepakcs joined #gluster
07:15 ndarshan joined #gluster
07:19 thangnn_ joined #gluster
07:22 jtux joined #gluster
07:24 dusmant joined #gluster
07:29 atalur joined #gluster
07:30 lezo__ joined #gluster
07:55 bjornar joined #gluster
08:01 aravindavk joined #gluster
08:02 glusterbot News from newglusterbugs: [Bug 1184358] glfs_set_volfile_server() should accept NULL as transport <https://bugzilla.redhat.com/show_bug.cgi?id=1184358>
08:09 rafi joined #gluster
08:15 nishanth joined #gluster
08:16 hagarth joined #gluster
08:16 [Enrico] joined #gluster
08:24 ppai joined #gluster
08:25 calum_ joined #gluster
08:32 glusterbot News from newglusterbugs: [Bug 1166278] backport fix for bug 1010241 to 3.4 <https://bugzilla.redhat.com/show_bug.cgi?id=1166278>
08:36 fsimonce joined #gluster
08:36 atalur joined #gluster
08:37 liquidat joined #gluster
08:40 Slashman joined #gluster
08:45 ndarshan joined #gluster
08:46 rtalur_ joined #gluster
08:47 sakshi_bansal joined #gluster
08:48 shubhendu joined #gluster
08:48 dusmant joined #gluster
08:50 rafi1 joined #gluster
08:51 jiffin1 joined #gluster
08:58 rgustafs joined #gluster
09:02 glusterbot News from newglusterbugs: [Bug 1184366] make sure pthread keys are used only once <https://bugzilla.redhat.com/show_bug.cgi?id=1184366>
09:06 ppai joined #gluster
09:12 thangnn_ joined #gluster
09:19 T0aD joined #gluster
09:20 fandi joined #gluster
09:22 ndarshan left #gluster
09:25 rafi joined #gluster
09:37 karnan joined #gluster
09:42 hagarth joined #gluster
09:53 dusmant joined #gluster
09:59 DV joined #gluster
10:02 glusterbot News from newglusterbugs: [Bug 1184387] error: line 135: Unknown tag:     %filter_provides_in /usr/lib64/glusterfs/%{version}/ <https://bugzilla.redhat.com/show_bug.cgi?id=1184387>
10:02 glusterbot News from newglusterbugs: [Bug 1184393] [SNAPSHOT]: glusterd server quorum check is broken for snapshot commands. <https://bugzilla.redhat.com/show_bug.cgi?id=1184393>
10:05 ppai joined #gluster
10:10 Debloper joined #gluster
10:10 vikumar joined #gluster
10:14 peem joined #gluster
10:17 peem Hi. I'm trying to understand glusterfs to some extent, being new to it, and stumbled upon something in a test case I don't understand and don't know how to fix. Namely, I have a test cluster with two nodes and crashed one (fresh system installed). Now, I can add the brick from the new system to the volume, however I can't get the data synced. Anybody here able to assist?
10:18 deniszh joined #gluster
10:23 flu_ joined #gluster
10:24 mrEriksson peem: There are instructions in the documentation on how to do this. Iirc, it is mostly just adding the brick and triggering healing.
10:26 [Enrico] joined #gluster
10:36 Fen1 joined #gluster
10:38 lalatenduM joined #gluster
10:44 tryggvil joined #gluster
10:48 harish joined #gluster
10:48 dusmant joined #gluster
10:55 soumya joined #gluster
10:59 peem mrEriksson: Yeah, that is the problem, heal says it has finished successfully, yet there are no files in the brick. I would expect the new brick to retrieve files from the other brick in the volume, but this is not happening no matter what I do.
11:01 nishanth joined #gluster
11:02 glusterbot News from newglusterbugs: [Bug 1184417] Segmentation fault in locks while disconnecting client <https://bugzilla.redhat.com/show_bug.cgi?id=1184417>
11:13 chirino joined #gluster
11:21 rafi1 joined #gluster
11:22 pp joined #gluster
11:29 Norky joined #gluster
11:36 gem joined #gluster
11:43 soumya joined #gluster
11:52 overclk joined #gluster
11:53 hagarth joined #gluster
11:53 soumya_ joined #gluster
11:55 ubungu joined #gluster
11:57 ctria joined #gluster
11:58 JustinClift peem: Hmmm, that sounds weird.  Maybe ask on the mailing list if you don't get an in-depth answer here?
11:58 JustinClift It'd be helpful to get the info of the Gluster volume layout too, just in case it's a distributed volume instead of a replicated one or something. :)
11:59 jmarley joined #gluster
11:59 meghanam joined #gluster
12:01 jdarcy joined #gluster
12:03 saltsa joined #gluster
12:04 ubungu joined #gluster
12:08 suman_d joined #gluster
12:10 peem JustinClift: http://apaste.info/q6C
12:10 Manikandan joined #gluster
12:11 JustinClift Well, that's definitely a replicate volume.  It's weird that it's not replicating the files.
12:11 peem JustinClift: Let me know if there's another command that would provide more info.
12:12 JustinClift peem: I'm not that technical any more, so I'm not really the right person to assist here tho :/
12:13 ppai joined #gluster
12:14 peem JustinClift: to explain it more, it is a test scenario where a system was built with two nodes and one was then shut down and rebuilt to simulate a hardware crash. I had to remove the peer files with the old uuid at some point I believe, but I can see that the new uuid is used now and the bricks seem to be fine.
12:14 peem JustinClift: Ah, fair enough :D
12:15 peem any chance that my question will be answered here, or should I rather go straight to the mailing list?
12:15 nbalacha joined #gluster
12:17 kkeithley People like JoeJulian, semiosis, and partner are usually very helpful, but they don't become active until later in the day. You can ask on the mailing list or try again in a few hours.
12:18 partner 2PM here already but been busy with other stuff since morning
12:25 booly-yam-6137 joined #gluster
12:31 ira joined #gluster
12:32 peem kkeithley: thanks for the info
12:32 kkeithley yw
12:32 peem partner: looking forward to when you're free :)
12:34 partner peem: what instructions exactly were you following to replace a crashed server?
12:34 coredump joined #gluster
12:34 ppai joined #gluster
12:38 peem partner: I'm afraid I did not follow any instructions. It is a test case where I'm trying to approach things with logic only, to see if I can work from basic knowledge alone.
12:39 partner so umm you have one brick full of data and the crashed/new one now without any data?
12:39 peem partner: yes, that is correct.
12:40 partner and none of the "heal" commands you've run indicate anything that requires healing? what version exactly are we talking about, btw?
12:40 chirino joined #gluster
12:41 peem partner: glusterfs 3.6.1
12:42 soumya joined #gluster
12:42 partner do you have all the volume data on your new box under /var/lib/glusterd ? though i don't understand the output stating everything would be online and so forth. do you have the brick process up on the new box?
12:42 soumya joined #gluster
12:42 partner it would really help to know what exactly was done, but i can keep asking random things until i fully understand the situation
12:43 partner did you review this already in case it would hint you to perform some additional tasks still: http://www.gluster.org/community/documentation/index.php/Gluster_3.4:_Brick_Restoration_-_Replace_Crashed_Server
12:46 badone joined #gluster
12:47 diegows joined #gluster
12:48 peem partner: I have removed peer from pool with peer detach command, added it back (with all hostnames in fact) with peer probe. At this point I have removed the peers file with old uuid, then attempted to add the brick. It has worked before, but now following it again I'm getting "volume add-brick: failed: Host gluster1 is not in 'Peer in Cluster' state". Must be something I missed. I will follow the guide now to see if it helps.
12:49 ubungu joined #gluster
12:51 partner hmm add-brick? this is different now from replacing a crashed server
12:51 partner or did you remove the brick previously and make the volume a single-brick distributed one?
12:52 partner but the peer thing is the first thing to resolve anyway
12:52 B21956 joined #gluster
12:53 jiffin joined #gluster
12:53 partner something weird here, you said you peered and yet the server isn't in the peer list (what exactly is the status with gluster peer status?)
12:53 peem partner: yes, I did remove the crashed brick first and set replica to 1.... it seems logical that this should work; I'm removing the crashed brick and adding a new one even if I'm re-using the ip and hostname.
12:54 partner ah ok that explains
12:54 partner check the peering first anyway and let's continue from there if it's all fine
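A minimal sketch of the peer check being suggested here, using the gluster1 hostname from the conversation as a stand-in for the rebuilt node:

    # on a surviving node, confirm how the rebuilt peer is seen
    gluster peer status
    # if it is missing or rejected, probe it again by hostname
    gluster peer probe gluster1
    # it should report "Peer in Cluster (Connected)" before any brick work
    gluster peer status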
12:56 nbalacha joined #gluster
12:56 peem partner: I have detached and probed peer again, now add-brick claims that "is already part of a volume"
12:56 suman_d joined #gluster
12:56 peem partner : but it is not.
12:58 partner peem: i assume you didn't wipe that brick so there are attributes still stating it used to belong to a volume
12:58 partner this should help:
12:58 partner setfattr -x trusted.glusterfs.volume-id $brick_path
12:58 partner setfattr -x trusted.gfid $brick_path
12:58 partner rm -rf $brick_path/.glusterfs
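Put together, a hedged sketch of the brick re-use partner is describing, assuming $brick_path is the old brick directory on the rebuilt node and <volname> stands in for the volume:

    # clear the leftover markers that say the directory already belongs to a volume
    setfattr -x trusted.glusterfs.volume-id $brick_path
    setfattr -x trusted.gfid $brick_path
    rm -rf $brick_path/.glusterfs
    # then add it back as the second replica and trigger a full heal
    gluster volume add-brick <volname> replica 2 gluster1:$brick_path
    gluster volume heal <volname> full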
12:59 peem partner: Ah. I did rm .glusterfs, but not the attributes one.
12:59 ubungu joined #gluster
12:59 anoopcs joined #gluster
13:00 Fen1 joined #gluster
13:01 lalatenduM joined #gluster
13:03 glusterbot News from newglusterbugs: [Bug 1184460] (glusterfs-3.6.3) GlusterFS 3.6.3 tracker <https://bugzilla.redhat.com/show_bug.cgi?id=1184460>
13:03 partner peem: this approach might work with 2 bricks, but if you have say 6 of them in replica 2 for example, this is the wrong way of doing it
13:04 meghanam joined #gluster
13:04 shubhendu joined #gluster
13:06 peem partner: Ok, I understand that replica is not the number of bricks... but then I think in a replica 2 there must be at least 2 bricks, so I would be able to remove one of the 6 bricks without any issues, right?
13:07 partner no
13:07 partner you will need to remove 3 bricks at the same time if you go with the "replica 1" command
13:08 partner same as when you add them: a replica 2 volume with 6 bricks can only grow by 2 bricks at a time
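As an illustration, a sketch against a hypothetical replica 2 volume named myvol: while keeping the replica count at 2, bricks are added or removed a whole replica set (two bricks) at a time, and each consecutive pair in the command forms one set:

    # grow the volume by one replica pair
    gluster volume add-brick myvol server7:/data/brick1 server8:/data/brick1
    # shrinking likewise targets a whole pair (start, then check status and commit)
    gluster volume remove-brick myvol server7:/data/brick1 server8:/data/brick1 start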
13:10 peem partner : so, this is confusing ... why does there seem to be no way to actually replace a failed brick other than tricking the system into thinking the new brick is the same as the old one and pretending nothing happened..
13:12 partner i agree it could be more straight-forward
13:12 peem if I have 6 bricks in a replica 2 volume, I need to remove at least two and then add two, right? but that's where potential data loss may occur if I happen to remove the wrong bricks.... There seems to be little in the system to ease that ...
13:13 dusmant joined #gluster
13:13 partner IMO your approach of playing with the replica count when trying to recover from a problem state is wrong
13:14 partner and IMO some "replace-brick" would be really nice, to hint to the volume that we actually want to replace something
13:15 partner but i cannot answer any questions on how or why it's the way it is
13:15 peem partner: Agree. But I did it only because the most logical thing is to replace the failed brick (not possible in a crash situation) and I thought I was forced to change the replica to 1.
13:16 elico joined #gluster
13:17 partner the most logical thing does not guarantee you are doing anything correctly, i'm afraid. it's better to stop and figure out the proper means than to proceed without knowing what to do; that often causes more damage
13:17 peem partner: I followed the raid approach, where you add a new drive as a hot spare and remove the old one even if it is dead after a hardware crash. Maybe this was wrong ...
13:17 partner now if you go back to replica 2 you will need to clean up all the bricks that were previously part of that volume as there are attributes around preventing you from adding them back
13:18 peem partner: Fair enough... I tend to learn and test things by breaking them following some sort of possible scenario and then fixing them from there.
13:18 partner peem: i personally would really want to use replace-brick for this operation, and another one in certain situations, i.e. use the gluster tools to do these sorts of operations
13:19 partner if i for example want to move a brick of a replica to another box (just one, not all of them) the suggested method is to "crash" the brick i want to move and then glue it back up on the new location
13:19 LebedevRI joined #gluster
13:20 partner that just leaves me vulnerable, with one single replica brick until it's rebuilt in the new location, which might take quite a while if there's, say, 20 terabytes of data
13:20 peem partner: I did clean the attributes and then adding the brick succeeded. But I still can't see anything being rebuilt onto that brick...
13:21 partner so you now have again replica 2 volume ?
13:22 booly-yam-6137 joined #gluster
13:22 peem partner: yes, however nothing is being replicated
13:25 partner ok, how does your /var/lib/glusterd look like? is it filled with peer/volume/etc data?
13:27 peem partner: there are some files in it, and it looks similar to what is there on the original node.
13:28 partner have you issued any healing commands?
13:28 partner it's not exactly flooding your stuff over there in the quickest possible way
13:30 peem partner: I didn't run any healing commands just now, but did a heal full and a find on the mount before... There is about 1.1GB of test data, one file of which is 1GB.  I would expect to see something...
13:36 partner find with some stat operation from a client mount (not straight to brick) ?
13:37 bala joined #gluster
13:37 bene2 joined #gluster
13:37 peem partner: yes, on the third server where it is mounted for use, I have run "find . | xargs stat"
13:48 edwardm61 joined #gluster
13:48 ppai joined #gluster
13:50 peem partner: I ran heal full, then on the new node a sync from the old node, then heal full again. All commands ran fine, claiming they were successful, but still none of the files present on the old node (brick) seem to be showing up on the new node (brick)
13:50 hagarth joined #gluster
13:51 ctria joined #gluster
13:51 booly-yam-6137 joined #gluster
13:51 bene_wfh joined #gluster
13:57 partner peem: what do the logs say, especially the glustershd ?
13:59 shubhendu joined #gluster
14:08 klaxa|work joined #gluster
14:09 klaxa|work hi, we have a setup with glusterfs and qemu/kvm with libvirt. we want to use gfapi to increase disk-speed but also be able to migrate the machines and be able to change the glusterfs host during that. is that possible?
14:09 booly-yam-6137 joined #gluster
14:09 klaxa|work we have tested and compared gfapi to our fuse-mount
14:10 klaxa|work migrating while keeping the same glusterfs host also works, but it would be optimal if the machine could also "migrate" the gfapi host
14:13 atalur joined #gluster
14:14 virusuy joined #gluster
14:26 dgandhi joined #gluster
14:30 plarsen joined #gluster
14:51 lalatenduM joined #gluster
14:52 kkeithley joined #gluster
14:53 bala joined #gluster
14:55 booly-yam-6137 joined #gluster
14:57 meghanam joined #gluster
14:58 peem partner: Sorry, had a meeting. content of the glustershd logs on both servers: http://apaste.info/KDs
14:59 jmarley joined #gluster
15:02 Gill joined #gluster
15:07 kanagaraj joined #gluster
15:07 neofob joined #gluster
15:09 bene2 joined #gluster
15:13 lpabon joined #gluster
15:13 tdasilva joined #gluster
15:17 wushudoin joined #gluster
15:19 ricky-ticky joined #gluster
15:19 _dist joined #gluster
15:22 RameshN joined #gluster
15:23 sputnik13 joined #gluster
15:29 ron-slc joined #gluster
15:29 bala joined #gluster
15:36 jdarcy joined #gluster
15:37 dbruhn joined #gluster
15:37 B21956 joined #gluster
15:42 Telsin joined #gluster
15:46 harish joined #gluster
15:56 vimal joined #gluster
15:57 fandi joined #gluster
16:03 glusterbot News from newglusterbugs: [Bug 1184528] Some newly created folders have root ownership although created by unprivileged user <https://bugzilla.redhat.com/show_bug.cgi?id=1184528>
16:04 hagarth joined #gluster
16:05 bala joined #gluster
16:09 meghanam joined #gluster
16:10 bennyturns joined #gluster
16:21 coredump joined #gluster
16:21 codex joined #gluster
16:21 anoopcs joined #gluster
16:22 shubhendu joined #gluster
16:28 bene2 joined #gluster
16:30 lmickh joined #gluster
16:59 MacWinner joined #gluster
17:03 hchiramm joined #gluster
17:07 neofob left #gluster
17:10 lalatenduM joined #gluster
17:12 joey88fslk joined #gluster
17:14 joey88fslk greetings everyone. I've scoured the docs and can't quite seem to find an answer. If I have 4 servers each with 4 drives in them, and I want to make a 2x replica, how do I configure the bricks?  One line in the docs said if you add the bricks to a volume in the wrong order you could end up having a replica on the same node, defeating redundancy.
17:15 JoeJulian ~brick order | joey88fslk
17:15 joey88fslk Is it better to just go raid0 and make one brick per node?
17:15 glusterbot joey88fslk: Replicas are defined in the order bricks are listed in the volume create command. So gluster volume create myvol replica 2 server1:/data/brick1 server2:/data/brick1 server3:/data/brick1 server4:/data/brick1 will replicate between server1 and server2 and replicate between server3 and server4.
17:16 JoeJulian And whether to raid0 or not depends on your SLA/OLA requirements.
17:17 booly-yam-6137 joined #gluster
17:18 joey88fslk Thanks. So in the create volume command, the bricks are grouped in order, with the size of the groups being the number of replicas?
17:18 JoeJulian Correct
17:19 joey88fslk So replicas are permanent pairs then.
17:19 JoeJulian yes
17:20 joey88fslk Oh, that answers my question perfectly. Thanks so much!
17:20 JoeJulian You're welcome.
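For the 4-servers-with-4-drives case above, a hedged sketch of the create command; the hostnames and brick paths are hypothetical, and the point is that each consecutive pair of bricks sits on two different servers:

    gluster volume create myvol replica 2 \
        server1:/data/brick1 server2:/data/brick1 server3:/data/brick1 server4:/data/brick1 \
        server1:/data/brick2 server2:/data/brick2 server3:/data/brick2 server4:/data/brick2 \
        server1:/data/brick3 server2:/data/brick3 server3:/data/brick3 server4:/data/brick3 \
        server1:/data/brick4 server2:/data/brick4 server3:/data/brick4 server4:/data/brick4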
17:25 badone joined #gluster
17:26 theron joined #gluster
17:31 hchiramm joined #gluster
17:46 Rapture joined #gluster
18:05 badone joined #gluster
18:14 PeterA joined #gluster
18:15 PeterA i am working on setfattr -n trusted.glusterfs.quota.size -v $TARGET_DIR
18:15 PeterA but it doesn't seem like it takes it
18:15 PeterA i noticed setfattr -x trusted.glusterfs.quota.size $TARGET_DIR is able to reset the directory quota
18:15 PeterA wonder how i can manually align it with du
18:16 PeterA i see that happen on 3.5.3
18:16 PeterA but i am at 3.5.2 now....
18:17 neofob joined #gluster
18:25 PeterA is there a way to manually setfattr to trusted.glusterfs.quota.size ??
18:28 jobewan joined #gluster
18:32 JoeJulian PeterA: You've got a bugzilla open for this problem, don't you?
18:32 PeterA yes
18:33 PeterA but it wasn't updated i think
18:33 nshaikh joined #gluster
18:33 JoeJulian Do you have the id?
18:33 PeterA let me dig them out
18:34 glusterbot News from newglusterbugs: [Bug 1184587] rename operation failed on disperse volume with glusterfs 3.6.1 <https://bugzilla.redhat.com/show_bug.cgi?id=1184587>
18:38 hchiramm joined #gluster
18:44 PeterA 1023134
18:44 PeterA 917901
18:53 sage_ joined #gluster
18:55 vikumar joined #gluster
18:56 PeterA should there be a way to set the trusted.glusterfs.quota.size ?
18:56 JoeJulian setfattr?
18:57 PeterA ya
18:57 PeterA i tried but it doesn't seem like it takes it
18:57 PeterA i tried this: "setfattr -n trusted.glusterfs.quota.size -v $TARGET_DIR"
18:57 JoeJulian You need to put a value after -v
18:58 PeterA i did
18:58 PeterA setfattr -n trusted.glusterfs.quota.size -v 0x000000091415de00 SOX
18:58 PeterA # getfattr -n trusted.glusterfs.quota.size -e hex SOX
18:58 PeterA # file: SOX
18:58 PeterA trusted.glusterfs.quota.size=0x0000000000000000
18:59 JoeJulian And you're doing that on the brick directly?
18:59 PeterA nope
18:59 JoeJulian Oh, that's why.
18:59 PeterA over gfs mount
18:59 JoeJulian That's filtered out.
18:59 JoeJulian You should have gotten an EPERM, I thought.
18:59 PeterA i was able to do setfattr -n trusted.glusterfs.quota.size -x SOX
19:03 PeterA interesting...
19:03 PeterA when i did that on the brick
19:03 PeterA it summed to 4 times the size....
19:05 PeterA so it seems like we need to do the du on the brick and manually set the usage
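A minimal sketch of comparing the quota accounting against du on the brick, as the discussion above suggests; the brick path is hypothetical and the SOX directory name is taken from the conversation:

    # run these on the brick directory itself, not on a gluster/NFS mount
    getfattr -n trusted.glusterfs.quota.size -e hex /bricks/sas01/SOX
    du -sb /bricks/sas01/SOX
    # removing the xattr (as noted earlier) resets the accounting for that directory
    setfattr -x trusted.glusterfs.quota.size /bricks/sas01/SOX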
19:06 JoeJulian bug 1023134
19:06 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1023134 unspecified, unspecified, ---, bugs, NEW , Used disk size reported by quota and du mismatch
19:06 JoeJulian bug 917901
19:06 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=917901 urgent, high, ---, bugs, NEW , Mismatch in calculation for quota directory
19:07 JoeJulian (I'm too lazy to actually copy/paste)
19:07 PeterA ya so i did some update and still waiting...
19:09 JoeJulian I've poked. Should hear something tonight.
19:09 PeterA thanks!
19:10 bene2 joined #gluster
19:54 coredump joined #gluster
19:57 MugginsM joined #gluster
19:57 diegows joined #gluster
20:15 MugginsM joined #gluster
20:21 ildefonso joined #gluster
20:38 _dist joined #gluster
20:43 dbruhn The yum package from the community repo for CentOS 7 doesn't include the attr package, which is needed. Not sure who's maintaining that right now.
20:47 JoeJulian dbruhn: Could you file a bug?
20:47 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
20:48 dbruhn of course :) just wasn't sure if the maintainer was in here.
20:48 fandi joined #gluster
20:49 JoeJulian I can't think off the top of my head who's doing that, but I'll bug them after lunch. It'll have to have a bug and patch anyway, though.
20:51 dbruhn Bug is filed #1184626
20:53 fandi joined #gluster
21:03 ilbot3 joined #gluster
21:03 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
21:04 glusterbot News from newglusterbugs: [Bug 1184626] CentOS Community Repo doesn't include attr package as a dependency <https://bugzilla.redhat.com/show_bug.cgi?id=1184626>
21:04 glusterbot News from newglusterbugs: [Bug 1184627] CentOS Community Repo doesn't include attr package as a dependency <https://bugzilla.redhat.com/show_bug.cgi?id=1184627>
21:07 elico1 joined #gluster
21:09 jmarley joined #gluster
21:16 coredump joined #gluster
21:20 tdasilva joined #gluster
21:23 wkf joined #gluster
21:30 _dist JoeJulian: you around? And if so, I was wondering about the progress of !full replace on a gluster heal of a big file (like a VM image). I don't see anything in the 3.6.x release notes about it but I feel like someone told me that was coming :)
21:32 bennyturns joined #gluster
21:36 partner hmm i think debian was lacking the attr dependency too, i recall i had to install it manually once something started to complain about the lack of it, i wish i remembered what it was
21:37 Gill joined #gluster
21:38 dbruhn it's when the fuse client tries to mount a share
21:41 partner yeah, just puzzled where i just saw that, can't remember doing any new mounts anywhere..
21:41 dbruhn was it after upgrading?
21:42 partner i just cannot remember but i haven't upgraded anything for some time
21:44 partner i can mount without any complaints without attr installed
21:45 dbruhn is there no netfs package/service in centos7?
21:47 noddingham joined #gluster
21:47 fandi joined #gluster
21:47 noddingham left #gluster
21:54 B21956 joined #gluster
21:56 dgandhi joined #gluster
22:15 vimal joined #gluster
22:21 dbruhn anyone around here using teaming? having problems with my mounts not coming up on boot, and was wondering if with teaming I need to set the linkdelay in the config file for the team or for the interfaces themselves
22:22 mrEriksson dbruhn: I have similar problems
22:23 dbruhn I've put the mounts in the rc.local file in the past when this is an issue, but I am trying to use CTDB with this system, and need the mounts to be up for it to work properly if I am putting the shared lock files on the storage
22:23 mrEriksson Though, mostly because SLES doesn't support the _netdev option :P
22:24 dbruhn _netdev uses a static list of protocols that it will actually work with, glusterfs is not on that list.
22:24 mrEriksson Correct
22:25 dbruhn seems to be a problem on all distros sadly
22:25 mrEriksson But IIRC, SLES doesn't support _netdev at all
22:25 dbruhn ahh
22:25 mrEriksson They have static configuration for nfs and a few others
22:26 dbruhn I am using CentOS 7 on this project
22:27 calisto joined #gluster
22:27 booly-yam-6137 joined #gluster
22:30 jbrooks joined #gluster
22:33 mrEriksson Ah, all CentOS I've got are virtualized, so no teaming/bonding for interfaces
22:34 dbruhn I am trying to provide storage for xen on this one.
22:34 vimal joined #gluster
22:37 dbruhn The way this is working, it looks like I am creating my own startup script in rc.local to make sure it all comes up in the right order and can mount *sigh*
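A hedged sketch of the kind of rc.local workaround being described: wait until glusterd answers before mounting, so the mount no longer races the network/team coming up; the volume name and mount point are hypothetical:

    # appended to /etc/rc.d/rc.local (must be executable on CentOS 7)
    for i in $(seq 1 30); do
        gluster volume status >/dev/null 2>&1 && break
        sleep 2
    done
    mount -t glusterfs localhost:/myvol /mnt/myvol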
22:43 neofob left #gluster
22:52 Gill joined #gluster
22:55 partner *finally* i found where i produced this: WARNING: getfattr not found, certain checks will be skipped..
22:57 partner should probably file a bug for debian as well, can't remember seeing this with an earlier version but 3.6.1 surely complains
22:57 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
23:00 PeterA http://pastie.org/9849462
23:00 PeterA what does the dist layout mismatch mean in nfs.log?
23:01 PeterA http://pastie.org/9849468
23:01 PeterA and the stale NFS file handle....
23:01 Gill joined #gluster
23:04 booly-yam-6137 joined #gluster
23:06 calisto joined #gluster
23:13 Gill_ joined #gluster
23:13 partner PeterA: have you performed fix-layout any time lately, perhaps after modifying the volume with new bricks or so?
23:14 PeterA no mods
23:14 partner well since the last mod?
23:15 partner it's only INFO but it hints that the layout isn't exactly as it should be
23:16 partner i had that on my clients, flooding them to death, until i ran the fix-layout, which took over a month. once it was finished the clients went silent on any such messages
23:18 partner that happens when bricks are added or removed; the hash ranges need to be assigned again according to the change
23:18 PeterA ic...
23:18 partner (or whatever is the right wording for this)
23:18 JoeJulian yeah, that's pretty accurate
23:19 PeterA there were no layout changes or add bricks what so ever....
23:19 JoeJulian I think that if a distribute subvolume is down when a directory is created (if it allows that even) it might cause that, but I don't know for sure.
23:20 partner could it even be an old change from long ago?
23:20 PeterA how should i run the fix-layout?
23:21 partner gluster volume rebalance <volname> fix-layout start
23:21 JoeJulian If you can. If not, there's a trusted.glusterfs.dht.something that can fix just one directory.
23:21 partner that will touch the layout but not migrate files around to where they should be according to their hashes
23:21 JoeJulian @fix layout
23:21 JoeJulian @rebalance
23:21 glusterbot JoeJulian: I do not know about 'rebalance', but I do know about these similar topics: 'replace'
23:22 JoeJulian @meh
23:22 glusterbot JoeJulian: I'm not happy about it either
23:22 JoeJulian @layout
23:22 glusterbot JoeJulian: I do not know about 'layout', but I do know about these similar topics: 'targeted fix-layout'
23:22 JoeJulian @targeted fix-layout
23:22 glusterbot JoeJulian: You can trigger a fix-layout for a single directory by setting the extended attribute distribute.fix.layout to any value for that directory. This is done through a fuse client mount. This does not move any files, just fixes the dht hash mapping.
23:22 JoeJulian Tada!
23:23 tdasilva joined #gluster
23:24 JoeJulian @change "targeted fix-layout" 1 s/distribute.fix/trusted.distribute.fix/
23:24 glusterbot JoeJulian: Error: The command "change" is available in the Factoids, Herald, and Topic plugins.  Please specify the plugin whose command you wish to call by using its name as a command before "change".
23:24 JoeJulian @factoids change "targeted fix-layout" 1 s/distribute.fix/trusted.distribute.fix/
23:24 glusterbot JoeJulian: The operation succeeded.
23:24 JoeJulian @targeted fix-layout
23:24 glusterbot JoeJulian: You can trigger a fix-layout for a single directory by setting the extended attribute trusted.distribute.fix.layout to any value for that directory. This is done through a fuse client mount. This does not move any files, just fixes the dht hash mapping.
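A one-line sketch of that targeted fix-layout, run against a directory seen through a fuse client mount; the mount path is hypothetical and the value can be anything:

    setfattr -n trusted.distribute.fix.layout -v "yes" /mnt/gluster/some/dir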
23:24 partner there was a bug report about attr not being a dependency of the centos packages, so i went and filed the same for debian; attr is just so much needed with gluster that i see no point in not having it around by default
23:25 partner it was also marked easyfix and triaged, i want the same :)
23:25 JoeJulian It's good to want things. Builds character.
23:26 booly-yam-6137 joined #gluster
23:26 partner yeah, oh those disappointments :)
23:26 elico joined #gluster
23:26 partner anyways, it makes sense for both sides to follow the same tracks on these sorts of things
23:27 JoeJulian Yes.
23:27 partner it also cuts down on "i don't have getfattr in my system"
23:28 JoeJulian I've only seen that once. The admin wasn't allowed to install packages.
23:28 partner to be honest i haven't seen it a single time
23:29 JoeJulian Why you would assign storage to an admin that you don't trust to install packages is beyond me.
23:29 partner i bet that company will never go into devops :)
23:29 JoeJulian To be fair, it was a certified system.
23:29 JoeJulian Finance or something like that.
23:31 partner one can do nasty things with packages so i can buy that
23:32 partner i don't know how RH handles certain conflicts when upgrading packages with configs, but debian surely just asks if you want to overwrite, diff, or get a root shell to investigate
23:34 JoeJulian Certain files within an rpm are marked as config files. They will be left alone and the new one will be installed with a .rpmnew extension.
23:34 partner debs do the same
23:34 JoeJulian RH based distros are also smart enough not to just start an unconfigured service.
23:35 glusterbot News from newglusterbugs: [Bug 1184658] Debian client package not depending on attr <https://bugzilla.redhat.com/show_bug.cgi?id=1184658>
23:35 dbruhn Anyone know how to reset rpc in cent7?
23:35 JoeJulian "apt-get install nginx" ... hey, why is port 80 listening?
23:35 calisto joined #gluster
23:36 JoeJulian like rpcbind?
23:36 dbruhn yeah
23:36 JoeJulian systemctl restart rpcbind.service
23:36 dbruhn seems the default nfs server registered and rpcbind won't let it go
23:36 partner we've put policy-rc.d into place to prevent that kind of thing from happening. then when the config management kicks in and makes changes it will bring the procs up
23:37 JoeJulian oh, wait...
23:37 JoeJulian it's a socket not a service...
23:38 JoeJulian No, ok... there's a service too.
23:38 JoeJulian So I was right the first time.
23:39 dbruhn The problem I am having is that on centos7 rpcbind seems to be using -w for a warm start, so it's using its configuration files instead of making applications reregister, which is blocking gluster from grabbing the nfs ports. Or that's what I think is going on
23:42 JoeJulian Well that's f**ing retarded. That should be in /etc/sysconfig/rpcbind not /usr/lib/systemd/system/rpcbind.service
23:43 JoeJulian sed 's/rpcbind -w/rpcbind/' < /usr/lib/systemd/system/rpcbind.service >/etc/systemd/system/rpcbind.service
23:44 dbruhn ok, so just taking the -w out should do what I need.
23:44 dbruhn wasn't 100% sure on that
23:45 JoeJulian I'm not 100% on that either, just on where it should be configured.
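A hedged sketch of that approach: copy the unit into /etc/systemd/system (which overrides the packaged one), strip the -w, then reload and restart:

    sed 's/rpcbind -w/rpcbind/' /usr/lib/systemd/system/rpcbind.service \
        > /etc/systemd/system/rpcbind.service
    systemctl daemon-reload
    systemctl restart rpcbind.service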
23:46 PeterA how to trigger the local mount?
23:46 PeterA localhost:sas01                            34T  4.6T   29T  14% /run/gluster/sas01
23:46 PeterA localhost:sas02                            17T  2.2T   15T  14% /run/gluster/sas02
23:46 PeterA on the gluster node?
23:49 JoeJulian dbruhn: bug 1184661
23:49 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1184661 unspecified, unspecified, rc, steved, NEW , systemd service contains -w switch
23:50 dbruhn *thumbs up*
23:50 JoeJulian Trigger the local mount? What's that mean?
23:50 JoeJulian mount -a
23:53 PeterA nvm
23:53 PeterA stupid me
23:53 PeterA mount -t glusterfs glusterprod004.bo.shopzilla.sea:/sas04 /gfs/sas04
23:53 PeterA just tried to mount gfs locally on the node
23:54 wkf joined #gluster
23:57 partner christ the info logging is huge
23:59 partner combined with broken logrotate, that calls for trouble
