
IRC log for #gluster, 2015-07-10


All times are shown in UTC.

Time Nick Message
00:04 cyberswat joined #gluster
00:05 curratore JoeJulian: I have a “problem” with replica; I saw your post about it on your blog, could you advise me on it?
00:08 curratore Could I use rsync to copy the data faster onto a new replica brick added after the initial creation of the volume? Or does that not work at all?
00:09 kudude joined #gluster
00:09 kudude anyone have any tuning tips for a gluster stripe volume hosting large files?
00:16 JoeJulian curratore: No, you would either not have any extended attributes, or you would have a mix of good extended attributes and bad ones.
00:17 JoeJulian kudude: tuning, by its very definition, is use case specific.
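JoeJulian's point above is that replication state lives in extended attributes on the bricks, which a plain rsync straight onto a brick would never create. A minimal sketch of what those attributes look like, with hypothetical brick paths and a hypothetical volume name (gv0); the usual route is add-brick plus a heal so gluster populates them itself:

    # inspect the gluster metadata on a file inside a brick (run on the brick server)
    getfattr -m . -d -e hex /bricks/brick1/gv0/somefile
    # expect trusted.gfid plus trusted.afr.* changelog attributes on replicated volumes

    # grow from replica 1 to replica 2 and let self-heal copy the data with correct xattrs
    gluster volume add-brick gv0 replica 2 server2:/bricks/brick1/gv0
    gluster volume heal gv0 full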
00:43 theron joined #gluster
00:49 Pupeno joined #gluster
00:57 harish joined #gluster
01:02 topshare joined #gluster
01:07 gildub joined #gluster
01:10 topshare joined #gluster
01:18 nangthang joined #gluster
01:23 cyberswat joined #gluster
01:28 hagarth joined #gluster
01:53 DV joined #gluster
01:56 theron joined #gluster
02:08 PatNarcisoZzZ joined #gluster
02:16 haomaiwang joined #gluster
02:39 nangthang joined #gluster
02:51 aaronott joined #gluster
02:51 haomaiwang joined #gluster
03:05 kdhananjay joined #gluster
03:12 itisravi joined #gluster
03:14 cholcombe joined #gluster
03:17 meghanam joined #gluster
03:17 maveric_amitc_ joined #gluster
03:20 lpabon joined #gluster
03:20 Pupeno_ joined #gluster
03:22 rejy joined #gluster
03:27 TheSeven joined #gluster
03:41 yosafbridge joined #gluster
03:42 shubhendu joined #gluster
03:44 sakshi joined #gluster
03:52 RameshN_ joined #gluster
03:55 Manikandan joined #gluster
03:57 raghug joined #gluster
04:06 vimal joined #gluster
04:16 kanagaraj joined #gluster
04:21 ppai joined #gluster
04:24 nbalacha joined #gluster
04:29 yazhini joined #gluster
04:30 prg3 joined #gluster
04:37 Lee1092 joined #gluster
04:39 haomaiwa_ joined #gluster
04:48 ramteid joined #gluster
04:51 kovshenin joined #gluster
04:52 ndarshan joined #gluster
04:54 anil joined #gluster
05:00 prg3 joined #gluster
05:01 smohan joined #gluster
05:03 haomaiw__ joined #gluster
05:04 Manikandan joined #gluster
05:04 meghanam joined #gluster
05:05 soumya_ joined #gluster
05:13 kovshenin joined #gluster
05:13 arcolife joined #gluster
05:18 harish joined #gluster
05:21 deepakcs joined #gluster
05:22 dusmant joined #gluster
05:22 Manikandan joined #gluster
05:24 pppp joined #gluster
05:26 gem joined #gluster
05:27 ashiq joined #gluster
05:34 kshlm joined #gluster
05:35 hgowtham joined #gluster
05:35 PatNarcisoZzZ joined #gluster
05:38 spandit joined #gluster
05:39 raghu joined #gluster
05:41 hagarth joined #gluster
05:48 kdhananjay joined #gluster
05:52 overclk joined #gluster
05:56 maveric_amitc_ joined #gluster
05:56 glusterbot News from resolvedglusterbugs: [Bug 1230167] [Snapshot] Python crashes with trace back notification when shared storage is unmount from Storage Node <https://bugzilla.redhat.com/show_bug.cgi?id=1230167>
05:56 glusterbot News from resolvedglusterbugs: [Bug 1232002] nfs-ganesha: 8 node pcs cluster setup fails <https://bugzilla.redhat.com/show_bug.cgi?id=1232002>
05:57 glusterbot News from resolvedglusterbugs: [Bug 1232143] nfs-ganesha: trying to bring up nfs-ganesha on three node shows error although pcs status and ganesha process on all three nodes <https://bugzilla.redhat.com/show_bug.cgi?id=1232143>
05:57 glusterbot News from resolvedglusterbugs: [Bug 1233117] quota: quota list displays double the size of previous value, post heal completion. <https://bugzilla.redhat.com/show_bug.cgi?id=1233117>
05:57 Bhaskarakiran joined #gluster
06:01 PatNarcisoZzZ joined #gluster
06:07 haomaiwang joined #gluster
06:10 jiffin joined #gluster
06:13 maveric_amitc_ joined #gluster
06:19 jtux joined #gluster
06:24 atalur joined #gluster
06:34 vmallika joined #gluster
06:44 kshlm joined #gluster
06:49 elico joined #gluster
06:53 jbrooks joined #gluster
06:59 jiffin1 joined #gluster
07:01 smohan joined #gluster
07:02 nangthang joined #gluster
07:03 nangthang joined #gluster
07:13 Slashman joined #gluster
07:15 ppai joined #gluster
07:17 jbrooks joined #gluster
07:20 rafi joined #gluster
07:26 Trefex joined #gluster
07:28 haomaiwang joined #gluster
07:33 jcastill1 joined #gluster
07:34 LebedevRI joined #gluster
07:35 Trefex joined #gluster
07:38 jcastillo joined #gluster
07:39 ppai joined #gluster
07:46 vmallika joined #gluster
07:48 topshare joined #gluster
07:48 [Enrico] joined #gluster
07:52 Norky joined #gluster
07:54 topshare joined #gluster
07:55 ctria joined #gluster
07:57 curratore joined #gluster
07:57 curratore JoeJulian: thx for the clarification
07:57 curratore :)
08:03 ndarshan joined #gluster
08:11 tanuck joined #gluster
08:12 jiffin joined #gluster
08:13 ajames-41678 joined #gluster
08:14 Trefex joined #gluster
08:21 swebb joined #gluster
08:31 sankarshan_ joined #gluster
08:37 kshlm joined #gluster
08:53 gildub joined #gluster
08:57 glusterbot News from newglusterbugs: [Bug 1241841] gf_msg_callingfn does not log the callers of the function in which it is called <https://bugzilla.redhat.com/show_bug.cgi?id=1241841>
08:59 raghug joined #gluster
09:13 kdhananjay joined #gluster
09:14 atalur joined #gluster
09:22 ppai joined #gluster
09:28 mahendra joined #gluster
09:28 kotreshhr joined #gluster
09:33 mahendra_ joined #gluster
09:33 kdhananjay1 joined #gluster
09:38 mahendra joined #gluster
09:39 gem joined #gluster
09:40 LebedevRI joined #gluster
09:43 mahendra_ joined #gluster
09:48 mahendra joined #gluster
09:53 mahendra_ joined #gluster
09:53 jordielau joined #gluster
09:58 mahendra joined #gluster
10:00 kovshenin joined #gluster
10:03 mahendra_ joined #gluster
10:06 Norky joined #gluster
10:08 mahendra joined #gluster
10:13 mahendra_ joined #gluster
10:13 Vortac joined #gluster
10:14 backer_ joined #gluster
10:18 mahendra joined #gluster
10:23 mahendra_ joined #gluster
10:26 harish joined #gluster
10:27 smohan_ joined #gluster
10:27 glusterbot News from newglusterbugs: [Bug 1241885] ganesha volume export fails in rhel7.1 <https://bugzilla.redhat.com/show_bug.cgi?id=1241885>
10:28 mahendra joined #gluster
10:29 budimir joined #gluster
10:32 anrao joined #gluster
10:33 mahendra_ joined #gluster
10:33 von joined #gluster
10:34 nangthang joined #gluster
10:35 von hi. I'm running glusterfs 3.6.1 in distributed mode and having an issue with a stale NFS file handle
10:36 von google barely helps, most of the info is on actual NFS issues, not related to gluster, and what I have managed to find is either outdated or related to replicated mode
10:36 von the volume is mounted via fuse
10:38 mahendra joined #gluster
10:39 soumya joined #gluster
10:43 ndevos von: a "stale file handle" basically tells you that the file was changed (most often removed) from the storage by a different process
10:43 kotreshhr1 joined #gluster
10:43 mahendra_ joined #gluster
10:44 von yeah, I've found out that much, ndevos. I'm more interested in a way to deal with it: I don't suppose removing the file directly from brick (the file still exists there btw) would make things better, right?
10:45 ndevos von: no, if you modify files on the bricks directly, things will get in an awkward state, that'll surely not help
10:47 ndevos von: dealing with it is mainly preventing processes from deleting files that other processes still want to use
10:47 nbalacha joined #gluster
10:48 von indeed, but I still need that stale file removed from the volume
10:48 soumya joined #gluster
10:49 mahendra joined #gluster
10:53 dusmant joined #gluster
10:54 backer_ How do I replace a failed brick on the same mount point in a distributed disperse volume? Can anyone help?
10:54 plarsen joined #gluster
10:54 ndevos von: that stale file should not exist anymore, that is why it is stale... it most likely is cached on the client-side
10:54 mahendra_ joined #gluster
10:55 von ndevos, is restarting related services and rebooting the whole client supposed to fix it?
10:56 von because I've done both to no avail
10:56 kotreshhr joined #gluster
10:57 von not to mention that all three mounts on different servers result in the same issue (only one of them has permission to write there), two of which have been rebooted.
10:59 ndevos von: yes, stopping/starting the application and/or unmount/mount on the client should resolve that
10:59 ndevos von: any idea how you got into this situation? it really sounds a little awkward
10:59 mahendra joined #gluster
11:01 von I run a pnp4nagios on one of the mounts (the one that is allowed to write files on the volume), maybe two of the pnp workers tried to write that file at the same time
11:05 backer_ How do I replace a failed brick on the same mount point in a distributed disperse volume? Can anyone help?
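For reference, a commonly documented approach in recent GlusterFS releases is the replace-brick command, sketched here with hypothetical volume and brick names; reusing the exact same brick path typically needs extra cleanup, so check the disperse documentation for your version before relying on this:

    # swap the failed brick for a fresh, empty one
    gluster volume replace-brick dispvol server2:/bricks/old server2:/bricks/new commit force
    # then trigger self-heal so the disperse set rebuilds the new brick
    gluster volume heal dispvol full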
11:05 gem joined #gluster
11:08 mahendra_ joined #gluster
11:08 budimir joined #gluster
11:09 RedW joined #gluster
11:09 budimir hi guys, i need to add a lot of files to a replicated volume. i was thinking about moving the data directly into the brick dir on each server, but it looks like that is not a good idea? anybody know a fast way to do this?
11:13 pppp joined #gluster
11:14 ndevos von: hmm, maybe you have a directory ,,(split brain), directories are on all bricks, maybe the directory has different GFIDs on different bricks?
11:14 glusterbot von: To heal split-brain, use splitmount. http://joejulian.name/blog/glusterfs-split-brain-recovery-made-easy/
11:15 mahendra joined #gluster
11:19 von ndevos, how do I find that out?
11:20 von also, I thought split-brain is the issue that happens to replicated storage, not distributed
11:20 mahendra_ joined #gluster
11:20 ndevos von: well, directories are replicated in a distributed environment
11:20 von heal wouldn't work on distributed volume
11:21 von also I can access other files in that directory, only one has a stale file handle
11:21 ndevos von: you can check on the bricks, see if the gfid xattr is the same: getfattr -m. -ehex -d /bricks/path/to/dir
11:22 ndevos von: well, maybe it is a file that does exist on multiple bricks?
11:23 von ndevos, it does, it exists on the brick where it is supposed to exist; its placeholder exists on the second brick
11:23 von it's perfectly readable and writable directly on the file system
11:24 ndevos von: what is "its placeholder" for you?
11:25 von ndevos, an empty file with ---------T attributes
11:25 glusterbot von: -------'s karma is now -5
11:25 von wat
11:25 mahendra joined #gluster
11:26 msvbhat budimir: You *shouldn't* copy the files directly to brick dir on the server
11:26 msvbhat budimir: It has to be copied from the client
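As msvbhat says, bulk data needs to go in through a client mount so the client can create the gfid and replication metadata on every brick. A minimal sketch with hypothetical host, volume and directory names:

    # mount the volume with the FUSE client, then copy through the mount point
    mount -t glusterfs server1:/repvol /mnt/repvol
    rsync -a /local/data/ /mnt/repvol/data/
    # every write goes through the client, which fans it out to all replicas and sets the xattrs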
11:27 ndevos von: oh, ok, so that is a "link file", suggesting the actual file is on another brick than the one expected
11:28 glusterbot News from newglusterbugs: [Bug 1241895] nfs-ganesha: add-node logic does not copy the "/etc/ganesha/exports" directory to the correct path on the newly added node <https://bugzilla.redhat.com/show_bug.cgi?id=1241895>
11:28 ndevos von: I do not really know how those are handled and cached and all... did you check the xattrs and compare the gfids?
11:31 von ndevos, gfids on two bricks for the failed file do not match
11:34 ndevos von: right, so I guess you should delete the 'link file' and its <brick-dir>/.glusterfs/??/??/<gfid>
11:35 ndevos von: when doing that, I would stop the glusterfsd process for that brick, and start it afterwards
11:35 sage joined #gluster
11:37 mahendra_ joined #gluster
11:39 von ndevos, both for the original file and the link file?
11:39 von as well as the original file and the link themselves?
11:39 ndevos von: I'd try with only the link file first
11:39 von okay, thanks, I'll try that
11:42 von thank you very much, ndevos
11:42 von that fixed it
11:42 ira joined #gluster
11:43 ndevos von: cool! still a little strange how this could happen, but I'm no expert on the distribute part
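A sketch of the fix that worked for von above, with hypothetical paths and a hypothetical gfid; the gfid xattr reported by getfattr maps to a hardlink under .glusterfs, where the first two pairs of hex digits become two directory levels:

    # compare the gfid of the file on each brick (run on each brick server)
    getfattr -m . -e hex -d /bricks/brick1/dir/file
    getfattr -m . -e hex -d /bricks/brick2/dir/file
    # on the brick holding the ---------T link file: stop that brick's glusterfsd process,
    # remove the link file and its .glusterfs hardlink, then restart the brick
    rm /bricks/brick2/dir/file
    rm /bricks/brick2/.glusterfs/ab/cd/abcd1234-abcd-abcd-abcd-abcdef123456
    gluster volume start distvol force   # respawns the stopped brick process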
11:43 ira joined #gluster
11:43 cyberswat joined #gluster
11:44 kotreshhr joined #gluster
11:45 ira joined #gluster
11:47 mahendra joined #gluster
11:51 itisravi ndevos: wondering if the split-brain resolution link which glusterbot displays should also be pointing to https://github.com/gluster/glusterfs/blob/master/doc/features/heal-info-and-split-brain-resolution.md
11:52 mahendra_ joined #gluster
11:53 itisravi The md file is more recent and explains the various options that are available via the gluster CLI or from the mount point.
11:53 ndevos itisravi: you can instruct glusterbot to "@forget split-brain" and build a replacement with "@learn split-brain as ....."
11:53 von ndevos, might be because brick2 that stores link files was not accessible the moment the original file on brick1 changed
11:53 von *wsaa
11:54 von *was :(
11:54 mahendra_ I'm evaluating gluster for a workload of concurrent read/write where the reader is on a different node. trying to get the lowest possible latency.
11:54 itisravi ndevos: okay, will check if JoeJulian is okay with it too.
11:54 ndevos itisravi: I'm pretty sure he won't have an issue with it, just keep the split-mount link in the message too :)
11:55 itisravi ndevos: alright :)
11:55 ndevos itisravi: and if he doesn't like it, he can change it back again!
11:55 itisravi ndevos: haha okay.
11:55 unclemarc joined #gluster
11:56 von the thing is, heal command is not available for distributed mode, and I don't see any other way to fix it besides manually removing faulty links =/
11:56 ndevos von: yes, that would be a situation where that can happen... I think; not sure how it could get a different gfid, that's out of my area
11:56 itisravi glusterbot: @forget split-brain
11:57 ndevos itisravi: without the "glusterbot: " prefix, I think
11:57 * itisravi tries again
11:57 ndevos or without the @?
11:57 itisravi glusterbot: forget split-brain
11:57 glusterbot itisravi: Error: 2 factoids have that key.  Please specify which one to remove, or use * to designate all of them.
11:57 * itisravi oh?
11:58 * itisravi how do i list both 'factoids'?
11:58 itisravi glusterbot:  split-brain
11:58 glusterbot itisravi: (#1) To heal split-brain, use splitmount. http://joejulian.name/blog/glusterfs-split-brain-recovery-made-easy/, or (#2) For additional information, see this older article http://joejulian.name/blog/fixing-split-brain-with-glusterfs-33/
11:58 itisravi ah!
11:59 itisravi glusterbot:  learn split-brain as
11:59 glusterbot itisravi: (learn [<channel>] <key> as <value>) -- Associates <key> with <value>. <channel> is only necessary if the message isn't sent on the channel itself. The word 'as' is necessary to separate the key from the value. It can be changed to another word via the learnSeparator registry value.
11:59 * itisravi hmm
12:00 PatNarcisoZzZ joined #gluster
12:00 Asmadeus Hi. I'm mounting a gluster volume on one of the servers itself; is there a way to tell it to access it from localhost instead of going through the network? Looking at gluster volume top I'm pretty sure it's going to another node
12:00 Asmadeus (more generally, can you e.g. tell clients to fail over in anticipation of shutting down a server?)
12:01 unclemarc joined #gluster
12:01 Trefex joined #gluster
12:03 mahendra joined #gluster
12:03 itisravi glusterbot: learn split-brain as "To heal split-brains, see https://github.com/gluster/glusterfs/blob/master/doc/features/heal-info-and-split-brain-resolution.md . Also see splitmount https://joejulian.name/blog/glusterfs-split-brain-recovery-made-easy/ . For additional information, see this older article https://joejulian.name/blog/fixing-split-brain-with-glusterfs-33/"
12:03 glusterbot itisravi: The operation succeeded.
12:03 itisravi glusterbot: split-brain
12:03 glusterbot itisravi: (#1) To heal split-brain, use splitmount. http://joejulian.name/blog/glusterfs-split-brain-recovery-made-easy/, or (#2) For additional information, see this older article http://joejulian.name/blog/fixing-split-brain-with-glusterfs-33/, or (#3) To heal split-brains, see https://github.com/gluster/glusterfs/blob/master/doc/features/heal-info-and-split-brain-resolution.md . Also see splitmount
12:03 glusterbot itisravi: https://joejulian.name/blog/glusterfs-split-brain-recovery-made-easy/ . For additional information, see this older article https://joejulian.name/blog/fixing-split-brain-with-glusterfs-33/
12:04 * itisravi aargh!
12:04 pppp joined #gluster
12:04 ndevos lol!
12:04 itisravi glusterbot: forget split-brain *
12:04 glusterbot itisravi: The operation succeeded.
12:04 itisravi split-brain
12:04 mahendra quit
12:04 mahendra exit
12:04 itisravi glusterbot: learn split-brain as "To heal split-brains, see https://github.com/gluster/glusterfs/blob/master/doc/features/heal-info-and-split-brain-resolution.md . Also see splitmount https://joejulian.name/blog/glusterfs-split-brain-recovery-made-easy/ . For additional information, see this older article https://joejulian.name/blog/fixing-split-brain-with-glusterfs-33/"
12:04 glusterbot itisravi: The operation succeeded.
12:05 itisravi split-brain
12:05 ndevos ~split-brain | itisravi
12:05 glusterbot itisravi: To heal split-brains, see https://github.com/gluster/glusterfs/blob/master/doc/features/heal-info-and-split-brain-resolution.md . Also see splitmount https://joejulian.name/blog/glusterfs-split-brain-recovery-made-easy/ . For additional information, see this older article https://joejulian.name/blog/fixing-split-brain-with-glusterfs-33/
12:05 squaly joined #gluster
12:05 itisravi ndevos: finally :)
12:06 itisravi ndevos++ Thanks for the commands
12:06 glusterbot itisravi: ndevos's karma is now 20
12:07 itisravi glusterbot: help
12:07 glusterbot itisravi: (help [<plugin>] [<command>]) -- This command gives a useful description of what <command> does. <plugin> is only necessary if the command is in more than one plugin. You may also want to use the 'list' command to list all available plugins and commands.
12:07 ndevos Asmadeus: Gluster is a distributed filesystem, clients (mountpoints) do the logic of connecting to the right bricks that contain the data and respond fastest
12:07 ndevos itisravi++ nicely done!
12:07 glusterbot ndevos: itisravi's karma is now 2
12:07 Asmadeus ndevos: I only have two bricks with two copies, so all the bricks should have all the data?
12:07 itisravi glusterbot:  list
12:07 glusterbot itisravi: Admin, Alias, Anonymous, Bugzilla, Channel, ChannelStats, Conditional, Config, Dict, Factoids, Google, Herald, Karma, Later, MessageParser, Misc, Network, NickCapture, Note, Owner, Plugin, PluginDownloader, RSS, Reply, Seen, Services, String, Topic, Trigger, URL, User, Utilities, and Web
12:07 Asmadeus If so, I can't see any reason not to go for localhost :)
12:08 Asmadeus (I'm asking because I've got VM volumes on it, and it's very slow since I restarted everything last week..)
12:09 ndevos Asmadeus: is that data read-only, and does it never ever get modified?
12:09 Asmadeus It does get modified, it's a live system
12:10 ndevos Asmadeus: uh, I must be missing something then... how can replicas stay in sync by updating only the VM volume on localhost?
12:11 ndevos Asmadeus: each write would get done to both copies, but maybe reads are done only from localhost? that is what I would expect to happen
12:11 Asmadeus Hm. I must be missing something then :) I thought the client only talked to either server, and the servers did the sync in the background
12:12 ndevos Asmadeus: no, the clients contain the logic, a client will write to both copies
12:12 Asmadeus Ok, that helps understanding
12:12 mahendra joined #gluster
12:12 ndevos Asmadeus: yeah, I bet :D
12:13 Asmadeus I was looking at 'volume top <volume> read' until then, but actually write is the same -- looking at a particular file, why is it only listed on one brick?
12:14 soumya joined #gluster
12:14 Asmadeus hmm actually stat() on both bricks looks like it does the right thing
12:15 Asmadeus (e.g. access date is correct on localhost, modify is correct on both)
12:15 Asmadeus So I guess I need to look for something else to blame for the slowness
12:15 itisravi Asmadeus: If it helps, gluster 'clients' talk to a gluster 'volume'. Each volume is an aggregation of bricks configured in a particular fashion. (Distribute, replicate, distributed-replicate etc.)
12:16 jtux joined #gluster
12:16 jmarley joined #gluster
12:16 Asmadeus Yeah, that's all good. I'm just confused by gluster volume top I guess
12:17 itisravi Is your volume a replicated one?
12:18 Asmadeus Yeah, two bricks replicated
12:19 Asmadeus Looking at the raw data in the bricks I get exactly what I wanted (and what ndevos said he expected)
12:20 itisravi In replicated volumes, if a client is mounted on one of the brick servers, reads should be served from that brick. (Assuming both copies are clean and in sync.)
12:21 Asmadeus Yeah. It's just not what gluster volume top <volume> <read|write|open|whatever> says ;)
12:21 Asmadeus But atime on the files do agree it's what's being done, so it's all good
12:22 itisravi cool.
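A couple of commands consistent with what ndevos and itisravi describe, with hypothetical volume and brick names; these per-brick counters are what Asmadeus was reading above:

    # per-brick read/write counters; with a client mounted on server1, reads should
    # concentrate on the local brick while writes show up on both replicas
    gluster volume top repvol read brick server1:/bricks/repvol
    gluster volume top repvol write brick server1:/bricks/repvol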
12:28 coredump joined #gluster
12:29 julim joined #gluster
12:30 rwheeler joined #gluster
12:32 cyberswat joined #gluster
12:33 vmallika joined #gluster
12:36 nsoffer joined #gluster
12:37 ninkotech joined #gluster
12:37 ninkotech_ joined #gluster
12:46 20WABNM8Y joined #gluster
12:46 92AABMN72 joined #gluster
12:48 B21956 joined #gluster
12:57 ajames-41678 joined #gluster
12:58 jrm16020 joined #gluster
13:00 ira joined #gluster
13:10 neofob joined #gluster
13:10 shaunm_ joined #gluster
13:10 haomaiwang joined #gluster
13:11 haomaiwa_ joined #gluster
13:14 von notgtav is available on steam again
13:14 von they're renaming it to notdmcav
13:14 aaronott joined #gluster
13:14 von durr
13:14 von sorry, wrong chat
13:14 von left #gluster
13:18 lpabon joined #gluster
13:22 harish joined #gluster
13:27 plarsen joined #gluster
13:29 georgeh-LT2 joined #gluster
13:30 dgandhi joined #gluster
13:34 hamiller joined #gluster
13:37 jcastill1 joined #gluster
13:43 jcastillo joined #gluster
13:44 aaronott joined #gluster
13:50 hagarth joined #gluster
13:50 shyam joined #gluster
13:52 theron joined #gluster
13:59 ekuric joined #gluster
14:00 pppp joined #gluster
14:08 Trefex joined #gluster
14:12 kbyrne joined #gluster
14:20 cyberswat joined #gluster
14:21 DV joined #gluster
14:23 nangthang joined #gluster
14:24 cvstealth joined #gluster
14:28 rafi1 joined #gluster
14:30 victori joined #gluster
14:30 victori_ joined #gluster
14:39 rafi joined #gluster
14:46 mckaymatt joined #gluster
14:48 cyberswat joined #gluster
14:48 cyberswat joined #gluster
14:57 PatNarcisoZzZ joined #gluster
14:57 mpietersen joined #gluster
14:59 glusterbot News from newglusterbugs: [Bug 1241985] [RFE] drop the need to peer probe the hosts in a cluster from all the hosts <https://bugzilla.redhat.com/show_bug.cgi?id=1241985>
14:59 mckaymatt joined #gluster
15:19 mpietersen joined #gluster
15:21 nbalacha joined #gluster
15:21 shyam joined #gluster
15:22 kbyrne joined #gluster
15:24 jcastill1 joined #gluster
15:25 hagarth joined #gluster
15:25 martinliu joined #gluster
15:28 jbrooks joined #gluster
15:30 jcastillo joined #gluster
15:30 martinliu joined #gluster
15:33 shyam joined #gluster
15:44 cholcombe joined #gluster
15:58 cornfed78 joined #gluster
15:59 cornfed78 hi all, just wondering if anyone else has run into this: trying to install NFS Ganesha w/ Gluster 3.7.2 on EL6.
16:00 cornfed78 Yum can't seem to find nfs-ganesha-gluster in the EPEL repo for 6... it's there for 7, though..
16:00 cornfed78 anyone run into this? better yet, solve it?
16:00 cornfed78 i tried hunting down an alternate source for the RPM, but I'm coming up empty
16:04 JoeJulian itisravi++
16:04 glusterbot JoeJulian: itisravi's karma is now 3
16:10 jdossey joined #gluster
16:11 cholcombe joined #gluster
16:14 ChrisHolcombe joined #gluster
16:33 rotbeard joined #gluster
16:34 kkeithley cornfed78: http://download.gluster.org/pub/gluster/glusterfs/nfs-ganesha/2.2.0/
16:34 RameshN_ joined #gluster
16:35 cornfed78 kkeithley: thanks! I don't see them there, either, though:
16:35 cornfed78 http://download.gluster.org/pub/gluster/glusterfs/nfs-ganesha/2.2.0/EPEL.repo/epel-6/x86_64/
16:36 cornfed78 it's specifically the nfs-ganesha-gluster rpm that seems to have gone missing
16:36 kkeithley hmmm, should be. Hang on
16:37 cornfed78 if you look in the el7 repo, it's there:
16:37 cornfed78 http://download.gluster.org/pub/gluster/glusterfs/nfs-ganesha/2.2.0/EPEL.repo/epel-7/x86_64/
16:37 kkeithley yup. I built them
16:38 cornfed78 brb
16:50 topshare joined #gluster
16:51 topshare joined #gluster
16:57 cholcombe joined #gluster
16:57 topshare joined #gluster
17:02 topshare joined #gluster
17:06 B21956 joined #gluster
17:16 Rapture joined #gluster
17:17 cornfed78 kkeithley: I saw the RPM pop up.. I pulled it down and I was able to move forward with the install.. Thanks!
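For anyone following along, a hedged sketch of a yum repo definition pointing at the download.gluster.org tree kkeithley mentions; the baseurl follows the directory layout in the URLs above, while the file name and gpgcheck setting are illustrative only:

    # /etc/yum.repos.d/nfs-ganesha.repo (illustrative; verify the directory exists for your release)
    [nfs-ganesha]
    name=nfs-ganesha 2.2.0 for GlusterFS
    baseurl=http://download.gluster.org/pub/gluster/glusterfs/nfs-ganesha/2.2.0/EPEL.repo/epel-$releasever/$basearch/
    enabled=1
    gpgcheck=0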
17:25 firemanxbr joined #gluster
17:40 jiffin joined #gluster
17:42 mckaymatt joined #gluster
17:59 klaas joined #gluster
17:59 glusterbot News from newglusterbugs: [Bug 1218961] snapshot: Can not activate the name provided while creating snaps to do any further access <https://bugzilla.redhat.com/show_bug.cgi?id=1218961>
18:05 gothos left #gluster
18:16 firemanxbr joined #gluster
18:25 magamo Hey folks.  Is there any way to drop a duplicated peer from the pool?
18:41 jiffin magamo: what did u mean by duplicated peer?
18:42 jiffin magamo: u can detach a peer using  peer detach <ip>
18:51 magamo jiffin: I have the same peer in my cluster 'twice' with different UUIDs.
18:52 magamo It happened after our root drives filled up with brick logs due to the brick log bug in 3.7.2
18:52 magamo And having the node that's duplicated online prevents me from being able to do volume status.
18:52 magamo But if I shut down glusterd on the affected node, I can do so.
18:57 jiffin magamo: can u give me the output of gluster peer status
18:57 magamo I sure can, in a few moments -- Dealing with a completely different issue real quick.
18:58 cyberswat joined #gluster
18:58 shredder12 joined #gluster
18:58 shredder12 joined #gluster
18:58 shredder12 joined #gluster
18:58 shredder12 Hi everyone. The gluster-provided NFS server (listening on TCP 2049) isn't running on the server. Is there a way to start this service without restarting gluster-server?
19:03 magamo jiffin: http://pastebin.com/XSZpX0hj -- From one of the other hosts in the cluster.
19:03 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
19:05 magamo You can see that there's two 'dpnjsv2-fsclu01-02's listed, with different UUIDs.
19:06 magamo I've tried shutting down each node in the cluster in turn, and removing the erroneous file from /var/lib/glusterfs/peers, and starting it back up, but it replicates the bad info from the rest of the cluster, and this is a production cluster, so I can't just shut the whole thing down to try it all at once.
19:07 magamo And it won't let me just kick that node from the cluster because there's bricks on it in volumes.
19:12 jiffin magamo: i don't know how you ended up in this situation.
19:13 jiffin magamo: the only option that may help is restarting glusterd on dpnjsv2-fsclu01-02
19:13 jiffin and i am not sure whether that will work or not
19:14 JoeJulian magamo: Stop glusterd on your servers, remove the errant uuid file from /var/lib/glusterd/peers on all servers, start glusterd again.
19:15 JoeJulian Also... make /var/log its own partition so that root doesn't fill up. I can't believe that's not the default for every distro.
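A minimal illustration of JoeJulian's /var/log suggestion, assuming a dedicated logical volume exists for it (device name and filesystem are placeholders):

    # /etc/fstab entry giving logs their own partition so a log flood cannot fill /
    /dev/mapper/vg0-varlog  /var/log  xfs  defaults  0 2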
19:20 magamo JoeJulian: There's a bug in 3.7.2 that's causing the log issue -- we can work around that, though.  I'll try and get a maintenance window to fix glusterd.
19:20 JoeJulian bug or not, if root gets full, even from valid logs, you could be up a tributary without the appropriate method of propulsion.
19:21 JoeJulian magamo: you can stop glusterd without affecting usage.
19:22 magamo Hmmm.  If I shut down glusterd on the entire cluster to clear that out, that wouldn't affect usage from the client perspective?
19:22 JoeJulian It would prevent new mounts during that window, that's all.
19:23 magamo Oh, cool.  Let me just try that on our test cluster.
19:29 magamo Thanks, Joe.  I can see that does work.  Let me get authorization to do this on the prod cluster.
19:29 magamo :-)
19:29 JoeJulian You're welcome. :)
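A sketch of the procedure JoeJulian describes, with a placeholder UUID; stopping glusterd does not stop the brick (glusterfsd) processes, so existing client mounts keep working, though new mounts and management commands are unavailable during the window:

    # on every server in the pool, with glusterd stopped everywhere at the same time
    service glusterd stop
    rm /var/lib/glusterd/peers/<errant-uuid>   # remove only the duplicate peer file
    service glusterd start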
19:40 shredder12 Hey folks. any thoughts on how to start the gluster NFS server without restarting the gluster-server service?
19:46 JoeJulian Why not just restart it?
19:47 JoeJulian Like we just discussed above, restarting glusterd does not affect volume operations.
19:47 JoeJulian Otherwise, try "gluster volume start $vol force". It might work, maybe.
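A sketch of that suggestion with a placeholder volume name; "start ... force" only spawns volume processes that should be running but are not, so already-running bricks are left alone:

    # check whether the built-in gluster NFS server is up for the volume
    gluster volume status myvol nfs
    # respawn missing volume daemons (bricks, NFS server, self-heal daemon)
    gluster volume start myvol force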
19:50 bennyturns joined #gluster
19:53 joseki joined #gluster
19:54 joseki my logs are flooding with an error about a null gfid and then a message about "failed to get size and contribution of path"
19:54 joseki i just enabled quota. any harm in disabling quota now?
19:56 joseki ... or just wait?
19:57 Rapture joined #gluster
19:57 smohan|afk joined #gluster
20:11 DV joined #gluster
20:33 neofob left #gluster
20:39 victori joined #gluster
20:43 nage joined #gluster
20:51 coredump joined #gluster
20:52 coredump joined #gluster
20:58 rwheeler joined #gluster
21:00 shaunm_ joined #gluster
21:23 stickyboy joined #gluster
21:41 rndm left #gluster
21:55 ctria joined #gluster
22:02 Rapture joined #gluster
22:03 CyrilPeponnet joined #gluster
22:06 Pupeno joined #gluster
22:08 mckaymatt joined #gluster
22:19 ctria joined #gluster
22:19 stickyboy joined #gluster
22:20 uebera|| joined #gluster
22:34 Pupeno joined #gluster
22:34 Pupeno joined #gluster
22:45 badone joined #gluster
22:52 arthurh joined #gluster
22:53 wushudoin| joined #gluster
22:55 RobertLaptop joined #gluster
22:57 codex joined #gluster
22:59 victori joined #gluster
23:00 ndevos joined #gluster
23:00 ndevos joined #gluster
23:01 cholcombe joined #gluster
23:20 Pupeno joined #gluster
23:21 Pupeno joined #gluster
23:21 Pupeno joined #gluster
23:38 Pupeno joined #gluster
23:58 ira joined #gluster
