
IRC log for #gluster, 2016-07-03


All times shown according to UTC.

Time Nick Message
00:27 Wizek joined #gluster
02:03 mmckeen joined #gluster
02:11 om2 joined #gluster
02:38 julim joined #gluster
03:11 Wizek joined #gluster
03:25 Alghost joined #gluster
03:29 Wizek joined #gluster
03:37 julim joined #gluster
04:30 atinm joined #gluster
06:44 Muthu joined #gluster
06:44 Muthu_ joined #gluster
06:46 deniszh joined #gluster
06:54 suliba joined #gluster
06:58 DV joined #gluster
07:47 Saravanakmr joined #gluster
07:51 Muthu joined #gluster
07:51 Muthu_ joined #gluster
08:00 atinm joined #gluster
08:14 Wizek joined #gluster
08:39 Wizek joined #gluster
10:04 jri joined #gluster
10:36 AppStore joined #gluster
10:37 davidj joined #gluster
10:37 PotatoGim joined #gluster
10:38 fyxim joined #gluster
10:38 sc0 joined #gluster
10:38 lh_ joined #gluster
11:04 ejsf joined #gluster
11:09 Gnomethrower joined #gluster
11:40 jwd joined #gluster
11:45 bluenemo joined #gluster
12:40 Gnomethrower joined #gluster
12:46 hi11111 joined #gluster
13:04 plarsen joined #gluster
13:27 jiffin joined #gluster
13:29 jiffin1 joined #gluster
13:29 poornimag joined #gluster
13:33 ahino joined #gluster
13:33 jiffin1 joined #gluster
13:39 jiffin joined #gluster
13:43 wadeholler joined #gluster
14:05 jiffin1 joined #gluster
14:08 jwd joined #gluster
14:10 jiffin joined #gluster
14:12 jiffin1 joined #gluster
15:03 chirino_m joined #gluster
15:09 Lee1092 joined #gluster
15:17 johnmilton joined #gluster
15:20 julim joined #gluster
15:21 ghenry joined #gluster
15:21 ghenry joined #gluster
15:27 DV joined #gluster
15:33 nishanth joined #gluster
15:36 shubhendu joined #gluster
15:39 shubhendu joined #gluster
15:44 kramdoss_ joined #gluster
16:19 Dasiel joined #gluster
16:51 Saravanakmr joined #gluster
17:20 ahino joined #gluster
18:18 Wizek joined #gluster
18:25 Wizek joined #gluster
18:47 hchiramm joined #gluster
19:14 dpaz joined #gluster
19:15 dpaz hi guys , I'm having some issue with a replica 3 gluster setup , I get  volume start: engine: failed: Quorum not met. Volume operation not allowed. when trying to start the volume . I'm pretty new to gluster , I'd appreciate help restoring the vol
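[Editor's note: the "Quorum not met" error on a replica 3 setup usually means a majority of the servers in the trusted pool are unreachable. A rough diagnostic sketch using the standard gluster CLI follows; the volume name "engine" is taken from the error message, and these commands are meant to be run on one of the cluster nodes, so they are shown here only as an administrative fragment.]

```shell
# Check that all peers in the trusted pool are connected; on a replica 3
# volume, server quorum needs a majority of the servers online.
gluster peer status

# Inspect the quorum-related volume options (standard option names).
gluster volume get engine cluster.server-quorum-type
gluster volume get engine cluster.server-quorum-ratio

# Once enough peers are back online, starting the volume should succeed:
gluster volume start engine
```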
19:16 Jacob843 joined #gluster
19:19 Jacob843 Heya. I am running GlusterFS and I'm having some trouble. I have a volume called /games, which is comprised of two replicated bricks. I mounted the volume on a remote host, and put a BYOND (game engine) executable on it (.dmb). However, BYOND can't run the executable if it is on the mounted drive. It works, for example, if I copy it to /home/game/, so I know the file isn't corrupted or anything. I set the file to 777 and marked myself as the owner,
19:19 Jacob843 thinking maybe it was a permission issue, but that isn't the case. Is there something else I can look for? I am kind of new to GlusterFS, so if I missed something really obvious, I'm sorry :)
19:26 dp hi guys , I'm having some issue with a replica 3 gluster setup , I get  volume start: engine: failed: Quorum not met. Volume operation not allowed. when trying to start the volume . I'm pretty new to gluster , I'd appreciate help restoring the vol
19:29 Jacob843 Re: it appears I can run the executable from the server. Is there a global client permission setting thingy?
19:35 jwd joined #gluster
20:03 post-factum Jacob843: noexec?
20:04 Jacob843 post-factum, I just tried to mount as nfs, then ran "$ mount -o remount,exec /games/" and no dice
20:04 Jacob843 I am trying to see if maybe a symbolic link will work. Honestly I am a bit out of ideas at this point...
20:05 Jacob843 Annnd no dice with the sym link. My idea was maybe it didn't like calling /games directly for some reason, but I guess not
20:07 wadeholler joined #gluster
20:10 post-factum Jacob843: https://bugzilla.redhat.com/show_bug.cgi?id=1162910
20:10 glusterbot Bug 1162910: medium, unspecified, ---, bugs, NEW , mount options no longer valid: noexec, nosuid, noatime
20:10 post-factum Jacob843: if you make some simple script, place it on gluster and chmod a+x it, does it run?
20:12 Jacob843 post-factum, that ran, it seems. I made a bash script that echo's "HI!", set a+x, and it ran and echo'd "HI!"
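[Editor's note: the check post-factum suggested can be reproduced generically. This sketch uses a temporary directory as a stand-in for the gluster mount point from the conversation; on the real setup, the script would be created under the mounted volume instead.]

```shell
# Create a trivial script, mark it executable, and run it. If this works
# on the mounted volume but the real binary does not, basic execute
# permissions on the mount are fine and the problem is in the binary
# itself (e.g. a 32-bit app choking on 64-bit inode numbers).
dir=$(mktemp -d)                         # stand-in for the gluster mount
printf '#!/bin/sh\necho "HI!"\n' > "$dir/hello.sh"
chmod a+x "$dir/hello.sh"
"$dir/hello.sh"                          # prints: HI!
```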
20:12 Jacob843 Hmm
20:13 Jacob843 This must be something with BYOND freaking out over something
20:13 post-factum try to dig into with strace, for example
20:14 Jacob843 Is there any way to maybe make a local copy of a folder, easily?
20:14 y4m4 joined #gluster
20:14 Jacob843 Since I know it runs on the server
20:15 y4m4 left #gluster
20:15 post-factum ?
20:15 post-factum rsync?
20:16 Jacob843 I could create /games2/ and I guess rsync it with /games/
20:16 Jacob843 I guess that could work
20:20 Jacob843 Hang tight a sec
20:27 cloph you could try with strace to examine difference between gluster mount and non-gluster...
20:33 Jacob843 Does this mean anything to anyone: http://pastebin.com/6FGN1FDV ? That is on the failing server. It doesn't seem like much of a help
20:33 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
20:33 Jacob843 Fair enough, bot: https://paste.fedoraproject.org/387698/78035146/
20:34 glusterbot Title: #387698 Fedora Project Pastebin (at paste.fedoraproject.org)
20:38 cloph maybe incompatible with 64bit inodes
20:41 Jacob843 With gcc-multilib, DreamDaemon runs, and if I copy omegamall.dmb to another dir outside of the share, it works. I also realized that keeping a local copy isn't feasible, so I guess I got to figure this out :(
20:43 Jacob843 cloph, in your opinion, do you think this is more of a BYOND issue? I can reach out to them and see if they have any thoughts
20:44 cloph just try with enable_ino32 mount option and see if that works. If it does, then yes, not gluster related.
20:46 Jacob843 On the server side?
20:46 Jacob843 Do I need to remount or anything?
20:48 cloph https://joejulian.name/blog/broken-32bit-apps-on-glusterfs/
20:48 glusterbot Title: Broken 32bit apps on GlusterFS (at joejulian.name)
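[Editor's note: the article cloph linked covers the enable_ino32 fix for 32-bit applications that can't handle 64-bit inode numbers. A sketch of both variants follows; server and volume names are assumed from the conversation. These commands touch a live cluster, so they are shown only as an administrative fragment.]

```shell
# For gluster's built-in NFS server: squash inode numbers to 32 bits
# at the volume level (this is what requires no client-side change).
gluster volume set games nfs.enable-ino32 on

# For the native fuse client: the equivalent per-mount option.
mount -t glusterfs -o enable_ino32 server1:/games /games
```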
20:50 Jacob843 cloph, was that supposed to wipe that volume? lol
20:51 cloph no.
20:51 Jacob843 Huh
20:51 Jacob843 Good thing nothing important was on it
20:52 Jacob843 OH!
20:52 Jacob843 cloph, that worked!
20:53 Jacob843 Thanks for the help :)
20:53 cloph fuse client has the same  mount option btw - in any case, using it should not wipe that  volume...
20:55 post-factum nice bug or emmmm feature
20:57 Jacob843 Haha, it is always a feature
21:00 Jacob843 Regardless, now what is the downside of connecting nfs rather than the gluster mount?
21:03 post-factum lack of high availability
21:03 cloph see above - I linked to the article because it offers a description of the problem. But it is four years old. the option is also available as a fuse mount option..
21:03 post-factum probably, you'd want to set up nfs ganesha on client and re-export gluster volume via nfs without losing HA
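[Editor's note: re-exporting a gluster volume through NFS-Ganesha, as post-factum suggests, is configured with an EXPORT block using the Gluster FSAL in ganesha.conf. A minimal sketch follows; the hostname and volume name are assumptions based on the conversation, not values from the log.]

```
EXPORT {
    Export_Id = 1;                # unique ID for this export
    Path = "/games";              # path clients mount
    Pseudo = "/games";            # NFSv4 pseudo-filesystem path
    Access_Type = RW;
    FSAL {
        Name = GLUSTER;           # use the Gluster FSAL plugin
        Hostname = "localhost";   # any server in the trusted pool
        Volume = "games";         # gluster volume name (assumed)
    }
}
```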
21:03 Jacob843 Oh cloph I didn't see you say that, let me try that real quick
21:04 cloph from what I've heard: lots of small file reads → nfs is a little faster, more writes → fuse client is better
21:20 Jacob843 Alright, cool. I mounted from the native client and wrote a small script so it does it automagically on boot.
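[Editor's note: the conventional alternative to a boot script is an /etc/fstab entry for the native client. A sketch follows; hostname and paths are assumptions, and enable_ino32 carries over the fix from earlier in the log.]

```
# /etc/fstab -- mount the volume via the native (fuse) client at boot.
# _netdev defers mounting until the network is up.
server1:/games  /games  glusterfs  defaults,_netdev,enable_ino32  0  0
```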
21:30 jiffin joined #gluster
21:41 jiffin1 joined #gluster
21:46 guhcampos joined #gluster
22:06 logan- joined #gluster
22:38 F2Knight joined #gluster
23:37 Alghost joined #gluster
23:38 Alghost joined #gluster
