
IRC log for #gluster, 2014-07-17


All times shown according to UTC.

Time Nick Message
00:05 fubada joined #gluster
00:14 cjanbanan joined #gluster
00:17 sputnik1_ joined #gluster
00:27 jcsp joined #gluster
00:29 pdrakeweb joined #gluster
00:29 dtrainor joined #gluster
00:38 PorcoAranha joined #gluster
00:38 PorcoAranha joined #gluster
00:41 LessSeen_ joined #gluster
00:44 LessSee__ joined #gluster
00:45 theron joined #gluster
00:50 pdrakeweb joined #gluster
00:52 kshlm joined #gluster
00:53 itisravi_ joined #gluster
01:01 cyberbootje joined #gluster
01:02 LessSeen_ joined #gluster
01:06 _Bryan_ joined #gluster
01:29 LessSeen_ joined #gluster
01:49 Peter3 joined #gluster
01:54 bharata-rao joined #gluster
02:04 haomai___ joined #gluster
02:29 theron joined #gluster
02:42 theron_ joined #gluster
02:59 gildub joined #gluster
03:11 bala joined #gluster
03:14 cjanbanan joined #gluster
03:18 kshlm joined #gluster
03:27 harish joined #gluster
03:31 saurabh joined #gluster
03:38 fubada joined #gluster
03:40 bala joined #gluster
03:43 atinmu joined #gluster
03:50 shubhendu joined #gluster
03:51 ramteid joined #gluster
03:57 nbalachandran joined #gluster
03:57 rjoseph joined #gluster
04:00 itisravi joined #gluster
04:11 atalur joined #gluster
04:14 cjanbanan joined #gluster
04:16 kumar joined #gluster
04:26 nishanth joined #gluster
04:28 glusterbot New news from resolvedglusterbugs: [Bug 764655] NetBSD port <https://bugzilla.redhat.com/show_bug.cgi?id=764655>
04:32 Rafi_kc joined #gluster
04:32 anoopcs joined #gluster
04:32 ndarshan joined #gluster
04:33 RameshN joined #gluster
04:33 dtrainor joined #gluster
04:35 kanagaraj joined #gluster
04:43 kdhananjay joined #gluster
04:51 uebera|| joined #gluster
04:53 ppai joined #gluster
04:53 hagarth joined #gluster
04:55 raghu joined #gluster
04:59 JoeJulian @later tell StarBeast Since the trusted.afr numbers are equal and then go quickly away, it's fairly safe to assume there's nothing wrong. They're marked as pending, each brick acks its write, and the pending flag is cleared for that brick. Since both bricks are acking, both flags are clearing.
04:59 glusterbot JoeJulian: The operation succeeded.
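[To watch the behaviour JoeJulian describes, you can dump the AFR changelog attributes on a brick's copy of a busy file and see the pending counters clear. A minimal sketch; the brick path, volume name (myvol) and file name are placeholders, and the getfattr invocation is the one glusterbot's extended-attributes factoid gives later in this log:

    # run on a server against the brick path, not the client mount
    getfattr -m . -d -e hex /data/brick1/myvol/some/busy/file
    # trusted.afr.myvol-client-0 / trusted.afr.myvol-client-1 show non-zero
    # (pending) values mid-write and drop back to all zeros once every
    # brick has acked the write, matching the flag-clearing described above]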
05:03 karnan joined #gluster
05:06 Eric_HOU joined #gluster
05:06 Eric_HOU left #gluster
05:07 vpshastry joined #gluster
05:09 hchiramm joined #gluster
05:12 psharma joined #gluster
05:13 aravindavk joined #gluster
05:17 prasanth_ joined #gluster
05:18 RameshN joined #gluster
05:18 spandit joined #gluster
05:21 deepakcs joined #gluster
05:21 meghanam joined #gluster
05:21 atalur joined #gluster
05:28 rejy joined #gluster
05:32 dusmant joined #gluster
05:38 hagarth joined #gluster
05:40 karnan joined #gluster
05:44 sahina joined #gluster
05:45 cjanbanan joined #gluster
05:47 anoopcs joined #gluster
05:52 StarBeast joined #gluster
06:02 saurabh joined #gluster
06:02 harish_ joined #gluster
06:07 sahina joined #gluster
06:09 prasanth_ joined #gluster
06:10 shubhendu joined #gluster
06:11 lalatenduM joined #gluster
06:12 kshlm joined #gluster
06:28 anoopcs joined #gluster
06:29 coredump|br joined #gluster
06:30 T0aD- joined #gluster
06:30 vimal joined #gluster
06:30 overclk_ joined #gluster
06:31 systemonkey2 joined #gluster
06:31 natgeorg joined #gluster
06:33 bala joined #gluster
06:33 vpshastry1 joined #gluster
06:33 torbjorn1_ joined #gluster
06:33 cfeller_ joined #gluster
06:34 oxidane_ joined #gluster
06:34 stigchristian joined #gluster
06:34 muhh_ joined #gluster
06:34 lanning_ joined #gluster
06:34 the-me joined #gluster
06:34 fleducquede_ joined #gluster
06:34 ThatGraemeGuy joined #gluster
06:34 elico joined #gluster
06:35 cmtime joined #gluster
06:35 Guest95916 joined #gluster
06:35 marmalodak joined #gluster
06:35 psharma joined #gluster
06:35 balacafalata joined #gluster
06:35 irated joined #gluster
06:35 irated joined #gluster
06:35 fubada joined #gluster
06:35 ndarshan joined #gluster
06:35 kdhananjay joined #gluster
06:35 sahina joined #gluster
06:37 rejy joined #gluster
06:43 Paul-C joined #gluster
06:49 cjanbanan joined #gluster
06:51 ekuric joined #gluster
06:56 anoopcs joined #gluster
06:57 bala joined #gluster
06:57 shubhendu joined #gluster
07:00 anoopcs joined #gluster
07:01 deepakcs joined #gluster
07:04 LebedevRI joined #gluster
07:04 eseyman joined #gluster
07:04 RameshN joined #gluster
07:06 keytab joined #gluster
07:06 ctria joined #gluster
07:17 shubhendu joined #gluster
07:20 aravindavk joined #gluster
07:21 hagarth joined #gluster
07:21 rjoseph joined #gluster
07:29 mbukatov joined #gluster
07:37 monotek joined #gluster
07:38 Paul-C joined #gluster
07:43 fsimonce joined #gluster
07:44 ricky-ti1 joined #gluster
07:50 Pupeno joined #gluster
08:00 mbukatov joined #gluster
08:06 haomaiwa_ joined #gluster
08:06 liquidat joined #gluster
08:07 monotek joined #gluster
08:08 monotek left #gluster
08:11 rjoseph joined #gluster
08:18 glusterbot New news from newglusterbugs: [Bug 1120570] glustershd high memory usage on FreeBSD <https://bugzilla.redhat.com/show_bug.cgi?id=1120570>
08:19 haomaiw__ joined #gluster
08:19 Humble joined #gluster
08:20 aravindavk joined #gluster
08:25 hagarth joined #gluster
08:30 Pupeno_ joined #gluster
08:35 ppai joined #gluster
08:38 ghenry joined #gluster
08:38 ghenry joined #gluster
08:45 cjanbanan joined #gluster
08:45 haomaiwa_ joined #gluster
08:46 Pupeno joined #gluster
08:47 aravindavk joined #gluster
08:48 shubhendu joined #gluster
08:50 ndarshan joined #gluster
08:54 XpineX joined #gluster
08:56 ekuric joined #gluster
08:59 sahina joined #gluster
09:09 Pupeno_ joined #gluster
09:15 cjanbanan joined #gluster
09:19 aravindavk joined #gluster
09:19 karnan joined #gluster
09:22 cultavix joined #gluster
09:22 lyang0 joined #gluster
09:23 anoopcs joined #gluster
09:29 purpleidea JoeJulian: oh damn... well glad to know it's a pebkac problem at least ;)
09:33 Pupeno joined #gluster
09:34 ndarshan joined #gluster
09:35 shubhendu joined #gluster
09:38 sahina joined #gluster
09:40 RameshN joined #gluster
09:41 cjanbanan joined #gluster
09:48 itisravi_ joined #gluster
09:48 itisravi_ joined #gluster
09:49 richvdh joined #gluster
09:53 ira joined #gluster
09:59 gEEbusT hybrid512: you could try mount with "-o remount" but im not sure if it would work for the direct IO option
10:04 RameshN joined #gluster
10:09 haomaiwa_ joined #gluster
10:11 haomai___ joined #gluster
10:15 cjanbanan joined #gluster
10:15 ppai joined #gluster
10:15 Pupeno_ joined #gluster
10:21 ekuric1 joined #gluster
10:30 spandit joined #gluster
10:31 Pupeno joined #gluster
10:34 richvdh joined #gluster
10:39 itisravi joined #gluster
10:41 Pupeno joined #gluster
10:47 cyberbootje joined #gluster
10:49 glusterbot New news from newglusterbugs: [Bug 1120646] rfc.sh transfers patches with whitespace problems without warning <https://bugzilla.redhat.com/show_bug.cgi?id=1120646>
10:53 shubhendu joined #gluster
10:53 hagarth joined #gluster
10:53 bala joined #gluster
10:55 karnan joined #gluster
10:56 sahina joined #gluster
11:00 cyberbootje joined #gluster
11:01 ppai joined #gluster
11:04 prasanth_ joined #gluster
11:05 edward1 joined #gluster
11:09 cjanbanan joined #gluster
11:14 Pupeno I'm trying to mount a local volume at boot and I'm getting this error: 0-glusterfsd-mgmt: failed to connect with remote-host: koraga.example.com (No data available). Any ideas what might be going on?
11:14 spandit joined #gluster
11:15 julim joined #gluster
11:24 diegows joined #gluster
11:33 harish_ joined #gluster
11:36 cjanbanan joined #gluster
11:51 spandit joined #gluster
11:57 shubhendu joined #gluster
11:58 ira_ joined #gluster
11:58 marbu joined #gluster
12:12 itisravi_ joined #gluster
12:23 cultavix joined #gluster
12:30 giannello joined #gluster
12:35 MattAtL joined #gluster
12:35 ollybee joined #gluster
12:36 MattAtL Hi, I'm having trouble with iptables on a two-node, replica 2 setup, where each machine is a gluster server and client
12:37 MattAtL It all works fine with no iptables.  I've set them now according to docs on the interwebs, and with some tweaks am getting close
12:37 MattAtL setting iptables on node2, and node1 says the peer is in cluster (Connected)
12:38 MattAtL However, on node2, I can't access the mounted gluster fs (on the same machine) as a client
12:38 spandit joined #gluster
12:38 MattAtL ie something's stopping it from accessing itself. And I can't work out what
12:38 MattAtL None of the logs (iptables or gluster) are giving me any clues...
12:39 MattAtL I've got INPUT and OUTPUT ACCEPT on interface lo
12:39 MattAtL and the gluster server is mounted as localhost:/gluster_vol_name
12:40 MattAtL Does anyone have any ideas what might be going on? As I say, according to the other node it's online and connected
12:43 theron joined #gluster
12:44 theron joined #gluster
12:45 cjanbanan joined #gluster
12:48 nbalachandran joined #gluster
12:52 mbukatov joined #gluster
12:54 dockbram joined #gluster
12:54 MattAtL hmm, seems it may have needed a FORWARD rule on the lo interface
12:55 MattAtL Seems to be working now, anyway
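[For reference, a hedged sketch of iptables rules for a two-node setup like this; it assumes gluster 3.4+ port defaults (24007 for glusterd, brick ports allocated from 49152) and <peer-ip> is a placeholder:

    # loopback, so the local fuse client can reach the local daemons
    iptables -A INPUT -i lo -j ACCEPT
    iptables -A OUTPUT -o lo -j ACCEPT
    # glusterd management port
    iptables -A INPUT -p tcp -s <peer-ip> --dport 24007 -j ACCEPT
    # brick ports, one per brick (24009 and up on releases before 3.4)
    iptables -A INPUT -p tcp -s <peer-ip> --dport 49152:49160 -j ACCEPT]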
12:56 kanagaraj joined #gluster
12:57 vu joined #gluster
12:57 pdrakeweb joined #gluster
13:02 rwheeler joined #gluster
13:03 hagarth joined #gluster
13:05 vu joined #gluster
13:06 vu_ joined #gluster
13:08 ctria joined #gluster
13:10 vu joined #gluster
13:12 vu joined #gluster
13:13 StarBeast joined #gluster
13:15 StarBeas_ joined #gluster
13:22 hchiramm_ joined #gluster
13:23 B21956 joined #gluster
13:29 firemanxbr joined #gluster
13:31 ppai joined #gluster
13:33 clutchk joined #gluster
13:34 plarsen joined #gluster
13:37 Pupeno Which one do you think is GlusterFS biggest competitor?
13:41 bene2 joined #gluster
13:41 T0aD joined #gluster
13:43 wjp joined #gluster
13:44 keytab joined #gluster
13:47 bala joined #gluster
13:48 sas_ joined #gluster
13:49 ollybee joined #gluster
13:51 Philambdo joined #gluster
13:54 Pupeno joined #gluster
13:55 nueces joined #gluster
13:59 stickyboy Pupeno: Lustre
13:59 Pupeno_ joined #gluster
14:00 sickness well the only one to have erasure codes and encryption as of now is tahoe-lafs ;)
14:00 sickness I think hadoop had hdfs+ / hdfs-raid but there isn't stable code out there, maybe facebook has that code...
14:01 wjp Hi, I'm running into unexpected behaviour with symlinks on a glusterfs volume (3.5.1 on Arch Linux), where sometimes accessing a symlink on the volume results in ENOENT (No such file or directory).
14:01 wjp I have some (shell and C) reproduction recipes here: http://www.usecode.org/misc/glusterfs_symlink.txt . Is this indeed unexpected, and if so, what would be a good next step to investigate this?
14:02 stickyboy wjp: Wow, good work.  Probably to file a bug actually.
14:02 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
14:03 stickyboy I'd send a report to the mailing list, someone will bite.
14:08 Pupeno joined #gluster
14:08 japuzzo joined #gluster
14:09 wjp thanks, will start with a ML post then
14:11 _Bryan_ joined #gluster
14:12 ctria joined #gluster
14:15 semiosis Pupeno: i got your email.  please pastie.org your client log file showing the failed mount attempt during boot.
14:15 wushudoin joined #gluster
14:15 chirino joined #gluster
14:16 semiosis purpleidea: never mind the log, i see you included it in the SO post, http://serverfault.com/questions/611462/glusterfs-failing-to-mount-at-boot-with-ubuntu-14-04
14:16 glusterbot Title: GlusterFS failing to mount at boot with Ubuntu 14.04 - Server Fault (at serverfault.com)
14:16 StarBeast joined #gluster
14:16 semiosis Pupeno: ping me when you're around so we can talk about it
14:17 mortuar joined #gluster
14:19 ira joined #gluster
14:19 anoopcs joined #gluster
14:21 * Pupeno is back.
14:21 Pupeno Hello semiosis.
14:22 semiosis hi
14:22 glusterbot semiosis: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
14:22 semiosis hahaha
14:22 Pupeno On top of everything, my internet connection is unstable, so, please, bear with me if I suddenly ping-timeout.
14:22 semiosis Pupeno: got it
14:23 semiosis Pupeno: is 192.168.134.227 the local machine?
14:23 Pupeno Yes, it is.
14:23 semiosis ah, so the mount is tried before the local glusterfs-server has started
14:24 Pupeno semiosis: I tried modifying mounting-glusterfs.conf to wait for glusterfs-server, but it didn't work. Furthermore, on my other machine, it just works.
14:24 semiosis a couple notes... _netdev doesnt do anything on ubuntu, so you might as well remove that from your fstab
14:24 semiosis second, you should use 'nobootwait' in fstab, in case there's a problem with the mount, the boot can continue (this is why it hung on boot)
14:25 Pupeno semiosis: if I remove _netdev, ubuntu tries to mount it just after mounting root and hangs in there forever.
14:25 semiosis hmm interesting
14:25 Pupeno Ok, you asked for some logs.
14:26 semiosis i'll have to double check _netdev in trusty.
14:26 semiosis but you really should use nobootwait
14:26 Pupeno Yeah, I'll do that.
14:26 semiosis that wont fix the problem but will prevent hung boot
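[The fstab line being discussed would look roughly like this; volume name and mount point are placeholders, and nobootwait is the Ubuntu mountall option that lets the boot proceed if the mount fails:

    # /etc/fstab
    localhost:/gvol  /var/www/shared  glusterfs  defaults,nobootwait  0  0]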
14:28 Pupeno semiosis: https://gist.github.com/pupeno/db7c6e7d216974dca1db
14:28 glusterbot Title: var-www-shared-public-uploads.log (at gist.github.com)
14:28 semiosis yeah i got that from your stack exchange post after i asked for it
14:29 kshlm joined #gluster
14:30 Pupeno Ok... I added nobootwait and now everything is working.
14:31 Pupeno It mounted the volumes at boot.
14:31 semiosis well, that could just be a coincidence.  try rebooting a few times (5?) and see if it works every time
14:32 sputnik1_ joined #gluster
14:32 semiosis i cant see how that would fix it
14:33 theron joined #gluster
14:34 Pupeno Sure.
14:36 Pupeno second reboot, it worked.
14:36 theron joined #gluster
14:36 plarsen joined #gluster
14:36 Pupeno third boot, it did not work.
14:37 harish_ joined #gluster
14:39 bala joined #gluster
14:39 Pupeno After a little while, it mounted them.
14:39 semiosis hmm that's interesting
14:40 cyberbootje joined #gluster
14:40 Pupeno Well, upstart is supposed to be asynchronous.
14:41 ctria joined #gluster
14:41 semiosis yes, thats why it works sometimes.  but i wonder why after failing it would then mount after a little while
14:42 eryc joined #gluster
14:42 richvdh joined #gluster
14:44 semiosis Pupeno: try modifying /etc/init/glusterfs-server.conf to start on (runlevel [2345] or mounting TYPE=glusterfs)
14:44 semiosis Pupeno: i've never tried that specific condition, but i think it should work
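[Spelled out, the proposed one-line change to the server job:

    # /etc/init/glusterfs-server.conf (excerpt)
    start on (runlevel [2345] or mounting TYPE=glusterfs)]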
14:45 Pupeno Fourth boot behaved the same.
14:45 cjanbanan joined #gluster
14:45 semiosis bah, serverfault wont let me comment :(
14:46 semiosis i hope i win that bounty, i need the points!
14:46 dockbram joined #gluster
14:48 Pupeno Trying that.
14:48 Pupeno semiosis: if someone can get those points, thats you ;)
14:49 semiosis if that new condition works for you i'll add it to the packages
14:50 * Pupeno is waiting for reboot.
14:50 semiosis again, try several to be sure
14:50 Pupeno No, it's not mounted.
14:50 semiosis did glusterfs-server start?
14:50 Pupeno After a little while, they got mounted, like before.
14:51 Pupeno Rebooting again.
14:51 semiosis well, is this delay acceptable?
14:51 Pupeno I'll check the status of glusterfs-server as soon as I can ssh in.
14:51 semiosis (i wish i could explain it)
14:51 Pupeno Yes, glusterfs-server seems to be running.
14:52 semiosis in all my testing the static-network-up event always happens after glusterfs-server has started.  i test on virtualbox vms and ec2 instances.
14:52 DV__ joined #gluster
14:52 semiosis and there's always a delay between when i can ssh in and when the mount happens, that's normal
14:52 Pupeno The delay is certainly acceptable, but since it's of unpredictable length (as far as I know), I need to make sure my own software starts after mounting, as it depends on those volumes being mounted.
14:53 semiosis during that time you can see the wait-for-state process holding the mount, if you do ps ax | grep gluster
14:53 Pupeno I have an open serial console to that computer, and I can essentially ssh in more or less when the login prompt appears, which would signal it finished booting (kinda... hard to know with the asynchronous upstart).
14:53 semiosis however i dont get any error in the log, since there's only one mount attempt, which succeeds, though after the delay
14:54 coredump joined #gluster
14:54 semiosis i've found you can often login before the boot has finished
14:54 Pupeno Yeah.
14:54 semiosis some services (tomcat, for one) can take a while to start up, but i can ssh in before they finish
14:55 haomaiwang joined #gluster
14:55 semiosis you could upstartify your software and have it start on the mounted event :)
14:55 SpComb joined #gluster
14:56 Pupeno semiosis: I rebooted and indeed I could see the wait-for-state.
14:56 semiosis thats what i do for stuff that has to start after the mount
14:56 Pupeno My software is already upstartified :) it's just a matter of adding the right dependencies.
14:56 semiosis mounted MOUNTPOINT=/the/path
14:57 semiosis should do it
14:57 liquidat_ joined #gluster
14:57 Pupeno Is it possible that _netdev is actually having an effect?
14:57 semiosis sure it's possible
14:57 chirino_m joined #gluster
14:58 Pupeno Well, I'm happy now :) Thanks semiosis! Feel free to post the answer to serverfault, I'll accept it now.
14:58 semiosis woo!
14:58 oxidane joined #gluster
14:58 stigchri1tian joined #gluster
14:59 deepakcs_ joined #gluster
14:59 semiosis now i wonder if changing the glusterfs-server.conf helped
14:59 Pupeno semiosis: did you see the other answer to the question? what do you think about WAIT_FOR=networking instead of static-network-up?
15:00 semiosis yes i will comment on that once i have points
15:00 semiosis WAIT_FOR needs an event, but networking is a service
15:00 dockbram_ joined #gluster
15:00 Pupeno Shall I re-install the packages configs so we are sure my init files are pristine and try again?
15:00 semiosis static-network-up is an event.  'started networking' is an event
15:00 semiosis yes that would be great
15:01 semiosis see 'man upstart-events' for a discussion of events in the boot sequence
15:01 Pupeno :/etc/init# rm *gluster*
15:01 Pupeno just to be sure.
15:01 semiosis nice
15:02 Pupeno If I uninstall and purge gluster, will I lose the bricks?
15:02 Pupeno bricks/volumes
15:02 recidive joined #gluster
15:03 ollybee joined #gluster
15:03 Pupeno aptitude purge glusterfs-client glusterfs-common glusterfs-server
15:03 Pupeno It's a test server anyway.
15:04 the-me joined #gluster
15:06 marmalodak joined #gluster
15:06 lyang0 joined #gluster
15:06 fubada joined #gluster
15:06 Pupeno semiosis: pristine installation, just the change to fstab, and it works.
15:06 semiosis wow
15:06 fleducquede_ joined #gluster
15:06 semiosis ok
15:07 MattAtL joined #gluster
15:07 semiosis do you still get the connection refused in the log?
15:07 dockbram joined #gluster
15:07 semiosis and can you try a few more reboots just to be sure?
15:07 hagarth joined #gluster
15:07 bene2 joined #gluster
15:07 bene2 joined #gluster
15:07 Pupeno Checking.
15:07 elico joined #gluster
15:08 semiosis ah hm.
15:08 ekuric joined #gluster
15:09 semiosis so my understanding of wait-for-state was all wrong.  it *does* need a service, not an event, just as the commenter said. so since i'm waiting on a non-existent service, it waits for the default of 30 seconds.
15:09 semiosis woops
15:09 Pupeno No errors... not even warnings in the logs.
15:09 Pupeno Ah... that's the delay then.
15:10 semiosis yep
15:10 Pupeno Would you like me to try changing that?
15:10 semiosis sure!
15:11 Pupeno Trying this:
15:11 Pupeno exec start wait-for-state WAIT_FOR=networking WAITER=mounting-glusterfs-$MOUNTPOINT
15:12 Pupeno Now I got the error.
15:13 Pupeno So, the 30 seconds are allowing glusterfs-server to start.
15:13 semiosis the name wait-for-state is deceiving.  that suggests an event to me.  and besides, on a running system 'status networking' says stopped
15:13 semiosis so what is that even?
15:13 semiosis no one understands upstart
15:14 Pupeno I'm trying start on (runlevel [2345] or mounting TYPE=glusterfs) on glusterfs-server.conf
15:14 anoopcs joined #gluster
15:14 semiosis that still might not work, in which case you should add a post-start script stanza with sleep 5 :/
15:15 semiosis because glusterfs-server daemonizes (a very short time) before it is ready for connections
15:15 cjanbanan joined #gluster
15:15 Pupeno I actually have two volumes... one was mounted the other was not :S
15:15 theron joined #gluster
15:16 semiosis yep
15:17 Pupeno What should the post-start stanza look like? I'm sorry, I'm new to upstart
15:17 semiosis http://upstart.ubuntu.com/cookbook/#post-start
15:17 StarBeast joined #gluster
15:17 glusterbot Title: Upstart Intro, Cookbook and Best Practises (at upstart.ubuntu.com)
15:17 semiosis post-start script
15:17 semiosis sleep 5
15:17 semiosis end script
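[Assembled, the stanza being dictated reads:

    # appended to /etc/init/glusterfs-server.conf
    post-start script
        sleep 5
    end script]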
15:18 Pupeno Still only one volume got mounted. Weird. brb
15:19 semiosis k
15:21 Pupeno Oh... the other volume got mounted.
15:22 semiosis errors in the logs?
15:23 keytab joined #gluster
15:23 Pupeno One volume gets mounted really quickly, the other one seems to take a while, but I don't see the wait-for-state processes.
15:24 Pupeno Rebooting for logs now.
15:24 anoopcs1 joined #gluster
15:27 Pupeno semiosis: for one of the volumes I got errors and warnings (different ones), the other is clean. Maybe 5 seconds is not enough, the first mount fails, takes a while, gets retried, it works and then the second just works.
15:27 * Pupeno tries again with 15 seconds.
15:28 semiosis do you still have _netdev?  it could be causing the mounts to be retried
15:28 semiosis iirc
15:29 Pupeno No, I have nobootwait
15:29 semiosis ok
15:29 Pupeno 15 seconds seemed to work, trying again.
15:29 tdasilva joined #gluster
15:29 semiosis odd
15:31 Pupeno And now, no errors or warnings on the logs.
15:33 semiosis does wait-for-state still take 30s?
15:35 ndk joined #gluster
15:35 semiosis also can you check the glusterfs brick logs in /var/log/glusterfs/bricks to see if they match the <15 second delay?
15:35 anoopcs joined #gluster
15:35 Pupeno Just a sec... something urgent came up :S
15:36 anoopcs joined #gluster
15:40 jiffin joined #gluster
15:41 lmickh joined #gluster
15:41 dockbram_ left #gluster
15:41 dockbram_ joined #gluster
15:43 vu joined #gluster
15:49 Pupeno semiosis: it seems the mount happened 9 seconds after gluster was ready, so, 5 seconds was almost enough... 6 or 7 should do it.
15:53 Pupeno semiosis: does this look good to you? for my own software: start on filesystem and runlevel [2345] and mounted MOUNTPOINT=/var/www/shared/public/uploads and mounted MOUNTPOINT=/var/www/shared/private/uploads
15:53 ctria joined #gluster
15:54 diegows joined #gluster
15:55 dockbram joined #gluster
15:57 bram_ joined #gluster
16:04 clutchk joined #gluster
16:11 semiosis Pupeno: runlevel [2345] seems redundant with all that other stuff
16:11 semiosis Pupeno: but what really counts is does it work
16:11 Pupeno It seems to wokr.
16:11 Pupeno work.
16:12 sputnik1_ joined #gluster
16:16 Pupeno semiosis: Are you going to make changes to the packages based on this? can I help?
16:17 semiosis yes i'm going to update, going to do some tests of my own first though
16:18 StarBeast joined #gluster
16:19 Pupeno Let me know if you want me to test things. I have both a VirtualBox and a Linode Ubuntu 14.04.
16:19 semiosis thanks i will
16:19 Pupeno Do you have my email address?
16:24 semiosis yes
16:27 Pupeno semiosis: ok, feel free to ping there or here. And thanks for the help, I really appreciate it.
16:27 semiosis great, thanks!
16:34 theron joined #gluster
16:36 theron joined #gluster
16:37 dockbram joined #gluster
16:47 Pupeno semiosis: will you post the answer to http://serverfault.com/questions/611462/glusterfs-failing-to-mount-at-boot-with-ubuntu-14-04 ?
16:47 glusterbot Title: GlusterFS failing to mount at boot with Ubuntu 14.04 - Server Fault (at serverfault.com)
16:48 semiosis yes i will soon but real busy at the moment & want to take time to write it up completely
16:48 semiosis when i take a lunch break in a little bit
16:48 Pupeno semiosis: I understand, no problem.
16:49 cjanbanan joined #gluster
16:50 stickyboy Mounting at boot is a serious problem. :)
16:50 stickyboy Many years ... many distros.
16:50 anoopcs joined #gluster
16:51 semiosis tell me about it!
17:04 vu joined #gluster
17:06 vu joined #gluster
17:14 JoeJulian Heh, guess I get to change ,,(Joe's performance metric).
17:14 glusterbot nobody complains.
17:14 JoeJulian "If you could start from scratch with 2 racks of gear, how would you design gluster for peak performance and what is the "theoretical" performance we could expect?"
17:15 ThomasG_ joined #gluster
17:19 StarBeast joined #gluster
17:19 doo joined #gluster
17:22 semiosis JoeJulian: great interview question!
17:23 JoeJulian Would be if it could be sent home as homework.
17:24 JoeJulian Requires a number of assumptions, too.
17:27 nbalachandran joined #gluster
17:37 Peter3 joined #gluster
17:39 JoeJulian Well, at least I'll have my next presentation for when I'm asked.
17:42 rotbeard joined #gluster
17:42 R0ok_ joined #gluster
17:45 cjanbanan joined #gluster
17:46 bit4man joined #gluster
17:49 LessSeen_ joined #gluster
17:53 semiosis JoeJulian: be a youtube star!
17:54 JoeJulian Heh, that would make johnmark's day wouldn't it.
17:55 y4m4 joined #gluster
18:02 semiosis i'd like to see it.  heck i'd even participate to throw you some softball questions
18:03 bala joined #gluster
18:06 jobewan joined #gluster
18:07 calum_ joined #gluster
18:08 StarBeast joined #gluster
18:10 R0ok_ left #gluster
18:10 james_ joined #gluster
18:14 R0ok_ joined #gluster
18:15 cjanbanan joined #gluster
18:19 iheartwhiskey joined #gluster
18:20 Pupeno joined #gluster
18:21 iheartwhiskey hi i have a problem with my gluster installation. i am running volume heal and it doesn't seem to be finishing healing.  the number of entries keeps bouncing around between my servers. has anyone seen anything like that before?
18:24 JoeJulian iheartwhiskey:  It's probably just an artifact of having busy files. As a file is modified, extended attributes are set showing the pending write. When the write to each replica is completed, the pending flag is decremented for that brick, normally returning the pending count to 0. If a brick goes offline, the write is never acknowledged so the pending flag is not decremented. That flag shows GlusterFS that there's a heal needed.
18:25 JoeJulian If you happen to query the ,,(extended attributes) during that write, one or more will show pending status. "gluster volume heal $vol info" will report that flag as needing healed.
18:25 glusterbot (#1) To read the extended attributes on the server: getfattr -m .  -d -e hex {filename}, or (#2) For more information on how GlusterFS uses extended attributes, see this article: http://hekafs.org/index.php/2011/04/glusterfs-extended-attributes/
18:25 JoeJulian In theory, that false positive is fixed in 3.5.1.
18:28 iheartwhiskey ok, so i ran volume info split-brain and everything is zero, but i have several entries for just one of my nodes as needing to be healed. volume heal nova_hdd info heal-failed shows failures on one of my nodes, but none of these files match what is being bounced around when I run heal info
18:28 iheartwhiskey could this be the false positive issue? is there a way i can recover from this state?
18:29 R0ok_ joined #gluster
18:33 JoeJulian It sounds like it is. If you followed my explanation you can see that there is no "recovery" necessary. You can double check by looking at the extended attributes as referenced in the above article.
18:33 JoeJulian bbiab
18:37 R0ok_ joined #gluster
18:38 chirino joined #gluster
18:38 iheartwhiskey awesome, thanks Joe.
18:38 _dist joined #gluster
18:39 vu joined #gluster
18:44 Matthaeus joined #gluster
18:45 cjanbanan joined #gluster
18:51 glusterbot New news from newglusterbugs: [Bug 1120815] df reports incorrect space available and used. <https://bugzilla.redhat.com/show_bug.cgi?id=1120815>
18:52 LessSeen_ joined #gluster
18:53 MacWinner joined #gluster
18:55 MattAtL left #gluster
19:02 Peter3 what do these messages or locks mean?
19:02 Peter3 http://pastie.org/9400753
19:02 glusterbot Title: #9400753 - Pastie (at pastie.org)
19:04 chirino joined #gluster
19:04 n0de_ Hey guys, I have a ton of content on my Gluster storage which is no longer needed. I need to rm -Rf all of it, doing this from the Gluster fuse mount will take a long time I am sure. Any harm in removing the data directly from the bricks?
19:04 n0de_ (I have no care about the content)
19:05 semiosis n0de_: you should remove it through the client mount
19:06 n0de_ This I know, but I was wondering what will happen if I just remove from the bricks directly. I know this is not the way Gluster is designed to work, just curious on the background mechanics
19:07 Pupeno joined #gluster
19:07 n0de_ Gluster will see that the files are missing on the bricks, but will the links still remain to those files within .glusterfs/ dir?
19:08 semiosis only way to know is to try
19:09 jonar1 joined #gluster
19:12 penglish joined #gluster
19:13 penglish Hiya folks, one of my users is reporting strange permissions problems. He says he'll find he has ownership but not permissions on files/directory trees, fix them, and then they'll revert
19:13 penglish Could this be a symptom of split brain?
19:14 semiosis split brain would be explicitly reported in the client log file
19:14 semiosis istr something like this recently, let me check the logs
19:15 semiosis penglish: https://botbot.me/freenode/gluster/2014-06-30/?msg=17218288&page=1 -- see discussion there
19:15 glusterbot Title: Logs for #gluster | BotBot.me [o__o] (at botbot.me)
19:16 n0de_ semiosis: Right, I figured split-brain would show up in the log, but I can squash that.
19:17 penglish semiosis: ty, reading now
19:17 semiosis penglish: actually it's not as helpful as i thought it would be
19:17 n0de_ We shall see I guess, I fear I will use up a massive amount of resources removing the huge amount of data via the gluster mount
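[For the record, a hedged sketch of both routes; volume and mount names are placeholders. Removing files on the bricks directly strands their gfid hardlinks under .glusterfs, which is the mechanics n0de_ is asking about:

    # option 1: remove through the client mount (supported, but slow)
    rm -rf /mnt/gvol/old-content
    # option 2: if none of the data is needed, recreate the volume
    gluster volume stop gvol
    gluster volume delete gvol
    # then wipe each brick directory, including its .glusterfs subdir,
    # before recreating the volume]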
19:17 theron joined #gluster
19:18 semiosis penglish: permissions set through the client mount point should stick.  if you set perms on the backend brick filesystem then gluster will probably "heal" those back to what the permissions are on the replica
19:19 penglish In this case, technically the person is working through a "client mount point" - it just happens to be mounted (with FUSE) on one of the brick hosts
19:20 semiosis penglish: i'd double/triple check to make sure they dont have the brick & client mount paths confused
19:20 semiosis because that would explain things nicely :)
19:20 penglish good point
19:23 penglish "now" at least, he does not have any file handles open under the brick directories
19:23 penglish But he does under the fuse mount. :-)
19:23 penglish So that is reassuring
19:25 penglish Okay.. that is a little strange
19:25 penglish peer status shows everything okay
19:26 penglish but volume status shows one of the bricks as being N/A
19:26 penglish That…. could be problematic
19:28 vu joined #gluster
19:29 penglish i/o error on the brick's file system. That would do it
19:32 LessSeen_ joined #gluster
19:33 semiosis interesting
19:35 penglish xfs i/o errors… xfs_repair -L required
19:35 penglish *yey*
19:35 penglish fortunately these are all mirrored, so full sanity may be possible
19:44 cjanbanan joined #gluster
19:44 chirino joined #gluster
19:49 calum_ joined #gluster
19:52 Peter3 What do these messages mean? E [server-rpc-fops.c:1076:server_unlink_cbk] ?
19:53 dberry joined #gluster
19:53 Peter3 http://pastie.org/9400850
19:53 glusterbot Title: #9400850 - Pastie (at pastie.org)
20:02 dberry is it possible to mount from 2 different replica servers
20:03 dberry can I have machineA mount gfs1.example.com  and machineB mount gfs2.example.com if they are replicas?
20:03 jonar1 yes
20:08 kshlm joined #gluster
20:10 LessSee__ joined #gluster
20:13 theron joined #gluster
20:15 cjanbanan joined #gluster
20:22 Pupeno joined #gluster
20:37 zerick joined #gluster
20:41 doo joined #gluster
20:46 kshlm joined #gluster
20:48 calum_ joined #gluster
20:49 semiosis dberry: ,,(mount server)
20:49 glusterbot dberry: The server specified is only used to retrieve the client volume definition. Once connected, the client connects to all the servers in the volume. See also @rrdns
20:49 semiosis dberry: ,,(rrdns)
20:49 glusterbot dberry: You can use rrdns to allow failover for mounting your volume. See Joe's tutorial: http://goo.gl/ktI6p
20:49 dberry thanks
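[In practice that means any server name (or a round-robin DNS record) works in fstab, and the fuse mount script of this era also accepts a fallback volfile server; the hostnames below are placeholders and backupvolfile-server is the option name as best I recall it:

    # gfs.example.com resolves round-robin to gfs1 and gfs2
    mount -t glusterfs gfs.example.com:/gvol /mnt/gvol
    # or name an explicit fallback for fetching the volume definition
    mount -t glusterfs -o backupvolfile-server=gfs2.example.com \
        gfs1.example.com:/gvol /mnt/gvol]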
20:51 andreask joined #gluster
20:54 chirino joined #gluster
21:09 JoeJulian What kind of data do you get from "gluster volume profile"?
21:11 JoeJulian Peter3: Hard to say since that's the client log. Look for corresponding errors in the brick log.
21:14 JoeJulian answered my own question: http://gluster.org/community/documentation/index.php/Gluster_3.2:_Displaying_the_I/0_Information
21:14 glusterbot Title: Gluster 3.2: Displaying the I/0 Information - GlusterDocumentation (at gluster.org)
21:14 JoeJulian I wonder what the latency is measured in.
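[For anyone following along, the profiling commands themselves (volume name is a placeholder; the info output reports FOP counts plus avg/min/max latency, in microseconds if memory serves):

    gluster volume profile gvol start   # begin collecting per-brick stats
    gluster volume profile gvol info    # dump cumulative and interval stats
    gluster volume profile gvol stop]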
21:17 theron joined #gluster
21:17 _dist is there any way to monitor iops per file or connection with gluster? I can't do it by pid because running VMs, for example, do their writes through gluster
21:17 cicero kilowaits
21:23 JoeJulian @later tell _dist This page suggests you can monitor the iops per file using systemtap: http://sourceware.org/systemtap/​examples/keyword-index.html#FILE
21:23 glusterbot JoeJulian: The operation succeeded.
21:34 penglish1 joined #gluster
21:37 hagarth joined #gluster
21:38 JoeJulian Hello hagarth
21:40 penglish joined #gluster
22:01 nueces joined #gluster
22:07 stickyboy joined #gluster
22:07 stickyboy joined #gluster
22:13 sputnik1_ joined #gluster
22:15 cjanbanan joined #gluster
22:18 Pupeno joined #gluster
22:23 hagarth joined #gluster
22:26 Pupeno joined #gluster
22:39 zerick joined #gluster
22:50 eshy joined #gluster
22:51 ThomasG_ joined #gluster
22:53 StarBeast joined #gluster
22:53 bit4man joined #gluster
22:53 dockbram joined #gluster
22:53 lmickh joined #gluster
22:53 fleducquede_ joined #gluster
22:53 the-me joined #gluster
22:53 harish_ joined #gluster
22:53 Philambdo joined #gluster
22:53 mbukatov joined #gluster
22:53 muhh_ joined #gluster
22:53 overclk_ joined #gluster
22:53 social joined #gluster
22:53 baoboa joined #gluster
22:53 verdurin joined #gluster
22:53 avati joined #gluster
22:53 sputnik13 joined #gluster
22:53 ws2k3 joined #gluster
22:53 siel joined #gluster
22:53 ndevos joined #gluster
22:53 neoice joined #gluster
22:53 purpleidea joined #gluster
22:53 codex joined #gluster
22:53 SNow joined #gluster
22:53 Ramereth joined #gluster
22:53 mkzero_ joined #gluster
22:53 samsaffron joined #gluster
22:53 morse joined #gluster
22:53 capri joined #gluster
22:53 Andreas-IPO joined #gluster
22:53 zerick joined #gluster
22:53 hagarth joined #gluster
22:53 calum_ joined #gluster
22:53 fubada joined #gluster
22:53 ollybee joined #gluster
22:53 T0aD joined #gluster
22:53 firemanxbr joined #gluster
22:53 [o__o] joined #gluster
22:53 tty00 joined #gluster
22:53 twx joined #gluster
22:53 and` joined #gluster
22:53 lava joined #gluster
22:53 churnd joined #gluster
22:53 RobertLaptop joined #gluster
22:53 gts__ joined #gluster
22:54 ackjewt joined #gluster
22:54 samppah joined #gluster
22:57 VeggieMeat joined #gluster
22:57 troj joined #gluster
22:57 SteveCooling joined #gluster
22:57 radez_g0n3 joined #gluster
22:57 jiffe98 joined #gluster
22:57 TheSov joined #gluster
22:57 huleboer joined #gluster
22:57 l0uis joined #gluster
22:57 lezo_ joined #gluster
22:57 abyss__ joined #gluster
22:57 weykent joined #gluster
22:57 atrius joined #gluster
22:57 cicero joined #gluster
22:57 glusterbot joined #gluster
22:57 crashmag joined #gluster
22:57 fyxim_ joined #gluster
22:57 gmcwhistler joined #gluster
22:57 systemonkey2 joined #gluster
22:57 lanning_ joined #gluster
22:57 samppah joined #gluster
22:57 ackjewt joined #gluster
22:57 gts__ joined #gluster
22:57 RobertLaptop joined #gluster
22:57 churnd joined #gluster
22:57 lava joined #gluster
22:57 and` joined #gluster
22:57 twx joined #gluster
22:57 tty00 joined #gluster
22:57 [o__o] joined #gluster
22:57 firemanxbr joined #gluster
22:57 T0aD joined #gluster
22:57 ollybee joined #gluster
22:57 fubada joined #gluster
22:57 calum_ joined #gluster
22:57 hagarth joined #gluster
22:57 Andreas-IPO joined #gluster
22:57 capri joined #gluster
22:57 morse joined #gluster
22:57 samsaffron joined #gluster
22:57 mkzero_ joined #gluster
22:57 Ramereth joined #gluster
22:57 SNow joined #gluster
22:57 codex joined #gluster
22:57 purpleidea joined #gluster
22:57 neoice joined #gluster
22:57 ndevos joined #gluster
22:57 siel joined #gluster
22:57 ws2k3 joined #gluster
22:57 sputnik13 joined #gluster
22:57 avati joined #gluster
22:57 verdurin joined #gluster
22:57 baoboa joined #gluster
22:57 social joined #gluster
22:57 overclk_ joined #gluster
22:57 muhh_ joined #gluster
22:57 mbukatov joined #gluster
22:57 Philambdo joined #gluster
22:57 harish_ joined #gluster
22:57 the-me joined #gluster
22:57 fleducquede_ joined #gluster
22:57 lmickh joined #gluster
22:57 dockbram joined #gluster
22:57 bit4man joined #gluster
22:57 StarBeast joined #gluster
22:57 ThomasG_ joined #gluster
22:57 eshy joined #gluster
22:57 Pupeno joined #gluster
22:57 stickyboy joined #gluster
22:57 nueces joined #gluster
22:57 penglish joined #gluster
22:57 andreask joined #gluster
22:57 kshlm joined #gluster
22:57 doo joined #gluster
22:57 MacWinner joined #gluster
22:57 y4m4 joined #gluster
22:57 Peter3 joined #gluster
22:57 clutchk joined #gluster
22:57 diegows joined #gluster
22:57 ndk joined #gluster
22:57 tdasilva joined #gluster
22:57 elico joined #gluster
22:57 lyang0 joined #gluster
22:57 marmalodak joined #gluster
22:57 stigchri1tian joined #gluster
22:57 oxidane joined #gluster
22:57 SpComb joined #gluster
22:57 haomaiwang joined #gluster
22:57 DV__ joined #gluster
22:57 richvdh joined #gluster
22:57 eryc joined #gluster
22:57 cyberbootje joined #gluster
22:57 plarsen joined #gluster
22:57 wushudoin joined #gluster
22:57 _Bryan_ joined #gluster
22:57 wjp joined #gluster
22:57 hchiramm_ joined #gluster
22:57 rwheeler joined #gluster
22:57 julim joined #gluster
22:57 XpineX joined #gluster
22:57 Humble joined #gluster
22:57 fsimonce joined #gluster
22:57 balacafalata joined #gluster
22:57 cmtime joined #gluster
22:57 ThatGraemeGuy joined #gluster
22:57 cfeller_ joined #gluster
22:57 torbjorn1_ joined #gluster
22:57 edong23 joined #gluster
22:57 sickness joined #gluster
22:57 tom[] joined #gluster
22:57 dblack joined #gluster
22:57 simulx2 joined #gluster
22:57 edwardm61 joined #gluster
22:57 Gugge joined #gluster
22:57 qdk joined #gluster
22:57 necrogami joined #gluster
22:57 masterzen joined #gluster
22:57 gEEbusT joined #gluster
22:57 sauce joined #gluster
22:57 sman joined #gluster
22:57 sage joined #gluster
22:57 swebb joined #gluster
22:57 xavih joined #gluster
22:57 anotheral joined #gluster
22:57 Intensity joined #gluster
22:57 jiqiren joined #gluster
22:57 atrius` joined #gluster
22:57 tg2 joined #gluster
22:57 n0de_ joined #gluster
22:57 mAd-1 joined #gluster
22:57 Norky joined #gluster
22:57 portante joined #gluster
22:57 johnmark joined #gluster
22:57 asku joined #gluster
22:57 hflai joined #gluster
22:57 coreping joined #gluster
22:57 vincent_vdk joined #gluster
22:57 sspinner joined #gluster
22:57 _jmp__ joined #gluster
22:57 d-fence joined #gluster
22:57 saltsa joined #gluster
22:57 tomased joined #gluster
22:57 jezier joined #gluster
22:57 carrar joined #gluster
22:57 msciciel joined #gluster
22:57 VerboEse joined #gluster
22:57 Slasheri_ joined #gluster
22:57 johnmwilliams__ joined #gluster
22:57 m0zes joined #gluster
22:57 yosafbridge joined #gluster
22:57 Alex joined #gluster
22:57 semiosis joined #gluster
22:57 sadbox joined #gluster
22:57 Nopik joined #gluster
22:57 eightyeight joined #gluster
22:57 JordanHackworth joined #gluster
22:57 pasqd joined #gluster
22:57 osiekhan3 joined #gluster
22:57 nixpanic_ joined #gluster
22:57 NCommander joined #gluster
22:57 _NiC joined #gluster
22:57 Dave2 joined #gluster
22:57 marcoceppi joined #gluster
22:57 foster joined #gluster
22:57 kke joined #gluster
22:57 fraggeln joined #gluster
22:57 partner joined #gluster
22:57 tziOm joined #gluster
22:57 NuxRo joined #gluster
22:57 ultrabizweb joined #gluster
22:57 JonathanD joined #gluster
22:57 Kins joined #gluster
22:57 delhage joined #gluster
22:57 tru_tru joined #gluster
22:57 Bardack joined #gluster
22:57 Ch3LL_ joined #gluster
22:57 bfoster joined #gluster
22:57 mibby joined #gluster
22:57 DanF joined #gluster
22:57 gomikemi1e joined #gluster
22:57 decimoe joined #gluster
22:57 DJClean joined #gluster
22:57 lkoranda joined #gluster
22:57 mshadle joined #gluster
22:57 sijis joined #gluster
22:57 JoeJulian joined #gluster
22:57 sac`away joined #gluster
22:57 silky joined #gluster
22:57 klaas joined #gluster
22:57 jvandewege joined #gluster
22:57 mdavidson joined #gluster
22:57 prasanth|offline joined #gluster
22:57 eclectic joined #gluster
22:57 gehaxelt joined #gluster
22:57 ccha2 joined #gluster
22:57 FooBar joined #gluster
22:57 zerick joined #gluster
22:57 gildub joined #gluster
22:57 samkottler joined #gluster
22:58 cfeller joined #gluster
23:00 jiqiren joined #gluster
23:01 dblack joined #gluster
23:02 Intensity joined #gluster
23:03 Guest92729 joined #gluster
23:03 dblack joined #gluster
23:08 churnd joined #gluster
23:18 andreask joined #gluster
23:18 andreask joined #gluster
23:24 n0de joined #gluster
23:29 dtrainor joined #gluster
23:38 Paul-C joined #gluster
23:42 gEEbusT Are extended attributes supported on gluster FUSE mounts?
