
IRC log for #gluster, 2015-04-27


All times shown according to UTC.

Time Nick Message
00:06 rwheeler joined #gluster
00:32 mlody joined #gluster
00:33 mlody hey, I have 2 dedicated servers in OVH. Would it be ok to create a glusterfs on top of them? The network speed isn't really big, but I also won't have a lot of files (mainly some simple static web content)
00:35 mlody I am asking because I already deployed 2 volumes. Listings are quite slow. And my web panel is very slow on gluster.
00:36 mlody Currently one of my volumes has 6088 files and 80 megs.. and my webpage takes several seconds to load
00:37 virusuy joined #gluster
00:37 virusuy joined #gluster
00:37 mlody time of du on this mounted volume is also very long: real 0m5.899s, user 0m0.034s, sys 0m0.136s
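
A two-server setup like the one mlody describes is typically built as a replica 2 volume. A minimal sketch, with hypothetical hostnames and brick paths:

    # on server1, with glusterd running on both machines
    gluster peer probe server2
    gluster volume create webvol replica 2 server1:/data/brick1 server2:/data/brick1
    gluster volume start webvol
    # the web servers then mount the volume, e.g.
    mount -t glusterfs server1:/webvol /var/www/shared
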
00:38 maZtah joined #gluster
00:38 haomaiwa_ joined #gluster
00:40 prg3 joined #gluster
00:40 gildub joined #gluster
00:42 sadbox joined #gluster
00:44 Jmainguy mlody: are they both in the same datacenter?
00:44 mlody Jmainguy, Yes.
00:44 mlody ping is below 1ms
00:44 Jmainguy nice
00:44 mlody I think the bandwidth might be limited to 100/150mbit
00:45 Jmainguy drive speed is the limiting factor I guess, but yeah, if ping is below 1ms, it should work as expected
00:45 Jmainguy I did the same setup in hetzner germany
00:45 Jmainguy its not blazing fast, but hetzner drives were slow to begin with
00:45 mlody I will try installing varnish
00:45 mlody in front of
00:45 mlody apache
00:45 Jmainguy mlody: sounds solid
00:45 mlody it should help me
00:46 Jmainguy mlody: did you read the article on paper.com? about kim kardashian
00:46 mlody no :D
00:46 csim it is not really about kim kardashian so much as about the infrastructure of the website when photos of her were posted :)
00:46 Jmainguy https://medium.com/message/how-paper-magazines-web-engineers-scaled-kim-kardashians-back-end-sfw-6367f8d37688
00:46 Jmainguy its a pretty good read, its what inspired me to give gluster a shot for my new project
00:47 Jmainguy the guy uses varnish as well
00:47 prg3 joined #gluster
01:02 kenansulayman joined #gluster
01:04 ninkotech__ joined #gluster
01:15 jvandewege_ joined #gluster
01:19 ninkotech__ joined #gluster
01:27 ninkotech__ joined #gluster
01:30 ashiq joined #gluster
01:32 mlody Jmainguy, hmm. I have the apache config on a gluster volume mounted locally as nfs on the server. And apache is taking ~15 seconds to start and stop
01:32 mlody it is hell slow
01:32 mlody is gluster that slow?:o
01:37 mlody and the httpd worker is in D state
01:37 mlody :{
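
Gluster's built-in NFS server speaks NFSv3 only, so a local NFS mount like the one described would look roughly like this (volume name and paths are hypothetical; nolock is sometimes added to sidestep NLM locking hangs, at the cost of no cross-client locks):

    # /etc/fstab entry on the web server
    localhost:/webvol  /var/www/config  nfs  defaults,vers=3,nolock,_netdev  0 0
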
01:40 Pupeno joined #gluster
01:47 ilbot3 joined #gluster
01:47 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:04 cholcombe joined #gluster
02:07 ninkotech__ joined #gluster
02:11 Gill joined #gluster
02:21 daMaestro joined #gluster
02:23 harish joined #gluster
02:25 kdhananjay joined #gluster
02:32 nangthang joined #gluster
02:34 side_control joined #gluster
02:35 Jmainguy mlody: not sure, I am new to it myself
02:35 Jmainguy so far only tried it on hetzner servers, where the drives were slow to begin with
02:36 mlody Jmainguy, oh:)
02:36 Jmainguy so, my performance is slow as well
02:36 Jmainguy but I would like to try it on some server grade ssd's or something before I make my decision on it
02:36 mlody well.
02:36 mlody I dont need that high performance actually
02:37 mlody but it is heeeell slow now :D
02:37 Jmainguy it does make setting up raided NFS easy as heck though
02:37 mlody webpage is loading like 30 seconds
02:37 Jmainguy I set up nfs last year without gluster, and it was a lot harder
02:37 Jmainguy yeah 30 seconds is not ok
02:37 mlody but strange thing is that:
02:38 mlody http://pastebin.com/qiTc7jej
02:38 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
02:39 mlody httpd is stuck on placing this lock
02:40 Jmainguy yeah that is weird
02:48 roost joined #gluster
02:51 side_control joined #gluster
03:03 kshlm joined #gluster
03:13 hchiramm_ joined #gluster
03:20 theron joined #gluster
03:23 theron joined #gluster
03:28 bharata-rao joined #gluster
03:28 Pupeno joined #gluster
03:29 nishanth joined #gluster
03:39 rafi joined #gluster
03:40 theron joined #gluster
03:41 overclk joined #gluster
03:42 itisravi joined #gluster
03:44 kumar joined #gluster
03:45 nbalacha joined #gluster
03:46 kdhananjay1 joined #gluster
03:51 RameshN joined #gluster
03:52 Gill joined #gluster
03:53 Pupeno joined #gluster
03:56 kanagaraj joined #gluster
04:02 atinmu joined #gluster
04:04 sadbox joined #gluster
04:14 shubhendu joined #gluster
04:17 ndarshan joined #gluster
04:22 ashiq joined #gluster
04:24 dusmant joined #gluster
04:25 anrao joined #gluster
04:30 raghug joined #gluster
04:30 jiffin joined #gluster
04:42 jiffin joined #gluster
04:45 hagarth joined #gluster
04:45 ashiq joined #gluster
04:45 sadbox joined #gluster
04:47 dusmant joined #gluster
04:47 Manikandan joined #gluster
04:57 spandit joined #gluster
04:58 lalatenduM joined #gluster
05:05 anoopcs joined #gluster
05:05 karnan joined #gluster
05:06 hgowtham joined #gluster
05:12 pppp joined #gluster
05:18 glusterbot News from newglusterbugs: [Bug 1215515] [RFE] Quota must distinguish between file overwrite and extend. <https://bugzilla.redhat.com/show_bug.cgi?id=1215515>
05:23 deepakcs joined #gluster
05:26 gem joined #gluster
05:27 kotreshhr joined #gluster
05:28 ppai joined #gluster
05:28 sakshi joined #gluster
05:29 Bardack joined #gluster
05:31 RioS2 joined #gluster
05:41 dusmant joined #gluster
05:46 khanku joined #gluster
05:46 gem joined #gluster
05:48 glusterbot News from newglusterbugs: [Bug 1215550] glusterfsd crashed after directory was removed from the mount point, while self-heal and rebalance were running on the volume <https://bugzilla.redhat.com/show_bug.cgi?id=1215550>
05:49 Bardack joined #gluster
05:50 glusterbot News from resolvedglusterbugs: [Bug 1213802] tiering:volume set command fails for tiered volume after restarting glusterd <https://bugzilla.redhat.com/show_bug.cgi?id=1213802>
05:50 glusterbot News from resolvedglusterbugs: [Bug 1213798] tiering: glusterd doesn't update the tiering info from info file <https://bugzilla.redhat.com/show_bug.cgi?id=1213798>
05:52 overclk joined #gluster
05:52 anrao joined #gluster
05:52 Bhaskarakiran joined #gluster
05:54 RioS2 joined #gluster
05:56 marcoceppi joined #gluster
05:57 khanku joined #gluster
06:00 meghanam joined #gluster
06:01 Bardack joined #gluster
06:01 anrao joined #gluster
06:01 anil joined #gluster
06:02 kdhananjay joined #gluster
06:04 mbukatov joined #gluster
06:07 cholcombe joined #gluster
06:09 gem joined #gluster
06:11 delhage joined #gluster
06:11 Bosse joined #gluster
06:15 atalur joined #gluster
06:16 ktosiek joined #gluster
06:23 anoopcs joined #gluster
06:24 lezo joined #gluster
06:26 Philambdo joined #gluster
06:27 jtux joined #gluster
06:30 anoopcs1 joined #gluster
06:31 anoopcs joined #gluster
06:34 kotreshhr1 joined #gluster
06:38 maveric_amitc_ joined #gluster
06:44 anoopcs joined #gluster
06:47 marcoceppi joined #gluster
06:47 shubhendu joined #gluster
06:48 glusterbot News from newglusterbugs: [Bug 1138841] allow the use of the CIDR format with auth.allow <https://bugzilla.redhat.com/show_bug.cgi?id=1138841>
06:48 Bardack joined #gluster
06:49 ndarshan joined #gluster
06:50 nangthang joined #gluster
06:51 liquidat joined #gluster
06:51 haomaiwang joined #gluster
06:52 marcoceppi joined #gluster
06:52 anoopcs joined #gluster
06:52 marcoceppi joined #gluster
06:52 marcoceppi joined #gluster
06:54 haomai___ joined #gluster
06:54 telmich joined #gluster
06:58 dusmant joined #gluster
06:59 haomaiwang joined #gluster
07:00 Bhaskarakiran joined #gluster
07:04 overclk joined #gluster
07:04 marcoceppi joined #gluster
07:08 aravindavk joined #gluster
07:10 jcastill1 joined #gluster
07:15 jcastillo joined #gluster
07:15 fsimonce joined #gluster
07:16 masterzen joined #gluster
07:18 nishanth joined #gluster
07:18 glusterbot News from newglusterbugs: [Bug 1193474] Package libgfapi-python for its consumers <https://bugzilla.redhat.com/show_bug.cgi?id=1193474>
07:20 glusterbot News from resolvedglusterbugs: [Bug 1210934] qcow2 image creation using qemu-img hits segmentation fault <https://bugzilla.redhat.com/show_bug.cgi?id=1210934>
07:23 sadbox joined #gluster
07:27 shubhendu joined #gluster
07:29 ndarshan joined #gluster
07:30 Apeksha joined #gluster
07:34 Slashman joined #gluster
07:37 Slydder joined #gluster
07:38 Slydder morning all
07:39 aravindavk joined #gluster
07:42 Guest65400 joined #gluster
07:44 Manikandan joined #gluster
07:48 glusterbot News from newglusterbugs: [Bug 1215571] Data Tiering: add tiering set options to volume set help (cluster.tier-demote-frequency and cluster.tier-promote-frequency) <https://bugzilla.redhat.com/show_bug.cgi?id=1215571>
07:56 dusmant joined #gluster
08:01 kotreshhr joined #gluster
08:11 Slydder has anyone here heard of any headway in the fsc option for the fuse client? I am desperately looking for a reason not to dump gluster in favor of ceph.
08:12 fsimonce joined #gluster
08:17 kovshenin joined #gluster
08:31 kovsheni_ joined #gluster
08:32 Norky joined #gluster
08:33 jcastill1 joined #gluster
08:34 Pupeno joined #gluster
08:34 Pupeno joined #gluster
08:38 hagarth joined #gluster
08:38 abyss joined #gluster
08:38 jcastillo joined #gluster
08:41 rjoseph joined #gluster
08:47 al joined #gluster
08:49 dusmant joined #gluster
08:50 smohan joined #gluster
08:53 shpank joined #gluster
08:53 shpank hey guys, can anyone help me troubleshoot locking on a 3 node cluster?
08:55 shpank we have a samba cluster consisting of 3 nodes and want to share the ctdb.lock file via glusterfs
08:56 shpank but the ping_pong test tool behaves in a strange way
08:56 shpank and i'm at my wit's end right now
08:56 RaSTar shpank: Using a glusterfs volume to host ctdb lock file should work
08:56 RaSTar shpank: what is the command you used for ping pong test?
08:57 shpank ping_pong -rw lock-test 10
08:57 shpank i only read that the number should be greater than the node count
08:57 shpank lock-test file is of course located on the shared gluster volume
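
For context, ping_pong is run concurrently on every node against the same file on the gluster mount; a sketch of the procedure, with a hypothetical mount point:

    # on each of the 3 nodes, against the same file (10 > node count)
    cd /var/shared
    ping_pong -rw lock-test 10
    # healthy behaviour: locks/sec drops as more nodes join in, while the
    # reported data increment keeps rising; a data increment that stays
    # flat (as shpank sees below) suggests broken lock or data coherency
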
08:58 poornimag joined #gluster
08:59 shpank RaSTar: locks decrease as i start ping_pong on the other nodes, but the data_increment value doesn't increase
09:00 RaSTar shpank: what is the version of gluster that you are using?
09:01 shpank it's 3.5.2-1 from debian's wheezy-backports
09:01 RaSTar yes, that is not the right Behaviour
09:01 chirino joined #gluster
09:01 RaSTar Lets change a configuration for ctdb vol and try the ping pong test again
09:02 RaSTar gluster volume set <CTDB-VOLNAME> storage.batch-fsync-delay-usec 0
09:02 johnnytran joined #gluster
09:03 shpank RaSTar: other than about 10 more locks/sec, nothing changed
09:03 shpank but
09:03 shpank i found another thing
09:03 shpank i mounted the file systems on the nodes like this: localhost:/shared /var/shared glusterfs defaults,_netdev 0      0
09:04 shpank is this right? or shouldn't the nodes mount their gluster shares via localhost?
09:06 RaSTar shpank: the mount parameters are right
09:07 RaSTar Please try one more configuration change; if this doesn't work, we will look at the logs
09:07 shpank okay
09:07 RaSTar gluster volume set <CTDB-VOLNAME> stat-prefetch off
09:08 shpank RaSTar: same behaviour
09:10 RaSTar ok, we don't see this behaviour on master and I do remember us fixing this bug.
09:10 hagarth joined #gluster
09:10 RaSTar may be we missed a backport
09:11 RaSTar Can you please file a bug for this..
09:11 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
09:11 RaSTar and as long as lock test in ping pong works for you, it should be ok to use the volume to host ctdb lock file
09:12 shpank RaSTar: we are running ctdb in this state in production
09:12 shpank but it's far from optimal
09:13 shpank we've had several instances of data loss and some users have troubles with connection resets, especially when using adobe software
09:13 _shaps_ joined #gluster
09:13 shpank all in all it's really strange and we are just trying to eliminate every other possibility
09:13 shpank and the samba guys are pretty insistent on their ctdb ping_pong test running smoothly before letting you ask stupid questions :)
09:14 RaSTar Yes, I agree.
09:14 RaSTar We ran into these things during testing too..
09:14 RaSTar things which have helped largely are
09:14 RaSTar 1. started using ctdb 2.5 and onwards
09:15 RaSTar it has a lot of bug fixes
09:15 RaSTar 2. a few more configuration changes in the gluster vol for ctdb, like changing the ping timeout to 10 secs
09:15 lalatenduM_ joined #gluster
09:15 shpank we already upgraded ctdb and stability has improved by a lot
09:16 shpank but it's still far away from solid
09:16 RaSTar 3. disabling all performance xlators for ctdb volume
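
Points 2 and 3 translate roughly into volume options along these lines (volume name hypothetical; the exact set of performance xlators varies by version):

    gluster volume set ctdbvol network.ping-timeout 10
    # disable the performance translators on the lock volume
    gluster volume set ctdbvol performance.quick-read off
    gluster volume set ctdbvol performance.read-ahead off
    gluster volume set ctdbvol performance.io-cache off
    gluster volume set ctdbvol performance.stat-prefetch off
    gluster volume set ctdbvol performance.write-behind off
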
09:16 _shaps_ left #gluster
09:17 RaSTar shpank: this is a good case for us to improve it..It would be very helpful if you could file a bug with your config details.
09:17 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
09:18 shpank okay i'll try
09:18 shpank thanks a lot for your time!
09:18 RaSTar Mostly it would be configuration tweaks that can get us to a good state
09:18 o5k joined #gluster
09:18 RaSTar Thank you! It is always good to know user experiences.
09:19 glusterbot News from newglusterbugs: [Bug 1215596] "case sensitive = no" is not honored when "preserve case = yes" is present in smb.conf <https://bugzilla.redhat.com/show_bug.cgi?id=1215596>
09:21 fsimonce joined #gluster
09:24 soumya joined #gluster
09:24 telmich joined #gluster
09:37 ctria joined #gluster
09:44 mdavidson joined #gluster
09:44 Slashman joined #gluster
09:46 marcoceppi joined #gluster
09:46 marcoceppi joined #gluster
09:48 ndarshan joined #gluster
09:50 shubhendu joined #gluster
09:52 kotreshhr joined #gluster
09:53 cholcombe joined #gluster
09:56 rafi1 joined #gluster
09:57 soumya joined #gluster
09:58 dusmant joined #gluster
10:00 DV joined #gluster
10:05 aravindavk joined #gluster
10:12 poornimag joined #gluster
10:12 kotreshhr1 joined #gluster
10:16 SOLDIERz joined #gluster
10:19 fsimonce joined #gluster
10:28 ira joined #gluster
10:31 Anjana joined #gluster
10:51 soumya joined #gluster
10:52 meghanam_ joined #gluster
10:53 firemanxbr joined #gluster
11:00 kovshenin joined #gluster
11:02 rafi joined #gluster
11:11 marcoceppi joined #gluster
11:11 ashiq joined #gluster
11:14 marcoceppi joined #gluster
11:14 marcoceppi joined #gluster
11:15 shubhendu joined #gluster
11:18 poornimag joined #gluster
11:22 dusmant joined #gluster
11:23 LebedevRI joined #gluster
11:29 SOLDIERz joined #gluster
11:29 rwheeler joined #gluster
11:33 marcoceppi joined #gluster
11:33 marcoceppi joined #gluster
11:40 jiffin joined #gluster
11:43 SOLDIERz joined #gluster
11:44 shubhendu joined #gluster
11:48 kotreshhr joined #gluster
11:53 nishanth joined #gluster
11:57 nbalacha joined #gluster
11:57 RayTrace_ joined #gluster
11:57 _nixpanic joined #gluster
11:57 _nixpanic joined #gluster
11:58 rjoseph joined #gluster
12:00 [Enrico] joined #gluster
12:00 rafi1 joined #gluster
12:02 Gill_ joined #gluster
12:04 ndarshan joined #gluster
12:05 dusmant joined #gluster
12:07 itisravi_ joined #gluster
12:08 shubhendu joined #gluster
12:08 anoopcs joined #gluster
12:11 shpank RaSTar: https://bugzilla.redhat.com/show_bug.cgi?id=1215664
12:11 glusterbot Bug 1215664: unspecified, unspecified, ---, bugs, NEW , ctdb ping_pong fails on replicated gluster volume
12:11 marcoceppi joined #gluster
12:12 anrao joined #gluster
12:12 plarsen joined #gluster
12:13 haomaiwa_ joined #gluster
12:16 Pupeno joined #gluster
12:16 Pupeno joined #gluster
12:17 jiku joined #gluster
12:18 itisravi joined #gluster
12:19 plarsen joined #gluster
12:19 B21956 joined #gluster
12:19 glusterbot News from newglusterbugs: [Bug 1215664] ctdb ping_pong fails on replicated gluster volume <https://bugzilla.redhat.com/show_bug.cgi?id=1215664>
12:21 RameshN joined #gluster
12:24 harish joined #gluster
12:24 Sjors joined #gluster
12:27 Sjors joined #gluster
12:28 SOLDIERz joined #gluster
12:31 raghug joined #gluster
12:32 rafi joined #gluster
12:35 rafi1 joined #gluster
12:38 B21956 left #gluster
12:39 B21956 joined #gluster
12:41 dblack joined #gluster
12:42 bene2 joined #gluster
12:44 SOLDIERz joined #gluster
12:50 glusterbot News from newglusterbugs: [Bug 1215668] [geo-rep + tiering]: georep fails to create session on tiered volume <https://bugzilla.redhat.com/show_bug.cgi?id=1215668>
12:50 k-ma joined #gluster
12:51 kotreshhr left #gluster
12:56 Anjana joined #gluster
12:57 RaSTar shpank: thanks!
12:57 shpank i hope it helps...
12:58 raghug joined #gluster
12:59 kenansulayman joined #gluster
12:59 jiffin1 joined #gluster
12:59 atalur joined #gluster
13:11 rjoseph joined #gluster
13:20 dusmant joined #gluster
13:21 nishanth joined #gluster
13:22 theron joined #gluster
13:28 bennyturns joined #gluster
13:29 Manikandan joined #gluster
13:29 Manikandan_ joined #gluster
13:30 marcoceppi joined #gluster
13:30 georgeh-LT2 joined #gluster
13:38 B21956 joined #gluster
13:40 hamiller joined #gluster
13:43 anrao joined #gluster
13:44 Prilly joined #gluster
13:53 lalatenduM joined #gluster
13:53 AGTT joined #gluster
13:56 jmarley joined #gluster
13:58 kovshenin joined #gluster
14:02 kovsheni_ joined #gluster
14:02 karnan joined #gluster
14:02 chirino joined #gluster
14:03 wushudoin joined #gluster
14:04 AGTT Hi. I curious if ACLs are supported in Gluster. The backend volumes work; I mounted them with -o acl, as well as the gluster vol, but when I tried to setfacl, it said "Operation not permitted". It's also strange that if I try to " -o remount,acl <gluster vol>", it says "Invalid option remount". Thanks. :)
14:04 o5k joined #gluster
14:05 AGTT (...oops, I meant "I *am* curious")
14:06 AGTT I have also found this, but that didn't change anything: https://bugzilla.redhat.com/show_bug.cgi?id=988943
14:06 glusterbot Bug 988943: urgent, unspecified, ---, bugs, NEW , ACL doesn't work with FUSE mounted GlusterFS
14:08 kovshenin joined #gluster
14:09 AGTT Sorry, I forgot to mention these: the gluster volume is mounted via the native -t glusterfs option, not via NFS; I am using Arch x64; Gluster version is 3.2.6(-1).
14:10 AGTT Sorry, 3.6.2
14:11 AGTT Thanks!
14:11 kovshen__ joined #gluster
14:14 kovshenin joined #gluster
14:16 hagarth joined #gluster
14:30 jackdpeterson joined #gluster
14:32 kshlm joined #gluster
14:32 jmarley joined #gluster
14:39 fsimonce joined #gluster
14:44 dblack joined #gluster
14:46 nishanth joined #gluster
14:48 coredump joined #gluster
14:49 Prilly joined #gluster
14:49 JoeJulian AGTT: https://github.com/gluster/glusterfs/blob/master/doc/admin-guide/en-US/markdown/admin_ACLs.md
14:51 roost joined #gluster
14:52 jobewan joined #gluster
14:54 fsimonce joined #gluster
14:58 jiffin1 y
15:02 AGTT Thanks for replying. Well, after reading the 'Activating Support' part, it seems that that is what I did: I mounted the backend with -o acl (with fstab), and the gluster volume with -o acl.
15:02 AGTT gluster vol with fstab too.
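
The setup AGTT describes corresponds roughly to fstab entries like these (device, volume and paths are hypothetical; ext4 shown since it needs an explicit acl mount option):

    # brick backend mounted with ACL support
    /dev/sdb1         /data/brick1  ext4       defaults,acl           0 0
    # gluster volume mounted with acl on the client
    server1:/datavol  /mnt/data     glusterfs  defaults,acl,_netdev   0 0

After which POSIX ACLs should be settable through the mount, e.g. setfacl -m u:http:rwx /mnt/data/shared (user name hypothetical).
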
15:03 JoeJulian I have to admit, I hadn't read it myself.
15:03 AGTT that's ok
15:03 JoeJulian I've seen lots of people using ACL, and haven't heard of anybody having an issue getting it to work.
15:03 JoeJulian check the client and/or brick logs?
15:04 JoeJulian Oh, and FUSE doesn't support remount.
15:04 AGTT also, I think that this example is a mistake: "# mount -t glusterfs -o acl 198.192.198.234:glustervolume /mnt/gluster" -- shouldn't it be "198.192.198.234:/glustervolume", with a slash?
15:04 JoeJulian Not necessary, no.
15:04 AGTT ok -- didn't know that
15:05 AGTT about fuse
15:06 AGTT well, I do get an error if I try mounting without a slash: ERROR: Server name/volume name unspecified cannot proceed further..
15:07 kovsheni_ joined #gluster
15:11 JoeJulian in 3.6.2?
15:11 AGTT yes
15:14 bennyturns joined #gluster
15:14 AGTT I have many lines like this (related to acls, which grep -R mostly found): /var/log/glusterfs/mnt-data.log:85611:[2015-02-18 03:25:43.381857] I [dict.c:370:dict_get] (--> /usr/lib/libglusterfs.so.0(_gf_log_callingfn+0x147)[0x7f01ac97b357] (--> /usr/lib/libglusterfs.so.0(dict_get+0x89)[0x7f01ac974919] (--> /usr/lib/glusterfs//xlator/system/posix-acl.so(posix_acl_lookup_cbk+0x232)[0x7f01a6303ef2] (--> /usr/lib/glusterfs//xlator/debug/io-stats
15:14 glusterbot AGTT: ('s karma is now -66
15:14 glusterbot AGTT: ('s karma is now -67
15:14 glusterbot AGTT: ('s karma is now -68
15:14 glusterbot AGTT: ('s karma is now -69
15:15 AGTT sorry if the line is too long
15:15 AGTT and "/var/log/glusterfs/mnt-data.log:149:[2015-02-17 16:21:10.729243] I [graph.c:269:gf_add_cmdline_options] 0-data-md-cache: adding option 'cache-posix-acl' for volume 'data-md-cache' with value 'true'"
15:16 kbyrne joined #gluster
15:17 JoeJulian makes no sense. The sed parse that triggers the "Server name/volume name unspecified" error is part of the mount.glusterfs bash script (/sbin/mount.glusterfs). You can see the parsing at line 585. If there's a slash, volume_id will have a leading slash. If it does not, it will not. sed doesn't care.
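
In other words, according to the parsing JoeJulian describes, both of these forms should mount the same volume (address taken from the doc example above):

    mount -t glusterfs -o acl 198.192.198.234:glustervolume  /mnt/gluster
    mount -t glusterfs -o acl 198.192.198.234:/glustervolume /mnt/gluster
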
15:19 fsimonce joined #gluster
15:19 JoeJulian you're looking at logs from February?
15:20 ckotil joined #gluster
15:20 AGTT oops, didn't look at that... sorry
15:20 JoeJulian I would recommend truncating the log (or mounting in a new mountpoint just to test), mount the volume, create the error, upload that log to fpaste.org and share the link generated.
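
A minimal version of that repro loop (paths hypothetical; the client log file name mirrors the mount point, and the last step assumes the fpaste CLI is installed):

    > /var/log/glusterfs/mnt-test.log             # truncate, or just use a fresh mountpoint
    mkdir -p /mnt/test
    mount -t glusterfs -o acl server1:/datavol /mnt/test
    setfacl -m u:nobody:r /mnt/test/somefile      # reproduce the error
    fpaste /var/log/glusterfs/mnt-test.log        # share the generated link
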
15:21 AGTT ok
15:28 pppp joined #gluster
15:30 AGTT ok, it seems to work properly on the test mountpoint
15:31 AGTT fstab had the acl option there, so wouldn't that be enough?
15:42 lalatenduM joined #gluster
15:58 Pupeno joined #gluster
16:11 soumya joined #gluster
16:13 kdhananjay joined #gluster
16:22 fsimonce joined #gluster
16:30 Manikandan joined #gluster
16:35 Manikandan joined #gluster
16:37 soumya joined #gluster
16:39 rafi joined #gluster
16:42 cholcombe joined #gluster
16:47 fsimonce joined #gluster
16:49 Gill joined #gluster
16:53 deepakcs joined #gluster
16:55 ekuric joined #gluster
17:05 poornimag joined #gluster
17:07 fsimonce joined #gluster
17:16 fsimonce joined #gluster
17:17 kovshenin joined #gluster
17:20 Rapture joined #gluster
17:27 kovsheni_ joined #gluster
17:28 wushudoin joined #gluster
17:43 kovshenin joined #gluster
17:46 kovsheni_ joined #gluster
17:49 kovshenin joined #gluster
17:51 glusterbot News from newglusterbugs: [Bug 1215787] [HC] qcow2 image creation using qemu-img hits segmentation fault <https://bugzilla.redhat.com/show_bug.cgi?id=1215787>
17:57 fsimonce joined #gluster
18:03 shaunm_ joined #gluster
18:10 Philambdo1 joined #gluster
18:13 Prilly joined #gluster
18:17 Prilly joined #gluster
18:25 fsimonce joined #gluster
18:39 lalatenduM joined #gluster
18:48 jiku joined #gluster
19:04 chirino joined #gluster
19:05 atalur joined #gluster
19:07 ttkg joined #gluster
19:11 jiku joined #gluster
19:14 Pupeno_ joined #gluster
19:22 Pupeno joined #gluster
19:32 ktosiek joined #gluster
19:36 o5k_ joined #gluster
19:43 m0zes joined #gluster
19:46 shaunm_ joined #gluster
19:59 fsimonce joined #gluster
20:01 kovsheni_ joined #gluster
20:02 Pupeno_ joined #gluster
20:56 badone__ joined #gluster
20:56 gnudna joined #gluster
21:03 jbrooks joined #gluster
21:07 bene2 joined #gluster
21:08 Pupeno joined #gluster
21:15 RayTrace_ joined #gluster
21:18 halfinhalfout joined #gluster
21:24 fsimonce` joined #gluster
21:25 fsimonce joined #gluster
21:31 foster joined #gluster
21:33 Guest93135 joined #gluster
21:39 o5k joined #gluster
21:43 gnudna left #gluster
22:14 fsimonce joined #gluster
22:25 Pupeno_ joined #gluster
22:26 gildub joined #gluster
22:33 fsimonce joined #gluster
23:02 wkf joined #gluster
23:14 Pupeno joined #gluster
23:17 rotbeard joined #gluster
23:49 Rapture does open source gluster have any sort of roadmap available for the foreseeable future?
23:53 chirino joined #gluster
23:53 AGTT joined #gluster
23:56 JoeJulian Rapture: There are planning pages here: http://www.gluster.org/community/documentation/index.php/Main_Page
23:58 Rapture Thanks @JoeJulian: I love glusterFS!
23:58 JoeJulian :D
