
IRC log for #gluster, 2014-05-23


All times shown according to UTC.

Time Nick Message
00:02 chirino joined #gluster
00:09 akay has anyone come across the problem of a rebalance causing strange ------T permissions and turning folders into files?
00:16 Ark joined #gluster
00:23 XpineX_ joined #gluster
00:25 theron joined #gluster
00:30 DV joined #gluster
00:46 gdubreui joined #gluster
00:49 yinyin_ joined #gluster
00:56 XpineX__ joined #gluster
00:56 diegows joined #gluster
01:02 chirino_m joined #gluster
01:17 primechuck joined #gluster
01:24 XpineX_ joined #gluster
01:27 XpineX__ joined #gluster
01:28 mkzero joined #gluster
01:31 gtobon joined #gluster
01:33 chirino joined #gluster
01:35 XpineX_ joined #gluster
01:43 XpineX__ joined #gluster
01:56 chirino_m joined #gluster
01:59 akay I swear I must be in a timezone where I'm asking questions when everyone is asleep :)
02:00 XpineX__ joined #gluster
02:02 bala joined #gluster
02:04 verdurin joined #gluster
02:04 rotbeard joined #gluster
02:09 XpineX_ joined #gluster
02:22 gmcwhistler joined #gluster
02:32 haomaiwang joined #gluster
02:33 harish_ joined #gluster
02:35 chirino joined #gluster
02:36 ceiphas_ joined #gluster
02:41 m0zes joined #gluster
02:45 theron_ joined #gluster
02:47 theron__ joined #gluster
02:57 bala joined #gluster
03:02 sjm joined #gluster
03:04 chirino_m joined #gluster
03:05 theron joined #gluster
03:06 saurabh joined #gluster
03:10 theron_ joined #gluster
03:13 bharata-rao joined #gluster
03:18 theron joined #gluster
03:21 rastar joined #gluster
03:27 edong23 joined #gluster
03:33 spajus joined #gluster
03:34 haomaiwang joined #gluster
03:42 RameshN joined #gluster
03:44 spajus joined #gluster
03:45 nishanth joined #gluster
03:46 JoeJulian akay: I'm sorry you chose the wrong timezone in which to live and work. ;) ... which version is that?
03:47 JoeJulian kmai007: Yes, any change you make to the glusterd state files (/var/lib/glusterd) must be synchronized to all the servers in the pool.
03:50 JoeJulian rps: Not really. I prefer the simplicity of not having stripes of data strewn across multiple disks. For worst-case disaster recovery I prefer to have a drive I can ship to a clean room and have whole files extracted. Some feel that there's a cost benefit to using raid under distribute without replica redundancy. You'll have to determine which suits your needs and meets your SLA/OLA if applicable.
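
As a rough illustration of the point above about synchronizing /var/lib/glusterd, a minimal sketch of one manual approach, assuming hypothetical peer names and that a brief glusterd stop is acceptable (brick and client processes keep serving data while glusterd is down). This is an assumption, not a documented procedure; glusterd.info is left alone because it holds each server's unique UUID.

    # on the server where the state files were edited; gfs2 is a hypothetical peer
    ssh gfs2 'service glusterd stop'
    rsync -av /var/lib/glusterd/vols/ gfs2:/var/lib/glusterd/vols/   # volume definitions only
    ssh gfs2 'service glusterd start'
    # alternatively, "gluster volume sync <hostname> all" can pull volume info from a named peer
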
03:51 kshlm joined #gluster
03:52 itisravi joined #gluster
03:52 mmorsi joined #gluster
03:53 bala joined #gluster
03:59 MugginsM joined #gluster
04:05 chirino joined #gluster
04:10 spandit joined #gluster
04:10 ramteid joined #gluster
04:16 shubhendu joined #gluster
04:17 ppai joined #gluster
04:23 rps joined #gluster
04:27 ndarshan joined #gluster
04:29 Ark joined #gluster
04:35 hchiramm_ joined #gluster
04:38 meghanam joined #gluster
04:40 kdhananjay joined #gluster
04:44 dusmantkp_ joined #gluster
04:44 kanagaraj joined #gluster
04:44 bala joined #gluster
04:57 vpshastry joined #gluster
05:00 [o__o] joined #gluster
05:03 psharma joined #gluster
05:05 ravindran1 joined #gluster
05:06 hchiramm_ joined #gluster
05:10 ppai joined #gluster
05:12 yinyin_ joined #gluster
05:19 prasanthp joined #gluster
05:20 aravindavk joined #gluster
05:22 Humble @krishnan_p++ thanks
05:24 kumar joined #gluster
05:24 RameshN joined #gluster
05:31 sahina joined #gluster
05:34 ravindran1 left #gluster
05:39 yinyin_ joined #gluster
05:41 stickyboy joined #gluster
05:41 stickyboy joined #gluster
05:44 hagarth joined #gluster
05:46 raghu joined #gluster
05:52 dusmantkp_ joined #gluster
05:53 dusmant joined #gluster
05:57 lalatenduM joined #gluster
06:03 an joined #gluster
06:14 rtalur_ joined #gluster
06:15 rgustafs joined #gluster
06:16 bala joined #gluster
06:16 kdhananjay joined #gluster
06:20 davinder4 joined #gluster
06:22 spajus joined #gluster
06:22 hagarth joined #gluster
06:23 dusmant joined #gluster
06:23 nshaikh joined #gluster
06:24 davinder5 joined #gluster
06:32 rahulcs joined #gluster
06:38 davinder5 joined #gluster
06:42 ricky-ticky joined #gluster
06:50 kdhananjay joined #gluster
06:51 davinder5 joined #gluster
06:51 vimal joined #gluster
06:55 harish joined #gluster
06:59 ramteid joined #gluster
07:01 meghanam joined #gluster
07:05 ctria joined #gluster
07:08 keytab joined #gluster
07:09 Pupeno joined #gluster
07:11 lalatenduM joined #gluster
07:11 karimb joined #gluster
07:13 shubhendu_ joined #gluster
07:14 Philambdo joined #gluster
07:16 rastar joined #gluster
07:16 _Bryan_ joined #gluster
07:30 fishdaemon joined #gluster
07:32 davinder6 joined #gluster
07:35 ktosiek joined #gluster
07:35 ProT-0-TypE joined #gluster
07:38 fsimonce joined #gluster
07:42 haomaiwa_ joined #gluster
07:42 davinder6 joined #gluster
07:50 glusterbot New news from newglusterbugs: [Bug 1091898] [barrier] "cp -a" operation hangs on NFS mount, while barrier is enabled <https://bugzilla.redhat.com/show_bug.cgi?id=1091898>
07:50 rtalur_ joined #gluster
08:12 andreask joined #gluster
08:13 liquidat joined #gluster
08:13 rgustafs_ joined #gluster
08:14 DV joined #gluster
08:17 meghanam joined #gluster
08:25 pasqd joined #gluster
08:25 pasqd hi everyone
08:25 pasqd i'm new;-)
08:35 morfair joined #gluster
08:38 pasqd joined #gluster
08:39 Thilam|Work joined #gluster
08:41 hagarth hi pasqd, welcome!
08:41 Thilam joined #gluster
08:41 Thilam hi there
08:42 kumar joined #gluster
08:44 an joined #gluster
08:46 morfair Hi all! Sorry for my English, I'm from Russia. Tell me please, what is the difference between Replicated and Distributed Replicated volumes? I want high availability (failover) and high read performance.
08:46 spajus joined #gluster
08:47 Thilam replicated = files are written in several copies across bricks
08:48 Thilam it ensures reliability of the cluster
08:48 Thilam if one brick fails, for example
08:48 Thilam distributed = files are stored only once
08:50 morfair i see https://github.com/gluster/glusterfs/blob/master/doc/admin-guide/en-US/markdown/admin_setting_volumes.md
08:50 glusterbot Title: glusterfs/doc/admin-guide/en-US/markdown/admin_setting_volumes.md at master · gluster/glusterfs · GitHub (at github.com)
08:50 morfair Creating Distributed Replicated Volumes
08:50 morfair =)
08:51 Thilam look at this
08:51 Thilam https://github.com/gluster/glusterfs/blob/master/doc/admin-guide/en-US/markdown/admin_setting_volumes.md
08:51 glusterbot Title: glusterfs/doc/admin-guide/en-US/markdown/admin_setting_volumes.md at master · gluster/glusterfs · GitHub (at github.com)
08:51 Thilam you can do both
08:51 morfair This manual has both "Creating Replicated Volumes" and "Creating Distributed Replicated Volumes". I do not understand the difference
08:51 Thilam distributed just means volume is spread over nodes
08:52 ngoswami joined #gluster
08:52 Slashman joined #gluster
08:53 Thilam I think distributed means over several servers
08:53 Thilam servers <> bricks
08:53 morfair But the Distributed Replicated image shows two nodes
08:54 morfair Every node has two bricks
08:54 morfair https://raw.githubusercontent.com/gluster/glusterfs/master/doc/admin-guide/en-US/images/Distributed_Replicated_Volume.png
08:55 Thilam look at the schema
08:56 olisch joined #gluster
08:56 morfair both https://raw.githubusercontent.com/gluster/glusterfs/master/doc/admin-guide/en-US/images/Replicated_Volume.png and https://raw.githubusercontent.com/gluster/glusterfs/master/doc/admin-guide/en-US/images/Distributed_Replicated_Volume.png have two nodes (servers).
08:57 morfair Which is better?
08:57 morfair What's the difference?
08:57 olisch the only difference is, that you split your data between more servers
08:58 olisch if you need more servers for more bandwidth and more storage space, you choose distributed replicated
08:58 olisch otherwise you go with replicated
08:58 olisch you can expand your replicated anytime to distributed replicated if needed
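
To make the comparison concrete, a minimal sketch of olisch's two layouts using the gluster CLI; the server names, volume name and brick paths here are hypothetical, not taken from the discussion.

    # plain replicated: one replica pair across two servers
    gluster volume create homes replica 2 server1:/export/brick1 server2:/export/brick1
    gluster volume start homes

    # expand later to distributed-replicated by adding bricks in multiples of the replica count
    gluster volume add-brick homes server3:/export/brick1 server4:/export/brick1
    gluster volume rebalance homes start
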
08:59 morfair olisch, thanks
09:00 Thilam in the schema this is not clear; it seems the bricks are scaled, but not the servers
09:00 Thilam I now understand morfair's question :p
09:00 morfair So, at this moment I have two servers. I want reliability and high read performance. In the future I want to scale while keeping reliability and high read performance. What should I do now?
09:01 morfair Replicated, or Distributed Replicated with two bricks on one node?
09:02 morfair Does replicate (not distribute) increase read performance?
09:03 olisch no, it wont
09:03 olisch but distributed wont either, if you are just reading one file
09:03 andreask1 joined #gluster
09:03 olisch if you have have multiple clients accessing your volume, then distributed increases performance
09:04 olisch because the network load is distributed among your servers
09:04 morfair does replicate increase performance when different users read random files?
09:05 pasqd i was just wondering, if two ssd bricks can replicate to two sata bricks over 1gb, any ideas?
09:06 olisch morfair: its probably the same in a 2 server setup, each file can be accessed from each server
09:06 olisch pasqd: sure, thats no problem
09:08 pasqd olisch: i can see some potential problems, ssd is much faster and can take much more writing than sata
09:08 pasqd so
09:08 morfair olisch, so if I have four files (two files on each node), and two users read two files from different servers, will the files be read from different servers or not?
09:09 olisch pasqd: thats true, but i guess your network will be the bottleneck, or you'll have to go for 10gbit
09:10 olisch 120 MB/s should be no problem for a sata hdd as well
09:11 olisch morfair: afaik it will be kind of random, from where the client reads the file
09:11 morfair olisch, thanks u
09:11 olisch maybe both clients are reading from same source, maybe each is reading from one server
09:11 olisch its kind of round robin
09:14 morfair rr - also good..
09:15 pasqd olisch: thx
09:16 Thilam on my side I ran a test with directory quota and it seems that once a file is created, it can grow larger than the quota limit
09:16 morfair if i set it up like https://raw.githubusercontent.com/gluster/glusterfs/master/doc/admin-guide/en-US/images/Distributed_Replicated_Volume.png - two servers, two bricks per server - will reliability and read performance be good?
09:18 morfair So: if I do it like this image https://raw.githubusercontent.com/gluster/glusterfs/master/doc/admin-guide/en-US/images/Distributed_Replicated_Volume.png - two servers, two bricks per server - will reliability and read performance be good?
09:21 olisch from my experience, distributed replicated with just two servers makes no sense; the graphic seems a little bit confusing to me
09:21 olisch you would need at least 4 servers for a real distributed replicated, otherwise its just the same as a replicated
09:22 Thilam why four ?
09:23 Thilam and not three ?
09:23 olisch you need a multiple of the replica count
09:23 olisch if you have a replica of 2, you need 2, 4 or 6 ....
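
As an illustration of the replica-count rule just mentioned, a sketch (hypothetical names) of the two-server, four-brick layout from the diagram. Brick order matters: consecutive bricks form a replica set, so each pair should span both servers.

    # 2x2 distributed-replicated: pairs are (server1:/export/b1, server2:/export/b1)
    # and (server1:/export/b2, server2:/export/b2)
    gluster volume create dvol replica 2 \
        server1:/export/b1 server2:/export/b1 \
        server1:/export/b2 server2:/export/b2
    gluster volume start dvol
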
09:24 Thilam k
09:25 Thilam olisch, do you have an idea on my previous question?
09:25 Thilam about quota
09:25 olisch i dont have experience with quota at current glusterfs version
09:25 Thilam I create a folder, assign a quota on it (10MB), then run a dd in this folder with no size limit
09:25 olisch i used quota last time with glusterfs 3.2.6
09:25 Thilam and my dd never crashes
09:25 olisch there i had the same problem
09:25 Thilam k
09:26 olisch i worked around it by using an xfs project quota on the data directory for the brick
09:29 Thilam ok, but that would not be consistent if the folder content is spread over several bricks
09:29 morfair olisch, so is the picture in the official documentation wrong? =)
09:31 olisch its not really wrong, but it makes no sense to use a 2 node setup as distributed replicated
09:33 olisch you could wait for some gluster devs/experts; perhaps they have another opinion and know more background that could make a 2 node distributed/replicated setup make sense
09:33 olisch but i dont see it
09:34 olisch thilam: yes, thats a problem
09:34 olisch in my case its been just a 2 node replicated cluster, so its been no problem
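
For reference, Thilam's directory-quota test and olisch's xfs project-quota workaround would look roughly like this; the volume name, paths and project id are hypothetical, and the brick filesystem must be mounted with the prjquota option for the xfs part.

    # gluster directory quota (hypothetical volume "tank")
    gluster volume quota tank enable
    gluster volume quota tank limit-usage /projects/test 10MB
    gluster volume quota tank list

    # xfs project quota directly on a brick's data directory (run on the brick server)
    echo "42:/export/brick1/projects/test" >> /etc/projects
    echo "test:42" >> /etc/projid
    xfs_quota -x -c 'project -s test' /export/brick1
    xfs_quota -x -c 'limit -p bhard=10m test' /export/brick1
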
09:47 morfair olisch, thanks again
09:50 glusterbot New news from newglusterbugs: [Bug 1075611] [FEAT] log: enhance gluster log format with message ID and standardize errno reporting <https://bugzilla.redhat.com/show_bug.cgi?id=1075611>
09:51 coredumb i have these test servers running a test volume with like 2G of test data in it
09:52 coredumb 4 bricks in D+R on 2 nodes
09:52 coredumb and it seems as if glusterfs is leaking and taking up all the memory and swap
09:52 coredumb it's on glusterfs 3.5
09:53 coredumb 12.5g 6.9g 1824 S  0.0 90.2   1:21.90 glusterfs
09:53 coredumb on a 8G machine
09:55 coredumb mmmh seems that it didn't like the bricks on ZFS
09:57 ira joined #gluster
09:57 coredumb ok don't mind me then :D
09:58 olisch coredump: yes, zfs can be pretty heavy on ram
09:58 ira joined #gluster
09:58 an joined #gluster
09:58 olisch do you use deduplication?
09:58 olisch furthermore caches of zfs might be a reason
09:59 olisch did you disable nfs for the gluster volume? that might save some memory
10:00 olisch ooh, i see, its the glusterfs process using all the ram … thats strange
10:00 olisch sorry, no idea
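
Two knobs commonly suggested for the memory situation above, as a hedged sketch with a hypothetical volume name; whether either helps depends on what is actually holding the RAM (the glusterfs process versus the ZFS ARC).

    # disable gluster's built-in NFS server for the volume if it is not needed
    gluster volume set testvol nfs.disable on

    # cap the ZFS ARC (example: 2 GiB); takes effect after the zfs module is reloaded or at reboot
    echo "options zfs zfs_arc_max=2147483648" >> /etc/modprobe.d/zfs.conf
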
10:02 davinder6 joined #gluster
10:08 ppai joined #gluster
10:17 davinder6 joined #gluster
10:22 qdk_ joined #gluster
10:45 rahulcs joined #gluster
10:47 ngoswami joined #gluster
10:52 meghanam joined #gluster
11:14 glusterbot New news from resolvedglusterbugs: [Bug 1086764] Add documentation for the Feature: Duplicate Request Cache (DRC) <https://bugzilla.redhat.com/show_bug.cgi?id=1086764>
11:27 rahulcs joined #gluster
11:29 rahulcs_ joined #gluster
11:37 ProT-0-TypE joined #gluster
11:37 rahulcs joined #gluster
11:41 hagarth joined #gluster
11:49 haomaiwa_ joined #gluster
11:58 Ark joined #gluster
12:02 rahulcs joined #gluster
12:06 itisravi_ joined #gluster
12:07 edward1 joined #gluster
12:11 davinder6 joined #gluster
12:11 theron joined #gluster
12:12 pdrakeweb joined #gluster
12:14 chirino_m joined #gluster
12:15 siel joined #gluster
12:17 rahulcs joined #gluster
12:20 mjsmith2 joined #gluster
12:42 B21956 joined #gluster
12:48 sahina joined #gluster
12:49 theron joined #gluster
12:54 bala joined #gluster
12:56 haomaiwa_ joined #gluster
12:57 coredump joined #gluster
13:04 coredump joined #gluster
13:16 chirino joined #gluster
13:17 hagarth joined #gluster
13:18 sjm joined #gluster
13:18 jcsp1 left #gluster
13:28 mjsmith2 joined #gluster
13:29 tdasilva joined #gluster
13:33 calum_ joined #gluster
13:47 sjm left #gluster
13:50 davinder6 joined #gluster
13:50 coreping joined #gluster
13:58 bennyturns joined #gluster
13:59 kkeithley joined #gluster
14:06 wushudoin joined #gluster
14:09 haomaiwang joined #gluster
14:25 gmcwhistler joined #gluster
14:31 LoudNoises joined #gluster
14:31 sprachgenerator joined #gluster
14:46 theron joined #gluster
14:49 akrherz joined #gluster
14:50 Pavid7 joined #gluster
14:53 sroy_ joined #gluster
14:53 lmickh joined #gluster
14:57 akrherz hi, I mount a gluster volume to web farm nodes with fuse, I can't seem to get apache to be able to write to a folder on this mount, is there a trick involved?  I always get 'read-only filesystem'
15:00 theron_ joined #gluster
15:09 meghanam joined #gluster
15:11 john3213 joined #gluster
15:12 jag3773 joined #gluster
15:13 vimal joined #gluster
15:13 jobewan joined #gluster
15:15 glusterbot New news from resolvedglusterbugs: [Bug 1015990] Implementation of command to get the count of entries to be healed for each brick <https://bugzilla.redhat.com/show_bug.cgi?id=1015990>
15:16 john3213 left #gluster
15:18 daMaestro joined #gluster
15:19 akrherz well, I mount with NFS and it works, perhaps I just go that route
15:20 semiosis akrherz: if you dont use mount option ro then the fs should be read-write (by default)
15:21 semiosis akrherz: then it's up to your directory & file permissions
15:21 semiosis akrherz: maybe selinux can get in the way, have you disabled that?
15:21 semiosis there's really no trick
15:21 akrherz semiosis: thanks for the response, I can write to the fuse mount as other users, the permissions are ok
15:21 semiosis maybe selinux is blocking apache
15:21 semiosis just guessing
15:22 akrherz semiosis: selinux is off, using gluster 3.5 and RHEL6.5, shrug, NFS is working, will just use it
15:22 semiosis ok
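
A few generic checks for the "read-only filesystem" symptom discussed above, assuming a hypothetical mount point /var/www/shared and an apache user; none of this is specific to the reporter's setup.

    # confirm the fuse mount is actually mounted read-write
    grep gluster /proc/mounts            # look for "rw" in the option list

    # try the write as the user apache runs as
    sudo -u apache touch /var/www/shared/.write-test

    # if SELinux were enforcing, the httpd_use_fusefs boolean would be the usual suspect
    getenforce
    setsebool -P httpd_use_fusefs on
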
15:25 ctria joined #gluster
15:28 haomaiwang joined #gluster
15:34 tdasilva left #gluster
15:40 diegows joined #gluster
15:46 sjm joined #gluster
15:47 chirino_m joined #gluster
15:58 tdasilva joined #gluster
16:00 jbd1 joined #gluster
16:10 tg2 joined #gluster
16:16 sijis silly question..
16:17 sijis all updates to the brick must be done on the client side.. so if the file is gfs11:/opt/brick1/files.txt ... i should update it through the client mount of server1:/brick1 and not directly on the gfs11 server, correct?
16:18 jbd1 sijis: any updates to files should be done from a client mount
16:19 sijis jbd1: that's likely to cause some weird things, correct? some folks are seeing some split_brain messages
16:19 sijis and in talking they are updating on the gfs11 directly.. not on client
16:20 sjm left #gluster
16:21 theron joined #gluster
16:24 Mo__ joined #gluster
16:30 Slashman hello, I'm looking for the option to bind all glusterfs services to a single IP and/or interface, I cannot find the option in the documentation, does someone know if it exists?
16:30 ndevos I dont think it exists, or at least I've never seen it, Slashman
16:30 bala joined #gluster
16:31 Slashman ndevos: too bad, thanks for the confirmation
16:32 sprachgenerator joined #gluster
16:39 marcoceppi joined #gluster
16:39 marcoceppi joined #gluster
16:44 akrherz left #gluster
16:46 SFLimey joined #gluster
16:48 semiosis johnmark: ping
16:48 glusterbot semiosis: Please don't naked ping. http://blogs.gnome.org/mark​mc/2014/02/20/naked-pings/
16:49 gmcwhist_ joined #gluster
16:52 zaitcev joined #gluster
17:04 mjsmith2 joined #gluster
17:05 jag3773 joined #gluster
17:20 mjsmith2 joined #gluster
17:20 gmcwhistler joined #gluster
17:21 gmcwhist_ joined #gluster
17:24 jcsp1 joined #gluster
17:28 sijis i'm trying to find the documentation that says to only use the client mount to update/remove/delete files
17:29 sijis not that i don't believe it. i just need to pass it on to my team
17:30 milu Any experience with gluster using iscsi tgtd with the gfapi backend?
17:32 theron joined #gluster
17:37 johnmark semiosis: /me runs
17:38 prasanthp joined #gluster
17:40 rotbeard joined #gluster
17:45 milu how did you do it?
17:45 milu by creating an lvm device?
17:45 milu or over a file?
18:02 ctria joined #gluster
18:15 edward1 joined #gluster
18:18 ktosiek joined #gluster
18:21 systemonkey2 joined #gluster
18:25 jbd1 sijis: http://gluster.org/community/documentation/index.php/GlusterFS_Technical_FAQ
18:25 glusterbot Title: GlusterFS Technical FAQ - GlusterDocumentation (at gluster.org)
18:26 jbd1 sijis: under "Can I directory access the data on the underlying storage volumes?" you see the note: "Note that this is not tested as part of gluster's release cycle and not recommended for production use. "
18:26 jbd1 s/directory/directly/
18:26 glusterbot What jbd1 meant to say was: sijis: under "Can I directly access the data on the underlying storage volumes?" you see the note: "Note that this is not tested as part of gluster's release cycle and not recommended for production use. "
18:27 jbd1 sijis: so there you have it: directly accessing the data on the volumes is possible, but not recommended for production use.
18:37 tdasilva joined #gluster
18:44 sjm joined #gluster
18:44 sijis jbd1: coolio. fair enough. i just need to pass it on to the team
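
To restate jbd1's point as commands: all reads and writes should go through a glusterfs client mount, never through the brick directory on the server. The volume name below is hypothetical; the brick path is the one from sijis's example.

    # wrong: editing a file directly inside the brick on the server
    # (bypasses gluster and its xattrs, and invites split-brain)
    #   vi /opt/brick1/files.txt

    # right: mount the volume as a client (even on the storage server itself) and edit it there
    mount -t glusterfs server1:/myvol /mnt/myvol
    vi /mnt/myvol/files.txt
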
19:06 mshadle joined #gluster
19:11 plarsen joined #gluster
19:18 nueces joined #gluster
19:21 cvdyoung left #gluster
19:27 Pupeno joined #gluster
20:02 tdasilva left #gluster
20:03 ira joined #gluster
20:10 SFLimey_ joined #gluster
20:16 akay joined #gluster
20:24 chirino joined #gluster
20:33 impl joined #gluster
20:38 kron4eg joined #gluster
20:44 kron4eg left #gluster
20:45 jag3773 joined #gluster
20:48 zwevans joined #gluster
20:50 bennyturns joined #gluster
20:58 an joined #gluster
20:59 AaronGr left #gluster
21:13 sputnik13 joined #gluster
21:23 B21956 joined #gluster
21:25 ghenry joined #gluster
21:27 bennyturns joined #gluster
21:28 badone joined #gluster
21:40 kmai007 joined #gluster
21:44 hagarth joined #gluster
21:51 impl hi guys! i'm trying to get a sort of simple glusterfs setup running on AWS, but running into some weird issues... especially with nfs client. current setup is two availability zones (same region), two m1.small replica instances with non-IOPS-provisioned EBS storage running xfs as the bricks, both are debian with glusterfs 3.5.0. two things i've noticed are that stat()s seem to be really slow when using the
21:51 impl FUSE client, and when using nfs it will intermittently hang (in the kernel i think, since strace shows absolutely no activity) for about a full minute before ANY command will even start to run (ls, cd, umount, etc.)... anyone seen anything like this before? am i right in thinking i basically have to use provisioned IOPS EBS volumes?
21:54 JoeJulian If you look at the reason for the lag when doing lookups over fuse, it makes some sense. NFS uses FSCache in the kernel. When that cache expires it has to refresh it. Overall, nfs is less performant than fuse. If you want files cached for web serving, there are much more efficient tools than FSCache.
21:54 chirino_m joined #gluster
21:55 impl i'm actually trying to do a distributed home directory (afs-style)
21:55 impl so the stat()s actually come into play when doing something like a git status
21:55 JoeJulian Yes, totally.
21:56 * JoeJulian considers...
21:57 JoeJulian I think there are tuning options for FSCache.
21:57 impl like, on the nfs side?
21:57 impl nfs client that is
21:58 JoeJulian GlusterFS->nfs->FSCache->user, so yes.
21:59 impl gotcha. ok, that's something i can experiment with for sure... at the moment though i can't even use nfs, because it's hanging so badly
22:00 JoeJulian http://www.linux-mag.com/id/7378/2/
22:00 glusterbot Title: FS-Cache & CacheFS: Caching for Network File Systems | Linux Magazine | Page 2 (at www.linux-mag.com)
22:01 impl cool, thanks
22:02 JoeJulian Hmm... this goes against everything I thought I knew about that: https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Storage_Administration_Guide/fscachenfs.html
22:02 glusterbot Title: 10.3. Using the Cache With NFS (at access.redhat.com)
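
For anyone wanting to try the FS-Cache route from the links above, a minimal sketch for a RHEL/CentOS 6 style client; the server and volume names are hypothetical, and cachefilesd needs local disk to hold the cache.

    yum install -y cachefilesd
    service cachefilesd start             # cache lives under /var/cache/fscache by default

    # gluster's built-in NFS server speaks NFSv3; "fsc" enables FS-Cache for this mount
    mount -t nfs -o vers=3,fsc server1:/homes /mnt/homes
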
22:08 impl huh... interesting. so for example with a current git repository i'm using, without -o fsc, i get ~2 seconds for a git status, whereas with glusterfs-fuse i get ~6 seconds. so actually that's due to some feature of nfs other than the cache?
22:11 impl hrm, maybe something weirder is going on. the system is hanging even when i'm not in the nfs-mounted directory
22:12 JoeJulian Those are always fun...
22:12 impl ok, once i umount it, it returns to its normal behavior. but otherwise just having it *mounted* makes the system hang before running any command from the shell... very weird
22:13 lmickh joined #gluster
22:14 jag3773 joined #gluster
22:16 JoeJulian home directory, perhaps .bash_history if you use PROMPT_COMMAND to keep history sync'ed.
22:16 impl ah, yeah, that could be.
22:16 impl makes sense
22:17 impl in fact i wonder if this is some crazy zsh flock() stuff
22:20 impl yeah. totally is. ok, well that solves that... thanks for the pointer
22:21 JoeJulian So this wouldn't help nfs, but assuming one user in one zone per home directory, eager locking might be useful.
22:23 wushudoin joined #gluster
22:28 impl that looks interesting- what would be the typical use case for having that disabled? usually the reason i've used optimistic locking is because the duration of the transaction is unpredictable... is that often the case for filesystems as well? (sorry, totally not my domain expertise if it's not obvious :P)
22:39 ProT-0-TypE joined #gluster
22:52 JoeJulian My understanding is that opportunistic locking allows the client to issue the lock before checking it in with the servers.
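
The eager locking JoeJulian mentions is a volume option; a sketch with a hypothetical volume name (the default varies by release, so it may already be enabled).

    # enable eager locking for the replication transaction path
    gluster volume set homes cluster.eager-lock on
    # confirm it was applied (volume info lists reconfigured options)
    gluster volume info homes | grep eager-lock
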
23:53 semiosis @lucky opportunistic locking
23:53 glusterbot semiosis: http://support.microsoft.com/kb/296264
23:53 semiosis hrm
23:56 chirino joined #gluster
