IRC log for #gluster, 2013-03-29


All times shown according to UTC.

Time Nick Message
00:06 glusterbot New news from newglusterbugs: [Bug 928781] hangs when mount a volume at own brick <http://goo.gl/ieOkk>
00:10 mtanner_w joined #gluster
00:16 raghug joined #gluster
00:22 foster joined #gluster
00:50 hateya joined #gluster
00:57 yinyin joined #gluster
01:10 jules_ joined #gluster
01:12 rastar joined #gluster
01:27 yinyin joined #gluster
01:39 portante joined #gluster
01:42 yinyin joined #gluster
01:43 kevein joined #gluster
01:44 yinyin joined #gluster
02:01 disarone joined #gluster
02:03 bala1 joined #gluster
02:13 daMaestro joined #gluster
02:15 yinyin joined #gluster
02:32 robos joined #gluster
02:39 glusterbot New news from newglusterbugs: [Bug 924132] reports a 503 error when download a container <http://goo.gl/pDZ8M>
02:52 glusterbot New news from resolvedglusterbugs: [Bug 923580] ufo: `swift-init all start` fails <http://goo.gl/F73bO>
02:53 bharata joined #gluster
03:02 yinyin joined #gluster
03:11 _pol joined #gluster
03:13 _pol_ joined #gluster
04:03 sripathi joined #gluster
04:07 yinyin joined #gluster
04:07 vpshastry joined #gluster
04:08 saurabh joined #gluster
04:08 rastar joined #gluster
04:19 nueces joined #gluster
04:21 pai joined #gluster
04:23 ultrabizweb joined #gluster
04:27 bala1 joined #gluster
04:33 pai joined #gluster
04:46 rastar joined #gluster
05:07 vpshastry joined #gluster
05:08 hagarth joined #gluster
05:11 yinyin joined #gluster
05:12 raghug joined #gluster
05:15 lalatenduM joined #gluster
05:19 pai joined #gluster
05:21 mohankumar joined #gluster
05:22 ultrabizweb joined #gluster
05:23 raghug joined #gluster
05:27 shylesh joined #gluster
05:32 satheesh joined #gluster
05:33 premera joined #gluster
05:35 yinyin joined #gluster
05:36 aravindavk joined #gluster
05:36 raghug joined #gluster
05:39 zwu joined #gluster
06:04 hagarth joined #gluster
06:17 vshankar joined #gluster
06:37 ricky-ticky joined #gluster
06:42 _br_ joined #gluster
06:42 ramkrsna joined #gluster
07:00 vimal joined #gluster
07:16 ngoswami joined #gluster
07:25 hagarth joined #gluster
07:27 raghug joined #gluster
07:29 ekuric joined #gluster
07:43 ramkrsna joined #gluster
07:43 vpshastry1 joined #gluster
07:56 guigui1 joined #gluster
08:00 ramkrsna left #gluster
08:09 andreask joined #gluster
08:16 sripathi joined #gluster
08:28 tjikkun_work joined #gluster
08:31 camel1cz joined #gluster
08:35 guigui3 joined #gluster
08:36 raghug joined #gluster
08:40 piotrektt joined #gluster
08:43 joeto joined #gluster
08:45 ekuric joined #gluster
08:50 camel1cz joined #gluster
08:50 camel1cz left #gluster
09:01 dobber_ joined #gluster
09:03 raghug joined #gluster
09:05 ramkrsna joined #gluster
09:05 ramkrsna joined #gluster
09:09 guigui3 joined #gluster
09:14 yinyin joined #gluster
09:17 ramkrsna joined #gluster
09:41 hateya joined #gluster
09:42 vpshastry joined #gluster
09:50 joehoyle joined #gluster
09:51 mtanner joined #gluster
10:14 ninkotech_ joined #gluster
10:17 bharata I am trying to convert a single brick dht volume into a double brick replica 2 kind of volume by adding a brick and manually triggering a self heal. Unable to understand the o/p of the gluster heal info cmd (http://dpaste.com/1038808/). Why did "Number of entries" come down from 1 to 0 after healing?
10:17 glusterbot Title: dpaste: #1038808 (at dpaste.com)
10:24 bharata Also when I start a manual heal as explained above, is there a way to know when the data from brick 1 has been fully replicated to brick 2?
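A minimal sketch of the sequence bharata describes, assuming a volume named "myvol" and the new brick on server2 (both names hypothetical):
    # grow a single-brick distribute volume into a 2-brick replica, then trigger a full heal
    gluster volume add-brick myvol replica 2 server2:/export/brick1
    gluster volume heal myvol full
    # watch progress; "Number of entries" drops to 0 once pending heals finish
    gluster volume heal myvol info
    gluster volume heal myvol info healed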
10:37 huyente joined #gluster
10:41 huyente glusterfs version 3.3.1,
10:41 huyente sometimes I see a file lacking the read permission: writable, regular file, no read permission
10:41 huyente the log file says:
10:41 huyente W [client3_1-fops.c:2630:client3_1_lookup_cbk] 0-gv0-client-1: remote operation failed: Stale NFS file handle. Path: /viewclick (58af05de-85ec-4c4b-a3b0-381a9f0126c4) W [afr-common.c:1196:afr_detect_self_heal_by_iatt] 0-gv0-replicate-0: /viewclick: gfid different on subvolume E [afr-self-heal-common.c:1419:afr_sh_common_lookup_cbk] 0-gv0-replicate-0: Missing Gfids for /viewclick E [afr-self-heal-common.c:2160:afr_self_heal_completion_cbk] 0-gv0-replicate-0
10:41 super_favadi joined #gluster
10:42 huyente Google pointed me to this page: http://mseas.mit.edu/download/phaley/Gluster/issues.html
10:42 glusterbot <http://goo.gl/LWMfY> (at mseas.mit.edu)
10:42 favadi joined #gluster
10:42 huyente but that file doesn't have the sticky bit
10:43 huyente what surprised me is: the client is mounted via Gluster Native Client, not NFS
10:43 redsolar_office joined #gluster
10:43 huyente anyone else get this problem?
10:44 mtanner joined #gluster
10:44 huyente I have tried to disable the stat-prefetch: performance.stat-prefetch: off but nothing changed
10:45 favadi huyente: never heard of this problem
10:45 mtanner joined #gluster
10:48 huyente on the server, this seems to be a dead symlink: /export/brick1/viewclick: broken symbolic link to `../../00/00/00000000-0000-0000-0000-000000000001/viewclick.bk'
10:54 favadi glusterbot: help
10:54 glusterbot favadi: (help [<plugin>] [<command>]) -- This command gives a useful description of what <command> does. <plugin> is only necessary if the command is in more than one plugin.
11:01 huyente How do I delete this folder?
11:33 badone joined #gluster
11:33 samppah @gfid
11:33 samppah @gfid
11:34 glusterbot samppah: The gfid is a uuid that's assigned to represent a unique inode that can be identical across replicas. It's stored in extended attributes and used in the .glusterfs tree. See http://goo.gl/Bf9Er and http://goo.gl/j981n
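For reference, a minimal way to inspect the gfid from huyente's log on the brick; the brick path and gfid are taken from the messages above, the rest is an assumption:
    # show the hex-encoded trusted.gfid xattr on each brick's copy of the file
    getfattr -d -m . -e hex /export/brick1/viewclick
    # the same gfid is linked under the brick's .glusterfs tree (first two byte pairs form the path)
    ls -l /export/brick1/.glusterfs/58/af/58af05de-85ec-4c4b-a3b0-381a9f0126c4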
11:54 disarone joined #gluster
11:56 andreask joined #gluster
11:58 vpshastry joined #gluster
12:14 layer3switch joined #gluster
12:28 vikumar joined #gluster
12:31 NeatBasis joined #gluster
12:37 hateya joined #gluster
12:44 ultrabizweb joined #gluster
12:47 robos joined #gluster
12:49 joehoyle Hey, I have gluster set up, using it for webservers, I am getting some race conditions whereby a file is uploaded, then once finished the browser requests that image at the URL, however if this request goes to a second web server it sometimes 404s for that image, as it seems it hasn't got to the second web server somehow in time, waiting 1 second before requesting the image pretty much fixes the issue, is there anything I can do to
12:52 rosmo set up some persistence on your load balancer
12:52 rosmo like 2-5 seconds
12:53 rosmo however i think gluster should be pretty much synchronous on that
12:54 joehoyle rosmo: hmm right, lb is a good idea, think I can do that with ec2 elb, is there a way to make gluster synchronous, not sure if it's even asynchronous in nature or not
12:55 rosmo i think it should be synchronous, but you might have a non-gluster issue
12:56 joehoyle rosmo: that is very possible! I'll give lb stickiness a go, thanks very much!
12:56 rosmo parallel stuff is pretty hard :)
12:56 joehoyle ya
12:56 joehoyle rosmo: I presume pretty much any multi-server web hosting needs some form of shared drive though right, and gluster seems to be the best thing I have found for that
12:57 rosmo well, all cluster filesystems have their cons and pros
12:57 joehoyle sure
12:58 rosmo gfs is nice but it's slow as balls for certain workloads
12:58 joehoyle rosmo: ok cool, maybe I look into some benchmarks for web hosting on ec2 specifically
12:59 joehoyle one thing I know, s3 is way too slow!
12:59 rosmo i guess best are non-filesystems (object storage servers, or even simple storage)
13:00 rosmo you're using ebs? by concept that can't be too fast
13:00 joehoyle I am, it's not very fast, good enough for web serving in my experience
13:01 joehoyle if I had everything on one ebs I would be happy with that
13:02 rosmo disclaimer: i've never used amazon's stuff but just thinking how it's done sort of says high latency
13:02 joehoyle yeah, compared to dedicated hardware it is, but I am running php / wordpress, so trust me, the latency is the least of my worries!
13:03 xiu hi, it seems that i have this problem: http://permalink.gmane.org/gmane.comp.file-systems.gluster.user/3594 but i can't find any solution (i'm running 3.2.6 on a distributed/replicated volume), is this a known problem?
13:03 glusterbot <http://goo.gl/k2rE5> (at permalink.gmane.org)
13:03 joehoyle rosmo: well I just turned on sticky and that seems to have sorted that issue, so I can rest again! thanks very much :)
13:05 rosmo joehoyle: no probs, happy to help :)
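A rough sketch of the load-balancer persistence rosmo suggests, expressed with the AWS CLI for a classic ELB; the balancer name, policy name, and the choice of tooling are assumptions, not anything confirmed in the channel:
    # create a short-lived cookie-stickiness policy and attach it to the HTTP listener
    aws elb create-lb-cookie-stickiness-policy --load-balancer-name web-lb --policy-name sticky-5s --cookie-expiration-period 5
    aws elb set-load-balancer-policies-of-listener --load-balancer-name web-lb --load-balancer-port 80 --policy-names sticky-5s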
13:09 NeatBasis joined #gluster
13:09 aliguori joined #gluster
13:20 yinyin_ joined #gluster
13:24 JoeJulian xiu: it seems pretty unlikely that you're getting the same problem as one that's 3 years old, even with your year and a half old version. So what are your symptoms?
13:25 xiu JoeJulian: i get some "no such file or directory" issues on one node
13:27 JoeJulian node = smurf
13:27 xiu i think it's due to the fact that i'm manipulating the files from multiple clients in a short time
13:27 JoeJulian Besides being ANY endpoint on your network, what's a node mean in that sentence?
13:28 xiu sorry, one client
13:28 JoeJulian Fuse or nfs?
13:28 xiu fuse
13:29 JoeJulian The target that's throwing the error, is it a file or directory?
13:30 JoeJulian I suspect you're right about it being a race condition between clients, but I'll still see if there's anything we can do to mitigate that.
13:32 xiu JoeJulian: a directory
13:32 xiu i tried to add these options to my volume:performance.write-behind: off
13:32 xiu performance.flush-behind: off
13:32 xiu performance.io-cache: off
13:33 xiu it seems that the issue is occurring less frequently but it is still there
13:33 JoeJulian Oooh, there's a mount option that might help too... now what was that...
13:35 JoeJulian I just saw it mentioned recently somewhere. I thought it was direct-io-mode=true, but I haven't found that reference yet.
13:40 xiu hum i don't use o_direct, how would it help me ?
13:40 JoeJulian Might not... Still trying to find that reference.
13:41 xiu ok thanks
13:41 JoeJulian http://www.mail-archive.com/gluster-users@gluster.org/msg11320.html
13:41 glusterbot <http://goo.gl/1fe8p> (at www.mail-archive.com)
13:42 JoeJulian a. Either app opens with O_DIRECT or mount glusterfs with --enable-direct-io to keep page-cache out of the way of consistency
13:43 JoeJulian And that mount option won't pass the tests in mount.glusterfs so you'd either have to modify that to accept that parameter, or mount using the glusterfs command if you want to try that.
13:44 JoeJulian You might also want to analyze an strace of your application to see what it's doing that could make a directory be missing. If it's renaming directories, I could see that happening a lot.
13:45 JoeJulian Or removing and recreating them.
13:48 camel1cz joined #gluster
13:48 camel1cz left #gluster
13:49 xiu JoeJulian: thanks!
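Pulling JoeJulian's suggestions together as a sketch; the volume name "myvol", server "server1", and mount point are assumptions, and the direct-io flag shown is the one the glusterfs client binary accepts, since mount.glusterfs may reject it:
    gluster volume set myvol performance.write-behind off
    gluster volume set myvol performance.flush-behind off
    gluster volume set myvol performance.io-cache off
    # bypass mount.glusterfs and pass direct-io-mode straight to the client
    glusterfs --volfile-server=server1 --volfile-id=myvol --direct-io-mode=enable /mnt/myvol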
14:01 lanning joined #gluster
14:09 camel1cz joined #gluster
14:09 disarone joined #gluster
14:12 camel1cz left #gluster
14:18 rastar joined #gluster
14:44 hateya joined #gluster
14:52 bugs_ joined #gluster
14:54 daMaestro joined #gluster
15:05 NeatBasis_ joined #gluster
15:20 guigui3 joined #gluster
15:21 jag3773 joined #gluster
15:24 awheeler_ Is there a doc for migrating volumes from 3.3 to 3.4?  And vice-versa?
15:38 vpshastry left #gluster
15:42 jthorne joined #gluster
15:50 lh joined #gluster
15:50 lh joined #gluster
15:54 jthorne hello #gluster. i have a client with a 4 node distributed gluster cluster in AWS. Amazon is forcing them to stop and start one of the nodes to move it to new hardware, doing so will change the private IP this gluster node is using to talk to the other gluster nodes. is there a slick way to make this change without too much hassle or will the client need to 1) bring up a new node and rebalance, 2) rebalance onto 3 nodes, bring
15:54 jthorne down the one node, then bring back up and rebalance again, or 3) treat this as a failed server? thanks
15:57 NuxRo jthorne: i guess the best thing to do here is a brick-replace, if you have time
15:57 NuxRo bring online a new brick then replace the one going offline
15:57 jthorne that's probably the safest bet
15:58 NuxRo yep
15:59 jthorne cool thanks.
16:00 NuxRo yw, good luck
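A sketch of the replace-brick NuxRo recommends, using 3.3-era syntax; hostnames, brick paths, and the volume name are hypothetical:
    gluster peer probe newnode
    gluster volume replace-brick myvol oldnode:/export/brick1 newnode:/export/brick1 start
    gluster volume replace-brick myvol oldnode:/export/brick1 newnode:/export/brick1 status
    # once status reports the migration complete
    gluster volume replace-brick myvol oldnode:/export/brick1 newnode:/export/brick1 commit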
16:03 _pol joined #gluster
16:04 _pol joined #gluster
16:07 _pol joined #gluster
16:08 _pol joined #gluster
16:12 flrichar joined #gluster
16:25 hagarth joined #gluster
16:31 hagarth joined #gluster
16:35 Mo_ joined #gluster
16:38 joshcarter joined #gluster
16:38 joshcarter joined #gluster
16:45 nueces joined #gluster
16:53 tomsve joined #gluster
16:55 red_solar joined #gluster
16:57 andreask joined #gluster
17:00 shylesh joined #gluster
17:01 tomsve joined #gluster
17:28 awheeler_ How do I join the gluster-users mailing list?  http://www.gluster.org/interact/mailinglists/ doesn't tell me.
17:28 glusterbot Title: Mailing Lists | Gluster Community Website (at www.gluster.org)
17:28 awheeler_ Doh, well, it tells me how to unsubscribe, and that takes me to how I can subscribe.  lol
17:34 TomS joined #gluster
17:42 ninkotech__ joined #gluster
17:43 ninkotech joined #gluster
17:52 joehoyle joined #gluster
18:05 ninkotech__ joined #gluster
18:05 ninkotech_ joined #gluster
18:05 ninkotech joined #gluster
18:11 mnaser joined #gluster
18:13 NeatBasis joined #gluster
18:18 joe joined #gluster
18:21 semiosis johnmark: ping
18:24 samppah is it possible to create a new volume using an lvm snapshot as the backend?
18:25 samppah volume creation seems to fail but it doesn't show any proper error message
18:25 ninkotech_ joined #gluster
18:25 ninkotech joined #gluster
18:26 ninkotech__ joined #gluster
18:27 awheeler_ joined #gluster
18:31 JoeJulian samppah: Normal caveats wrt lvm snapshots... make sure they've been allocated extents so they're not read-only. Since you're adding a brick that's already considered part of a volume, if you're running 3.3+ you'll need to clear the extended attributes per ,,(reuse brick)
18:31 glusterbot samppah: To clear that error, follow the instructions at http://goo.gl/YUzrh or see this bug http://goo.gl/YZi8Y
18:32 samppah JoeJulian: yes, i have done that as explained in your blog
18:33 samppah although i'm a bit unsure if it's necessary to clear attributes and remove .glusterfs directories from all subdirectories?
18:33 JoeJulian no, just the brick root.
18:34 JoeJulian I'm not even convinced that removing .glusterfs is required.
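The "reuse brick" cleanup JoeJulian references, sketched against an assumed snapshot mount point; per the exchange above, removing .glusterfs may not be strictly required:
    setfattr -x trusted.glusterfs.volume-id /mnt/brick-snap
    setfattr -x trusted.gfid /mnt/brick-snap
    rm -rf /mnt/brick-snap/.glusterfs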
18:40 samppah aargh..
18:41 samppah traditional pebkac error...
18:42 samppah tried to create volume using wrong hostname...
18:55 ninkotech_ joined #gluster
18:55 ninkotech joined #gluster
18:55 ninkotech__ joined #gluster
18:59 _pol joined #gluster
19:03 andreask joined #gluster
19:06 JoeJulian :)
19:14 glusterbot New news from newglusterbugs: [Bug 916372] NFS3 stable writes are very slow <http://goo.gl/Z0gaJ>
19:34 _pol_ joined #gluster
19:36 _pol joined #gluster
19:57 __Bryan__ joined #gluster
19:58 lh joined #gluster
20:07 _pol joined #gluster
20:22 _pol joined #gluster
20:23 _pol joined #gluster
20:25 _pol joined #gluster
20:25 _pol joined #gluster
20:35 rotbeard joined #gluster
21:39 disarone joined #gluster
21:40 disarone joined #gluster
21:45 Mo_ joined #gluster
22:02 ferrel joined #gluster
22:09 ferrel joined #gluster
22:11 ferrel geo-replication question for someone... Say I have a volume containing virtual machine images and I have some machines that are allocated multiple image files. Inside the VMs the volumes are raided or LVM'd together so if the image files are not "snapshot" or replicated at the exact same point I could easily have file corruption. Does geo-replication get a "point in time" version consistent across all files in a volume? or is it unique for each file as it is rsync'd?
22:40 joehoyle joined #gluster
22:42 joehoyle joined #gluster
22:58 pib1944 joined #gluster
23:08 layer3switch joined #gluster
23:16 JoeJulian ferrel: each file
23:23 manik joined #gluster
23:29 NeatBasis joined #gluster
23:33 layer3switch joined #gluster
23:33 ferrel JoeJulian: ahh... bad news, but thanks
23:44 ferrel JoeJulian: any reason I wouldn't be able to LVM snapshot an underlying brick's filesystem to get a consistent image from across multiple files?
23:46 lanning currently gluster does not work below the local brick filesystem
23:47 lanning the snapshot would be separate and gluster would know nothing about it (for georep or otherwise)
23:48 ferrel correct, I'm assuming I'd have to handle the rsync on my own at that point etc... I'm trying to find out how I can back up the volumes with consistency between multiple files
23:48 lanning there is talk about adding a higher level snapshot by synchronizing LVM snapshots across the volume.
23:49 lanning if your volume is just a replica, then you can snapshot one brick and rsync to your heart's content
23:50 lanning if it is dht (distributed), then you will have to sync the snapshots of all bricks in the volume
23:50 ferrel perhaps a better approach would be to just make sure each of our virtual machines only has one image file attached... I think I could just use geo rep then
23:50 ferrel yeah, we have just a replica setup for VM image file storage
23:53 ferrel another question if someone can answer? ... is it "safe" to create a new replica volume using two Bricks that have different files on them? essentially I want to Merge the filesystems
23:54 lanning no, copy the second brick to the first, then clear the second, then create your volume
23:54 ferrel I did a few small tests with little text files and it seemed to work... but I'll be working with many hundreds of GBs in production with the image files
23:55 ferrel ... sounds reasonable
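What lanning describes, as a rough sketch run from server1; hostnames, paths, and the volume name are assumptions:
    # consolidate the second brick's data onto the first, then empty the second
    rsync -a server2:/export/brick1/ /export/brick1/
    ssh server2 'rm -rf /export/brick1/*'
    # create the replica volume over the merged brick and the cleared one
    gluster volume create vmimages replica 2 server1:/export/brick1 server2:/export/brick1
    gluster volume start vmimages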
23:55 lanning oh, and a side note with the georep, you will want the snapshot anyway.  Otherwise you will have issues with VM image updates while the image is being rsync'd
23:56 lanning this is unless the georep daemon stages the file before sync'ing, which I doubt it does.
23:56 ferrel I see, good to know
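A sketch of the LVM-snapshot-then-rsync backup lanning outlines, with made-up VG/LV names; add nouuid to the mount options if the brick filesystem is XFS:
    lvcreate --snapshot --size 10G --name brick1_snap /dev/vg0/brick1
    mkdir -p /mnt/brick1_snap && mount -o ro /dev/vg0/brick1_snap /mnt/brick1_snap
    rsync -a --exclude=.glusterfs /mnt/brick1_snap/ backuphost:/backups/brick1/
    umount /mnt/brick1_snap && lvremove -f /dev/vg0/brick1_snap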
