
IRC log for #gluster, 2013-05-28


All times shown according to UTC.

Time Nick Message
00:00 edong23 joined #gluster
00:03 yinyin joined #gluster
00:06 StarBeast joined #gluster
00:25 jiku joined #gluster
00:57 Kins joined #gluster
00:58 jiku joined #gluster
01:20 majeff joined #gluster
01:34 bala joined #gluster
01:41 kevein joined #gluster
01:53 jiku joined #gluster
02:42 red-solar joined #gluster
02:57 kshlm joined #gluster
03:00 bharata joined #gluster
03:18 saurabh joined #gluster
03:22 vshankar joined #gluster
03:26 anands joined #gluster
03:43 mohankumar__ joined #gluster
03:53 vpshastry joined #gluster
04:11 vpshastry joined #gluster
04:12 sgowda joined #gluster
04:20 ngoswami joined #gluster
04:31 ngoswami joined #gluster
04:35 majeff joined #gluster
04:38 ngoswami joined #gluster
04:47 lalatenduM joined #gluster
04:50 shylesh joined #gluster
05:29 satheesh joined #gluster
05:35 bulde joined #gluster
05:37 bala joined #gluster
05:40 hagarth joined #gluster
05:46 sgowda joined #gluster
05:56 bharata joined #gluster
06:02 ngoswami_ joined #gluster
06:03 StarBeast joined #gluster
06:08 rastar joined #gluster
06:10 sgowda joined #gluster
06:14 rgustafs joined #gluster
06:16 rwheeler joined #gluster
06:17 aravindavk joined #gluster
06:18 glusterbot New news from newglusterbugs: [Bug 959887] clang static src analysis of glusterfs <http://goo.gl/gf6Vy>
06:22 koubas joined #gluster
06:22 16WAAD1FA joined #gluster
06:24 jtux joined #gluster
06:25 majeff joined #gluster
06:30 satheesh joined #gluster
06:57 vimal joined #gluster
07:05 ctria joined #gluster
07:08 bala joined #gluster
07:10 tjikkun_work joined #gluster
07:10 ricky-ticky joined #gluster
07:12 bala joined #gluster
07:18 glusterbot New news from newglusterbugs: [Bug 953694] Requirements of Samba VFS plugin for glusterfs <http://goo.gl/v7g29>
07:18 hybrid512 joined #gluster
07:20 bharata joined #gluster
07:24 sgowda joined #gluster
07:28 andreask joined #gluster
07:53 puebele joined #gluster
07:54 dobber joined #gluster
07:57 thomasle_ joined #gluster
08:08 tjikkun_work joined #gluster
08:13 puebele1 joined #gluster
08:15 Norky joined #gluster
08:18 mariusp_ joined #gluster
08:19 vpshastry joined #gluster
08:22 dxd828 joined #gluster
08:23 spider_fingers joined #gluster
08:24 StarBeast joined #gluster
08:31 thomaslee joined #gluster
08:32 lanning joined #gluster
08:32 hchiramm_ joined #gluster
08:37 Staples84 joined #gluster
08:38 atrius_ joined #gluster
08:57 duerF joined #gluster
08:59 majeff1 joined #gluster
08:59 zoldar recently, due to hardware error, I had to remove some bricks and add them later. Now the problem is that when I shutdown the other node (which is used as volfile-server by clients), the volumes become unavailable. What may I be missing? How to make clients aware of the re-added bricks?
08:59 deepakcs joined #gluster
09:08 kelkoobenoitr hello, i have the following setup:
09:08 kelkoobenoitr 3 DellR620 with 5T Raid0 bricks over 10Gb links
09:08 kelkoobenoitr 1 Client accessing through a 10Gb link
09:08 kelkoobenoitr If i create a volume made of 1 brick on 1 R620, i reach 350MB/s
09:08 kelkoobenoitr If i create a volume made of 3 bricks (one on each R620), i also reach the same write speed, even if data is spread across the 3 bricks.
09:08 kelkoobenoitr why ?
09:11 kelkoobenoitr FYI, local disks are saturating at 850MB/s, and network tested with iperf goes up to 950MB/s
09:11 vpshastry joined #gluster
09:12 kelkoobenoitr i used the following command to create the volume: gluster volume create test-volume 160.103.197.23:/nobackup 160.103.197.25:/nobackup 160.103.197.26:/nobackup
09:18 anands joined #gluster
09:18 manik joined #gluster
09:20 hchiramm_ joined #gluster
09:41 majeff joined #gluster
09:53 js_ do clients offer anything to the cluster itself except extra load?
09:54 lh joined #gluster
09:54 lh joined #gluster
10:03 spider_fingers joined #gluster
10:15 samppah @latest
10:15 glusterbot samppah: The latest version is available at http://goo.gl/zO0Fa . There is a .repo file for yum or see @ppa for ubuntu.
10:15 bharata joined #gluster
10:19 andreask joined #gluster
10:25 bala joined #gluster
10:26 js_ if i have two nodes with 100 gb each and replicate, how will adding a third node affect performance/size/etc?
10:36 spider_fingers joined #gluster
10:37 NuxRo js_: you want to have 3x replicas?
10:38 js_ NuxRo: i don't know, what would it mean?
10:40 NuxRo js_: if you run a setup with replica (I assume 2x) then you can't just add another node/brick, you must add a number of bricks multiple of the replica factor
10:40 NuxRo so if you want to enlarge a replica 2 volume you need to add at least 2 new bricks
10:41 js_ NuxRo: all right, and does this mean i can replicate two replicas, or even stripe (simulating raid 1+0)?
10:42 js_ if i have 2 servers with 1 brick each on 100gb, and i then make a new replica of 2 new servers with one brick each of 100gb, can i then stripe these two replicas together to construct a 200gb volume?
10:43 NuxRo if i got it right, then no
10:44 NuxRo but out of 4 bricks you can build a replicated+striped volume
10:44 js_ so it's always brick + brick, never group of bricks + group of bricks?
10:44 js_ ok
10:45 NuxRo well, it's all bricks, but the way you put it earlier read as if you actually wanted to combine 2 volumes in another one, which is not possible afaik
10:46 NuxRo so if you have 4 bricks and you want something like "raid10" it can be done specifying "stripe 2 replica 2 brick1 brick2 brick3 brick4"
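A rough sketch of the command NuxRo is describing (the volume name, hostnames and brick paths are invented for illustration): adjacent bricks form the replica pairs, and the stripe is laid across those pairs.

    gluster volume create raid10vol stripe 2 replica 2 \
        server1:/bricks/b1 server2:/bricks/b1 \
        server3:/bricks/b1 server4:/bricks/b1
    gluster volume start raid10vol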
10:46 badone joined #gluster
10:47 NuxRo js_: you should read this http://www.gluster.org/wp-content/uploads/2012/05/Gluster_File_System-3.3.0-Administration_Guide-en-US.pdf
10:47 glusterbot <http://goo.gl/bzF5B> (at www.gluster.org)
10:47 js_ NuxRo: cool, thanks a lot for your help
10:47 NuxRo np
10:51 anands joined #gluster
10:59 vpshastry joined #gluster
11:02 majeff joined #gluster
11:04 lpabon joined #gluster
11:05 wgao joined #gluster
11:06 wgao joined #gluster
11:20 flrichar joined #gluster
11:24 hagarth joined #gluster
11:27 zoldar is it possible that two bricks of the same, replicated volume differ in space used? The hard disks and filesystems are the same for both. In one case it's a difference of just 30MB (228GB used), but in the case of the other volume on the same pair of peers it's about ~4 GB (103GB vs 98GB used).
11:38 ujjain joined #gluster
11:41 charlescooke joined #gluster
11:52 zoldar ok, seems that "df" isn't reliable in that case - I did "du -s" on each brick and results are very close. Maybe not identical, but not a couple GB off either
11:54 hagarth joined #gluster
11:58 Guest80797 joined #gluster
12:01 Airbear_ joined #gluster
12:05 Airbear_ Hi, does anyone have any experience of running a gluster volume rebalance on a volume which runs qemu VM disk images?
12:05 Airbear_ ^gluster 3.3.1
12:25 yinyin_ joined #gluster
12:27 msmith_ joined #gluster
12:31 manik1 joined #gluster
12:31 mrEriksson joined #gluster
12:31 mrEriksson Hello folks
12:32 dastar_ joined #gluster
12:32 mrEriksson Is there a way to create a gluster volume with only one brick? I wish to have a replicated volume over two servers, but only one of them is available at the moment. But I would still like to get Gluster up and running and then add the second brick when the second server is up and running. Is that possible?
12:45 Airbear_ mrEriksson: From memory, the gluster volume create command will not let you use bricks which are not available when you run the command.  You could setup a temporary gluster server with the appropriate brick and run the gluster vol create command, then shutdown that server to leave things as you want them.
12:46 Airbear_ When the time is right, just follow the instructions to replace a dead node: http://gluster.org/community/documentation/index.php/Gluster_3.2:_Brick_Restoration_-_Replace_Crashed_Server
12:46 ndevos mrEriksson: yes, you can 'gluster volume create THE_VOLUME pub_hostname:/bricks/simple/volume'
12:46 glusterbot <http://goo.gl/60uJV> (at gluster.org)
12:47 mrEriksson Airbear_: Correct, it won't let me create a volume with only one brick
12:47 Airbear_ No, I meant what ndevos says.
12:47 Airbear_ Only I thought it wouldn't work!
12:47 ndevos mrEriksson: it will allow that, just dont use 'localhost' or such, try the public ip-address if all else fails
12:48 mrEriksson ndevos: Aah, and afterwards, I can add a replica brick to the volume?
12:49 aliguori joined #gluster
12:49 ndevos mrEriksson: yes you can, I think the command looks like 'gluster volume add-brick replica 2 new_server:/bricks/second/one'
12:51 mrEriksson Perfect! Thanks alot!
12:52 mrEriksson I didn't realize that the replica-option wasn't mandatory, and googling it just led me to pages saying that I needed the same number of bricks as replicas (or x times as many)
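Piecing together what ndevos describes (the volume name, hostnames and brick paths here are examples only), the two steps would look roughly like this:

    # create the volume with a single brick; use a resolvable public hostname, not localhost
    gluster volume create myvol server1:/bricks/myvol
    gluster volume start myvol

    # later, once the second server exists, grow it into a 2-way replica
    gluster volume add-brick myvol replica 2 server2:/bricks/myvol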
12:55 mrEriksson How can I 'reuse' a brick? I did some tests and created a volume. But now I've deleted that volume and I'm trying to add the brick to a new volume instead, but getting the error that the brick is already in use
13:08 edward1 joined #gluster
13:10 clutchk joined #gluster
13:14 Brian_TS joined #gluster
13:19 majeff joined #gluster
13:22 Airbear_ mrEriksson: I always reformat the brick filesystem and delete the mountpoint directory.  I don't know specifically what needs to be done.
13:22 Airbear_ JoeJulian has an article somewhere which describes the details.
13:23 36DAAPD81 joined #gluster
13:23 hchiramm_ joined #gluster
13:26 samppah hmm, has there been some changes on 3.4 beta-2 that prevent storing vm images on glusterfs when network.remote-dio is disabled?
13:27 mrEriksson Airbear_: Formatting the partition did the trick, thanks!
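Reformatting works; the commonly cited alternative (covered in the JoeJulian article mentioned above) is to strip the GlusterFS metadata from the old brick instead of wiping it. A sketch for a 3.3-era brick, with an invented path:

    setfattr -x trusted.glusterfs.volume-id /bricks/oldbrick
    setfattr -x trusted.gfid /bricks/oldbrick
    rm -rf /bricks/oldbrick/.glusterfs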
13:28 plarsen joined #gluster
13:28 yinyin_ joined #gluster
13:31 spider_fingers left #gluster
13:33 mohankumar__ joined #gluster
13:37 Airbear_ samppah: qemu, by default, will open qcow2 disks with O_DIRECT call.  This must be supported by the FUSE kernel module and glusterfs.
13:40 portante joined #gluster
13:42 JoeJulian Airbear_: I've done that before (rebalance on vm images). Raw images with running kvm instances. I was frankly rather surprised that it worked.
13:43 samppah Airbear_: nod, i'm wondering that i'm pretty sure it worked out of the box with 3.4 alpha without enabling remote-dio
13:43 JoeJulian raw images work
13:43 JoeJulian well, should work
13:44 samppah i think my previous tests was with qcow2 images aswell
13:45 Brian_TS left #gluster
13:45 JoeJulian Did you enable write-through caching for the disk image? iirc, that avoids o_direct also.
13:45 mariusp joined #gluster
13:45 samppah no, it was with cache set to none
13:46 JoeJulian Must have been a bug in that alpha, then. Maybe the dio bit was flipped.
13:47 rb2k joined #gluster
13:47 samppah yeah, it must have been something like that
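For reference, the option samppah is talking about is set per volume; a sketch with an example volume name, which lets clients such as qemu with cache=none open images with O_DIRECT:

    gluster volume set myvol network.remote-dio enable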
13:47 Airbear_ JoeJulian: Previous versions would fail to rebalance files which were modified during rebalance.  I think 3.3.1 uses a sliding lock similar to 3.3.1 self-heal.  We have replaced crashed bricks and the self-heal which resulted was non-blocking for running VMs.  Although, any new call to open a file triggered a blocking self-heal.
13:48 Airbear_ ^ We use .qcow2 images, not raw.
13:48 rb2k gluster volume remove-brick test-fs-cluster-1 replica 2 fs-19.example.com:/mnt/brick37 start ====>  stderr was: number of bricks provided (1) is not valid. need at least 2 (or 2xN)
13:48 rb2k I don't quite get the error
13:48 rb2k I said replication factor of two
13:48 mooperd joined #gluster
13:48 rb2k oh wait, never mind. It switched to distributed for some reason
13:48 samppah rb2k: how many bricks there are currently
13:48 samppah 'oh ok
13:49 rb2k it's currently 3
13:49 rb2k but I'm not quite sure why it switched over to distributed
13:50 lpabon joined #gluster
13:55 rb2k is there an easy way to change a volume from distributed to replicated?
13:55 JoeJulian Add an equal number of bricks and "replica 2"
13:55 rb2k is there a way to do it without adding bricks?
13:56 rb2k e.g. re-adding the current ones?
13:56 rb2k I don't even know how it switched over to distributed
13:56 rb2k probably some error in the automation scripts
13:56 JoeJulian remove-brick down to your target number first...
13:58 rb2k ok, now I have a distributed volume with 2 bricks
14:00 rb2k I guess the main problem was that I wanted to replace a brick, but it turned into add + remove
14:00 rb2k and that somehow broke the replicated status into a distributed one
14:02 JoeJulian Only way to have done that would have been to specify replica 1
14:03 rb2k ha, ok
14:03 rb2k I could just remove one of the bricks and re-add it as replica 2
14:03 rb2k JoeJulian: replica 1 isn't allowed using the CLI though, right?
14:04 JoeJulian You could also just not add the "replica n" stanza at all.
14:04 JoeJulian Sure it is. That's how you convert a replica volume into a non-replicated one.
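In other words, the conversion is done by passing a new replica count to add-brick (or remove-brick); a sketch with invented names:

    # distributed (replica 1) -> 2-way replicated: add one matching brick per existing brick
    gluster volume add-brick myvol replica 2 server2:/bricks/myvol
    # the reverse direction uses remove-brick together with a lower replica count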
14:11 bugs_ joined #gluster
14:13 lnxsix joined #gluster
14:15 mooperd_ joined #gluster
14:17 ninkotech__ joined #gluster
14:18 clag__ joined #gluster
14:18 eryc_ joined #gluster
14:18 johnmark joined #gluster
14:18 mindbender joined #gluster
14:18 aliguori_ joined #gluster
14:18 ctria joined #gluster
14:18 rosco joined #gluster
14:18 VSpike joined #gluster
14:18 Goatbert joined #gluster
14:19 satheesh joined #gluster
14:20 Airbear joined #gluster
14:31 daMaestro joined #gluster
14:33 __Bryan__ joined #gluster
14:37 jtux joined #gluster
14:38 JonnyNomad joined #gluster
14:42 jbrooks joined #gluster
14:45 joelwallis joined #gluster
14:46 lllux joined #gluster
14:49 jbrooks joined #gluster
14:50 kaptk2 joined #gluster
14:52 hjmangalam1 joined #gluster
14:54 Rocky__ joined #gluster
14:55 ThatGraemeGuy joined #gluster
14:57 lllux Hi all, anyone using gluster natively with qemu know anything about this error? i am able to mount the vol images using mount -t gluster and view the test.img file but: -drive file=gluster://192.168.0.50/images/test.img,if=none,id=drive-virtio-disk0: could not open disk image gluster://192.168.0.50/images/test.img: No data available
15:21 manik1 joined #gluster
15:23 gmcwhistler joined #gluster
15:26 jthorne joined #gluster
15:33 hchiramm_ joined #gluster
15:35 duerF joined #gluster
15:36 aliguori joined #gluster
15:41 ixmun joined #gluster
15:43 ixmun Someone knows if it is possible to share log files of apache servers for example, in multiple glusterfs clients?
15:43 ixmun well, it is possible... but, can it be bad?
15:43 ixmun (I am not much experienced on glusterfs)
15:44 semiosis ixmun: could you do it with NFS?
15:44 semiosis having multiple apaches write logs to the same file doesnt seem like a good idea in general
15:44 ixmun semiosis: I don’t really know
15:45 semiosis you probably *can* do it, but why?
15:45 semiosis check out logstash for log aggregation
15:45 semiosis @lucky logstash
15:45 glusterbot semiosis: http://logstash.net/
15:45 ixmun I will have several servers sharing load, visitors... and want to have a single place where to look logs at.
15:46 semiosis logstash + kibana FTW
15:46 semiosis brb
15:46 __Bryan__ you can use log stash to ship all logs to the log stash server and put into elasticsearch with kibana... I would suggest kibana3
15:47 __Bryan__ left #gluster
15:53 ixmun well I imagined that having loads of writes to a single file on glusterfs could not be a good idea
15:54 ixmun just had to ask
15:54 ixmun so is logstash the only, or best, viable way to achieve this?
15:54 ixmun sounds pretty good so far.
15:56 daMaestro joined #gluster
15:56 anands joined #gluster
15:56 n70247 joined #gluster
15:57 n70247 left #gluster
15:57 devoid joined #gluster
15:59 n70247 joined #gluster
15:59 n70247 left #gluster
16:00 JoeJulian I agree, logstash is an excellent solution.
16:01 semiosis gluster should support many processes writing to the same file, this just isn't a great example of that workload
16:02 ixmun Lets say, semiosis, it could work fine... though it is not the finest solution for that specific issue?
16:02 ixmun well, could, or will
16:02 semiosis sure
16:03 JoeJulian One easy way to find out... :)
16:04 jag3773 joined #gluster
16:08 robo joined #gluster
16:10 ixmun JoeJulian: ...in production? D:
16:10 ixmun lol...
16:10 JoeJulian :D
16:10 ixmun now lets imagine that we cannot use logstash, what would you do instead?
16:10 ixmun I will use logstash... I am just curious and want to know.
16:11 hchiramm_ joined #gluster
16:14 StarBeast joined #gluster
16:15 majeff joined #gluster
16:23 bturner joined #gluster
16:32 soukihei joined #gluster
16:32 portante joined #gluster
16:33 shylesh joined #gluster
16:34 majeff left #gluster
16:37 hchiramm_ joined #gluster
16:41 hjmangalam2 joined #gluster
16:43 bennyturns Hi #gluster.  I am tinkering with a swap file on glusterfs but I am having trouble with swapon http://fpaste.org/15033/75931913/.  Anyone have an idea why I am getting the EINVAL?
16:44 glusterbot Title: #15033 Fedora Project Pastebin (at fpaste.org)
16:52 tg2 I have 2 servers, with 2 bricks each, in a distributed setup.   I want to take 1 server offline for repairs, there is more than enough room on each server to hold the entire volume, so I just want to move files from server 2's bricks to server1's bricks, how can this be done, replace-brick requires a fresh new brick which I don't have.
16:52 hjmangalam1 joined #gluster
16:54 hagarth joined #gluster
17:00 aliguori joined #gluster
17:02 JoeJulian tg2: With 3.3 you can use remove-brick. The data will be migrated off.
17:04 JoeJulian bennyturns: http://svn0.us-east.freebsd.org/base/projects/fuse/sbin/swapon/swapon.c implies that the NSWAPDEV limit was reached when getting EINVAL on swapon.
17:04 glusterbot <http://goo.gl/EXOui> (at svn0.us-east.freebsd.org)
17:06 JoeJulian bennyturns: But Goswin points out that you REALLY don't want to back swap space with a fuse filesystem: http://permalink.gmane.org/gmane.comp.file-systems.fuse.devel/8320
17:06 glusterbot <http://goo.gl/90M4m> (at permalink.gmane.org)
17:06 thomaslee joined #gluster
17:08 bennyturns JoeJulian, yepo I found a repro for a bug I am working on that has a FUSE based FS for swap. was just trying it out.  Looking at the swapon source I think it is looking at FS type, maybe it doesn't recognize the FS?  iirc you opened the BZ https://bugzilla.redhat.com/show_bug.cgi?id=764964
17:08 glusterbot <http://goo.gl/f1nvV> (at bugzilla.redhat.com)
17:08 glusterbot Bug 764964: low, medium, ---, aavati, VERIFIED , deadlock related to transparent hugepage migration in kernels >= 2.6.32
17:09 JoeJulian yep, that was me
17:09 Mo__ joined #gluster
17:09 bennyturns JoeJulian, what were you doing on gluster to hit the hang?  I haven't hit it on my 6.1 client
17:11 hjmangalam joined #gluster
17:12 JoeJulian Just normal activity. It required THP to be migrating pages at the same time glusterfs (the client) was trying to allocate more memory. Each would lock the memory, one for allocation and one for the move, creating a deadlock.
17:13 JoeJulian fuse, and anything that tried to interface with it would lock up. I could break out of that by killing glusterfs.
17:15 bennyturns JoeJulian, kk.  I found a repro for the orig BZ using NTFS as swap.  I figured I would just edit the script to run with a swap file on gluster and it would be about the same thing.
17:16 JoeJulian Perhaps glusterfs doesn't implement the swapon call.
17:16 JoeJulian I don't know though.
17:17 JoeJulian Just guessing from the strace.
17:17 bennyturns JoeJulian, np, thks for the input!  I was more curious than anything, I'll take a different approach here
17:17 JoeJulian sshfs maybe?
17:21 tg2 @JoeJulian - is there a remove-brick specific command or it prompts in 3.3 to move data off?
17:22 JoeJulian tg2: "remove-brick...start" starts the migration. "remove-brick...status" is used to monitor the progress. "remove-brick...commit" finalizes the remove once the data's migrated off.
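For tg2's layout that sequence would look roughly like this (volume and brick names invented); the data is drained onto the remaining bricks before the removal is finalized:

    gluster volume remove-brick myvol server2:/bricks/b1 server2:/bricks/b2 start
    gluster volume remove-brick myvol server2:/bricks/b1 server2:/bricks/b2 status
    # once status reports the migration as completed:
    gluster volume remove-brick myvol server2:/bricks/b1 server2:/bricks/b2 commit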
17:30 hjmangalam joined #gluster
17:32 robo joined #gluster
17:32 __Bryan__ joined #gluster
17:32 failshell joined #gluster
17:33 failshell anyone here deploying software on a gluster volume using git? i get many errors
17:33 JoeJulian I use git all the time on gluster volumes.
17:34 JoeJulian What kind of errors are you seeing?
17:34 failshell i deploy the code with chef's git resource. and it borks because files change during the deployment
17:34 failshell was hoping to store our git repos on gluster
17:35 bulde joined #gluster
17:35 JoeJulian Perhaps http://joejulian.name/blog/broken-32bit-apps-on-glusterfs/ ?
17:35 glusterbot <http://goo.gl/4T31C> (at joejulian.name)
17:37 failshell unlikely, ruby's 64bit
17:38 JoeJulian Which version?
17:39 JoeJulian 3.3?
17:39 failshell 3.2
17:39 JoeJulian Ah, ok.
17:39 JoeJulian Probably the same error I was having with bazaar.
17:39 failshell need to upgrade?
17:39 JoeJulian ... though, strangely, I never did see that with git.
17:41 lpabon joined #gluster
17:42 JoeJulian failshell: Try setting "performance.write-behind off"
17:42 failshell what does that do?
17:42 JoeJulian turns off write-behind caching
17:43 failshell can that be set on a per volume basis?
17:43 JoeJulian Looking back in my history, that's something that either avati or hagarth suggested to prevent that issue with bazaar.
17:43 JoeJulian yes
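The per-volume toggle JoeJulian suggests would look roughly like this (volume name is an example); it can be reverted with "reset" if the performance cost turns out to be too high:

    gluster volume set myvol performance.write-behind off
    # undo later if needed
    gluster volume reset myvol performance.write-behind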
17:45 ixmun JoeJulian: don’t want to be pushy but may I ask you about the last question I made.
17:45 brunoleon joined #gluster
17:45 JoeJulian I don't have an answer. Depends on the tool and the result.
17:46 ixmun ah uh ok
17:46 ixmun thanks
17:47 balunasj joined #gluster
17:47 JoeJulian Frankly, most of my web sites are nginx passthrough to fcgis. For the few apache servers I had, I just used their local log files until I found logstash and got it up and running one afternoon.
17:48 failshell JoeJulian: reading the doc, disabling that seems to not be so good performance wise
17:48 JoeJulian Now all of my machines feed all their logs there. It's invaluable to have them all in one easy to use place.
17:49 failshell i regret deploying 3.2 because it was part of EPEL
17:49 ixmun JoeJulian: could you deliver, mix, all the previous logs when you set up logstash?
17:51 zaitcev joined #gluster
17:51 JoeJulian Heh, yeah, especially if that was the reasoning. There's policies behind version changes not getting into EPEL. It's not because it's better supported, but rather because it might break something you already had working. That's why epel5 never made it past 2.0.
17:51 JoeJulian ixmun: Yep
17:52 JoeJulian And not to be the off-topic police, but #logstash is an awesome channel.
17:52 failshell im dragging that technical debt
17:53 JoeJulian The only reason not to upgrade is because it required downtime. All nodes (and I use that word correctly here) have to be upgraded at once.
17:53 failshell yeah but ill have to do it eventually
17:53 failshell might as well be soon, since we dont have too much deployed on it
17:54 ixmun thanks JoeJulian, I now know what to do.
17:54 failshell but we have a high profile website running off it, that one is trickier
17:54 JoeJulian I just took advantage of memorial day to do some heavy downtime maintenance.
17:54 failshell i could upgrade at our HQ, but i could not mount my remote cluster to backup if i do that
17:54 failshell im in a catch 22
17:55 JoeJulian I hear ya. I was a strong proponent for providing abi translation between versions to prevent that problem, but it just wasn't feasible for the 3.2 -> 3.3 change.
17:55 failshell can you geo-replicate from a 3.2 to a 3.3?
17:55 JoeJulian I don't see why not.
17:56 failshell or i could mount the volume locally on a brick and rsync from there
17:56 failshell hmm
17:56 JoeJulian Oh, yes I do...
17:57 JoeJulian If you georep to it as if it were just a remote directory it should be fine. As a gluster volume it wouldn't work.
17:59 failshell i need to plan this soon, in between a chef and sensu upgrades
17:59 failshell and all the other stuff piling up on my plate
18:01 JoeJulian I hear that... I was just sitting here lamenting trying to prioritize this pile...
18:02 JoeJulian Especially with Summit coming up fast.
18:02 dbruhn joined #gluster
18:05 tg2 awesome, thanks Joe.
18:05 failshell this thing is growing, need to provision another 500GB in it
18:05 tg2 I didn't see a complete answer in the docs
18:05 failshell so that's 1TB of real storage
18:05 JoeJulian tg2: You're welcome.
18:05 JoeJulian failshell: 1tb?
18:06 failshell i use 2 replicas
18:06 failshell so have 2 copies of each file
18:06 failshell i run a distributed replicated cluster
18:06 JoeJulian failshell: but that's all? I thought your storage was bigger.
18:06 failshell i already have 1TB in it
18:07 * JoeJulian has 8tb in his media server for home....
18:07 failshell lol ok
18:07 JoeJulian Ironically, that's the biggest data system I get to play with.
18:07 failshell see another technical debt im dragging is that i created 128GB bricks
18:08 JoeJulian Sometimes that's a good thing... <shrug>
18:11 chirino joined #gluster
18:11 failshell im not convinced my backup strategy with gluster is good
18:11 dbruhn Crap, does anyone have an automated way to clean up some split-brain stuff. I have 1023 files showing in split brain on a distributed- replicated system
18:11 dbruhn 3.3.1
18:15 failshell JoeJulian: is it better to scale out with gluster? more VMs instead of fewer? i guess that would yield a better aggregate perf?
18:16 edoceo dbruhn: read this about .directory http://joejulian.name/blog/what-is-this-new-glusterfs-directory-in-33/
18:16 glusterbot <http://goo.gl/j981n> (at joejulian.name)
18:16 edoceo I was just having this issue but on 3.2.5
18:16 JoeJulian Depending on your use case, yes. Most often it's better to scale out.
18:17 JoeJulian dbruhn: If you know one drive is bad, wipe it and let it re-replicate?
18:18 edoceo So, so in some cases, I did this crazy thing where I used rsync to back-end replicate outside of Gluster, wiped gfids, restarted the volume and tried again.
18:18 edoceo I had to watch log files a lot to find the problem directories/files - not fully automated :(
18:19 dbruhn Joe, the brick has 3.5TB of data on it.
18:21 JoeJulian ... well you did say automated.
18:21 JoeJulian And 1023 sounds suspiciously like you're filling a fixed-length list.
18:21 cfeller joined #gluster
18:22 dbruhn Yeah, true. I was just hoping there was a way that wouldn't take forever, obviously wiping the brick and letting it clean up would automate, manually fixing is going to take a while.
18:22 JoeJulian btw... that's something else I've been pushing for for years.
18:22 dbruhn I was just wondering if there was a script that would check both bricks, see the file with the larger size and use that as the good file kind of thing.
18:22 __Bryan__ I have found playing with the self heal settings can drastically change the recovery time...
18:22 __Bryan__ mine recover faster than the backend rsync, grid wipe then self heal
18:23 awheeler_ joined #gluster
18:23 __Bryan__ err gfid wipe... stupid auto correct
18:30 dbruhn what's the best practice for destroying a brick and causing it to repopulate from the replica brick?
18:34 failshell JoeJulian: disabling the write cache fixed my git issue
18:35 JoeJulian excellent
18:36 JoeJulian dbruhn: I kill the glusterfsd for that brick. Wipe it (usually just format the lv and re-create the "brick" directory). Then start $vol force
18:36 JoeJulian ... then heal...full
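A sketch of that procedure with invented device, path and volume names; the brick's own glusterfsd is stopped, the brick filesystem is recreated, then the brick process is brought back and a full self-heal is triggered:

    # stop the glusterfsd process whose command line references this brick
    pkill -f 'glusterfsd.*bricks.b1'
    # wipe and remount the brick filesystem, recreating the brick directory
    mkfs.xfs -f /dev/vg0/brick1
    mount /dev/vg0/brick1 /bricks/b1
    # bring the brick back and heal everything onto it
    gluster volume start myvol force
    gluster volume heal myvol full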
18:38 edoceo On a recent heal situation I did the find trick, but stepped through maxdepth 1....N - and waited in between runs.  There were still self heal warnings and all but it seems to work better than just blasting `find` all the way.
18:39 edoceo That observation is not science, it's anecdotal (sp?)
18:39 dbruhn just exploring my options here, is there a reason the split brain report wouldn't show the files that are split-brain
18:39 dbruhn small sample of my output Number of entries: 1023
18:39 dbruhn at                    path on brick
18:39 dbruhn -----------------------------------
18:39 dbruhn 2013-02-05 04:38:37 /
18:39 dbruhn 2013-02-05 04:29:08 /
18:39 JoeJulian My guess was that they exceeded a 1024 size list.
18:40 JoeJulian The path is /
18:40 JoeJulian right?
18:41 JoeJulian Split brain directories.... I haven't figured that one out. I think it's because one of the files in that directory is split-brain.
18:41 dbruhn that's what it's reporting, but all 1023 results say that, and there are only two files at the root of the file system and 7 directories
18:41 edoceo Hey, gluster.org is showing a bunch of "Error creating thumbnail: Unable to save thumbnail to destination" errors on it's pages.
18:42 __Bryan__ I have had that issue when the actual directory entry had the issue... not a file in it
18:42 dbruhn the directory for the brick is /var/ENTV02EP
18:46 dbruhn Here is the full output for info split-brain
18:46 dbruhn http://pastie.org/7975912
18:46 glusterbot Title: #7975912 - Pastie (at pastie.org)
18:50 flrichar joined #gluster
18:53 daMaestro joined #gluster
18:57 Rhomber joined #gluster
18:58 dbruhn Is that error log saying the root of the gluster file system is what is in split-brain?
18:58 JoeJulian No, the root of the brick
19:02 bennyturns dbruhn, that is prolly https://bugzilla.redhat.com/show_bug.cgi?id=952421
19:02 glusterbot <http://goo.gl/il7mo> (at bugzilla.redhat.com)
19:02 glusterbot Bug 952421: is not accessible.
19:03 bennyturns actually https://bugzilla.redhat.com/show_bug.cgi?id=913208
19:03 glusterbot <http://goo.gl/pw3Vf> (at bugzilla.redhat.com)
19:03 glusterbot Bug 913208: medium, medium, ---, pkarampu, ASSIGNED , Split brain on / found during self heal testing.
19:04 bennyturns dbruhn, https://bugzilla.redhat.com/show_bug.cgi?id=913208#c12 has the work around
19:04 glusterbot <http://goo.gl/urr3O> (at bugzilla.redhat.com)
19:04 glusterbot Bug 913208: medium, medium, ---, pkarampu, ASSIGNED , Split brain on / found during self heal testing.
19:05 dbruhn thank you
19:05 bennyturns dbruhn, there aren't 1024 files that are split brain, that is just as far back as the buffer goes.  It's just 1 file but the daemon updates every 5 minutes iirc
19:06 hagarth joined #gluster
19:08 JoeJulian the "changelog extended attribute" being?
19:09 bennyturns trusted.afr.healtest-client-3=0x000000000000000000000000
19:09 JoeJulian ok
19:10 JoeJulian I thought so, but since we don't usually call it that, I didn't want to assume...
19:10 bennyturns run getfattr -d -e hex -m "trusted.afr." <root of my brick>
19:11 JoeJulian Btw... to make life easier... You can just say bug 913208 and glusterbot will handle the rest. (if you're lazy like me)
19:11 glusterbot Bug http://goo.gl/pw3Vf medium, medium, ---, pkarampu, ASSIGNED , Split brain on / found during self heal testing.
19:12 bennyturns will do!
19:12 JoeJulian I'm glad that's the workaround, 'cause that's what I've always done. And it's not nfs related. I always have it disabled.
19:16 tziOm joined #gluster
19:18 Airbear joined #gluster
19:22 dbruhn how does one remove the changelog extended attributes?, probably a stupid question
19:23 JoeJulian You don't actually remove them, you set them to zero. ie. setfattr -n trusted.afr.healtest-client-3 -v 0x000000000000000000000000
19:24 JoeJulian But you'll use your own trusted.afr keys that you got with the getfattr command.
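Put together, the check-and-reset on the brick root looks something like this (the brick path is the one dbruhn gave earlier; the trusted.afr key names follow the usual VOLNAME-client-N pattern and must be taken from your own getfattr output, resetting only the entries that are non-zero):

    # inspect the changelog attributes on the brick root
    getfattr -d -e hex -m 'trusted.afr.' /var/ENTV02EP
    # zero any non-zero entry, reusing the exact key names printed above
    setfattr -n trusted.afr.ENTV02EP-client-2 -v 0x000000000000000000000000 /var/ENTV02EP
    setfattr -n trusted.afr.ENTV02EP-client-3 -v 0x000000000000000000000000 /var/ENTV02EP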
19:24 JoeJulian Speaking of getfattr... I'm hungry...
19:24 puebele joined #gluster
19:31 hagarth joined #gluster
19:35 Airbear_ joined #gluster
19:36 dbruhn ok I just got the same 0x000000000000000000000000 as you had there, so on the server that contains the brick I am having the problem with I run that command?
19:37 vincent_vdk joined #gluster
19:40 JoeJulian I just set 'em all to 0
19:41 bennyturns dbruhn, run that on all bricks on both servers and compare.  Any of them that are non zero should get set back to 0.  iirc you have 2 servers with 2 bricks each?  you would need to run the getfattr 4 times(once with each brick) and reset the ones that are non zero
19:41 bennyturns dbruhn, copy and paste the output from getfattr -d -e hex -m "trusted.afr." <root of my brick> if you want
19:41 dbruhn This is a 3x2 setup, so I would need to run it 12 times I am assuming?
19:42 dbruhn this is from the server in question, each server has a single brick
19:42 dbruhn [root@ENTSNV02003EP ~]# getfattr -d -e hex -m "trusted.afr." /var/ENTV02EP/
19:42 dbruhn getfattr: Removing leading '/' from absolute path names
19:42 dbruhn # file: var/ENTV02EP/
19:42 dbruhn trusted.afr.ENTV02EP-client-2=0x000000000000000000000000
19:42 dbruhn trusted.afr.ENTV02EP-client-3=0x000000000000000000000000
19:42 JoeJulian I've got to quit using that command... Every time I type it, I think I do get fatter. That's got to be it...
19:42 bennyturns lolz
19:42 bennyturns dbruhn, that one looks OK, what about the others?
19:43 dbruhn sitting here working instead of riding my bicycle is making me fattr
19:43 JoeJulian I've been thinking of setting up the trainer behind my desk...
19:45 dbruhn all of them are showing the same thing now
19:45 JoeJulian So theoretically there should be no new entries of it in that split-brain log
19:45 dbruhn I bought rollers last winter and can't stay up on them. I really would like to get off of my trainer, I am lazy on it
19:45 bennyturns dbruhn, hrm kk.  what about volume status split-brain?
19:46 JoeJulian I ride the 25 miles home from work every Wed. And bike-to-bus the other two days I go to the office.
19:46 dbruhn Does the split-brain output not clear once an issues been resolved.
19:46 JoeJulian no
19:46 robo joined #gluster
19:47 JoeJulian It's a log
19:47 dbruhn We office out of my house, no commute anymore, I used to do 40 everyday. I need to move offices again, lol.
19:47 dbruhn ok, well that looks good then
19:47 JoeJulian Hehe, you could just pretend
19:48 JoeJulian Every day? You're braver than I am if your reverse dns is at all accurate.
19:49 hagarth joined #gluster
19:49 dbruhn lol why is that?
19:49 dbruhn comcast?
19:49 JoeJulian Minnesota
19:51 dbruhn haha yeah, I have three bikes, one set up for long distances with a 10x3, once setup with a 9x2 geared way to low, but super fun for fast riding, and then an old 2x9 geared 12-32 with studded tires
19:53 JoeJulian http://hight3ch.com/ktrak-from-bike-into-a-snow-toy/
19:53 glusterbot <http://goo.gl/C04of> (at hight3ch.com)
19:55 dbruhn That looks awesome? lol. I have friends riding those surely pigsty's they swear by them
19:55 chirino left #gluster
19:55 dbruhn I am just too stubborn to got heavy on my drive train, and most of our riding paths are plowed through the winter
19:55 dbruhn s/got/get
19:56 edoceo man useradd
19:57 edoceo drrr
20:00 failshell i got gitlab running off gluster. pretty slick.
20:01 rsherman joined #gluster
20:01 rsherman hi folks.  I found an install bug for glusterfs-fuse-3.4.0-0.5.beta2.el6.x86_64
20:02 rsherman the RPM has a Dependency: /usr/bin/fusermount
20:02 jag3773 joined #gluster
20:03 rsherman but the system i'm using (Amazon Linux) has the exec in /bin/fusermount
20:03 rsherman fuse-2.9.2-1.13.amzn1.x86_64
20:08 semiosis failshell: gitlab \m/
20:09 JoeJulian If I were the finger-pointing type, I would suggest it was an amazon bug. Probably should file a report at https://bugzilla.redhat.com/enter_bug.cgi?product=Fedora%20EPEL and let kkeithley take a look at it.
20:09 glusterbot <http://goo.gl/jLJWu> (at bugzilla.redhat.com)
20:17 lh joined #gluster
20:18 andreask joined #gluster
20:18 bennyturns rsherman, I think that may be fixed already?  Neils's commit in BZ#947830 looks to have removed that from the spec file?  Not sure why its not in the version you are using
20:22 rsherman looks like that patch was directed at glusterfs, but may not have been included in glusterfs-fuse
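For anyone hitting the same mismatch, it can be confirmed with the stock rpm tooling (package names as in the report):

    # what the gluster client package declares it needs
    rpm -q --requires glusterfs-fuse | grep fusermount
    # where the installed fuse package actually ships the binary
    rpm -ql fuse | grep fusermount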
20:25 edoceo on geo-replicate is my Slave side a single machine or can it be a Gluster replicated volume too? - 3.3
20:26 flrichar joined #gluster
20:28 hagarth joined #gluster
20:43 Airbear joined #gluster
20:53 JoeJulian rsherman: The one spec builds all the packages.
20:53 JoeJulian edoceo: It can be a volume.
20:58 duerF joined #gluster
20:59 rb2k joined #gluster
21:21 edoceo So, my Slave1:/storage for example is just Slave1 mounting a Gluster that is Slave1, Slave2 and Slave3 and Slave4 in distribute/replicate?
21:21 edoceo What if Slave1 is down? How would replication continue if Slave1 is dead?
21:27 ixmun joined #gluster
21:38 brunoleon___ joined #gluster
21:41 JoeJulian I believe it doesn't.
21:42 JoeJulian Or is slave1 your ... um....
21:42 JoeJulian @glossary
21:42 glusterbot JoeJulian: A "server" hosts "bricks" (ie. server1:/foo) which belong to a "volume"  which is accessed from a "client"  . The "master" geosynchronizes a "volume" to a "slave" (ie. remote1:/data/foo).
21:42 JoeJulian Or is slave1 your slave?
21:43 JoeJulian If your master targets a volume as the slave, it'll actually mount it locally as a client and rsync through the fuse client.
21:44 JoeJulian So if your remote "slave1" (assuming slave1 is a server for the remote volume) goes down, the georep will continue in the same way any fuse client does now.
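A sketch of what starting geo-replication against a slave volume might look like in 3.3 (hostnames and volume names are invented, and the exact slave URL form should be checked against the admin guide for your version):

    gluster volume geo-replication mastervol ssh://root@slave1:gluster://localhost:slavevol start
    gluster volume geo-replication mastervol ssh://root@slave1:gluster://localhost:slavevol status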
22:25 Airbear joined #gluster
23:15 rb2k joined #gluster
23:31 StarBeast joined #gluster
23:34 theron joined #gluster
23:40 jurrien joined #gluster
