IRC log for #gluster, 2013-06-14

All times shown according to UTC.

Time Nick Message
00:00 StarBeast joined #gluster
00:01 JoeJulian @later tell realdannys1 Sorry, we're mostly all at Red Hat Summit this week.
00:01 glusterbot JoeJulian: The operation succeeded.
00:02 rcoup JoeJulian: so I should do that on one of the bricks?
00:03 bulde joined #gluster
00:08 rcoup JoeJulian: setfattr -x trusted.gfid path1/ path2/ path3/, then repeat for the other attributes?
00:12 primusinterpares joined #gluster
00:12 rcoup (afaict trusted.afr.storage-client-14, trusted.afr.storage-client-15, trusted.gfid & trusted.glusterfs.dht)
00:13 rcoup or do you mean something else by zeroing?
00:17 JoeJulian Just trusted.afr.*, not gfid or dht
00:18 rcoup JoeJulian: and on one of the bricks?
00:18 rcoup (the one that thinks it's split?)
00:56 dbruhn is anyone out there running gluster under openstack using the local storage on the hypervisors as the NAS/Object storage via gluster
00:57 majeff joined #gluster
01:14 awheeler joined #gluster
01:15 primusinterpares joined #gluster
01:25 JoeJulian dbruhn: I've got one of my gluster servers that's also a compute node with one of my volumes (using a local drive as one of the bricks) hosting the vm images.
01:26 dbruhn Does it work well, or would they be in contention for resources in a production environment
01:28 Hchl joined #gluster
01:29 rcoup JoeJulian: (back) did I miss a reply about where to remove those dir xattrs from?
01:32 majeff joined #gluster
01:33 larsks joined #gluster
01:35 JoeJulian rcoup: Just check 'em all.
01:36 rcoup so nuke from both the bricks? ok
01:41 primusinterpares joined #gluster
01:42 jbrooks joined #gluster
01:44 rcoup thanks JoeJulian, as usual that's helpful :)
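A minimal sketch of the trusted.afr cleanup discussed above, run on the affected directory on each brick; the brick path is illustrative, and the client names are the ones rcoup listed, so adjust both for the volume in question:

    # dump the current xattrs first, on the brick filesystem rather than the FUSE mount
    getfattr -m . -d -e hex /data/brick/path/to/dir
    # remove only the trusted.afr.* changelog attributes; leave trusted.gfid and trusted.glusterfs.dht alone
    setfattr -x trusted.afr.storage-client-14 /data/brick/path/to/dir
    setfattr -x trusted.afr.storage-client-15 /data/brick/path/to/dir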
01:53 Hchl joined #gluster
02:10 GLHMarmot joined #gluster
02:13 primusinterpares joined #gluster
02:19 dbruhn left #gluster
02:23 majeff joined #gluster
02:32 bharata joined #gluster
02:32 MrNaviPa_ joined #gluster
02:40 majeff joined #gluster
02:45 rcoup Q2 of the day
02:50 rcoup getting "gfid different" errors, followed by "no such file or directory", then "unable to lock on even one child"
02:50 rcoup https://gist.github.com/rcoup/70b9b2f50de8d72c7caa/raw/7a529d6c570f1c3bfaef39edfcbe91d6ba7e6924/gistfile1.txt
02:50 glusterbot <http://goo.gl/Ozh5X> (at gist.github.com)
02:51 rcoup this is during a cp onto the volume followed by a chmod
02:53 rcoup not entirely sure where the failure happens - but the cp seems to succeed to 1 brick (should be 2x - dist+repl)
02:53 rcoup but then the gluster clients report No such file or directory when trying to do the chmod (& later)
02:53 rcoup any suggestions?
02:53 RangerRick joined #gluster
02:58 lpabon joined #gluster
03:03 rcoup not sure why the log has 'stale nfs file handle', everything's connected via fuse
03:04 vshankar joined #gluster
03:09 bala joined #gluster
03:12 JoeJulian rcoup: Ah, that is, indeed, split-brain. Note the trusted.gfid is different on those two directories. If this was happening to me, I would probably delete the whole directory from one and let it self-heal, unless that was too big of a job; then you would have to correct the gfid and make sure everything else is right.
03:12 rcoup JoeJulian: not listed in split-brain :/
03:13 JoeJulian rcoup: Ok, heading to my hotel and bed. Got these machines all ready for tomorrow's demo.
03:13 JoeJulian Hmm, that does seem odd.
03:13 rcoup seems like there's 2x different gfids
03:13 Hchl joined #gluster
03:13 rcoup across the 8x bricks but each replica pair matches
03:14 JoeJulian Now that you mention it, I think that's a known bug and is already fixed.
03:14 JoeJulian Just pick one and set them all to that gfid.
03:14 rcoup well, that's easy :)
03:14 JoeJulian See ya tomorrow.
03:14 rcoup THANKS
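A rough sketch of what "pick one and set them all to that gfid" could look like; this is hedged heavily since gfid repair steps vary by version, and the brick paths and the gfid value are placeholders:

    # compare what each brick records as the directory's gfid
    getfattr -n trusted.gfid -e hex /data/brickN/path/to/dir
    # on the bricks that disagree, set the chosen value (16 bytes, written as 0x plus 32 hex digits)
    setfattr -n trusted.gfid -v 0x<chosen-gfid-hex> /data/brickN/path/to/dir
    # the matching entry under the brick's .glusterfs/ directory may also need correcting afterwards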
03:17 bala joined #gluster
03:18 brosner joined #gluster
03:20 puebele1 joined #gluster
03:20 puebele1 left #gluster
03:22 lalatenduM joined #gluster
03:23 aravindavk joined #gluster
03:28 brosner joined #gluster
03:30 mohankumar__ joined #gluster
03:40 majeff1 joined #gluster
03:46 majeff joined #gluster
03:48 hagarth joined #gluster
03:59 sgowda joined #gluster
04:03 bambi23 joined #gluster
04:09 Hchl joined #gluster
04:10 MrNaviPacho joined #gluster
04:13 dblack joined #gluster
04:23 primusinterpares joined #gluster
04:34 Hchl joined #gluster
04:39 ultrabizweb joined #gluster
04:42 deepakcs joined #gluster
04:42 vpshastry joined #gluster
04:45 chirino joined #gluster
04:46 bharata-rao joined #gluster
04:49 hajoucha joined #gluster
04:52 ultrabizweb joined #gluster
04:59 vimal joined #gluster
05:02 Hchl joined #gluster
05:18 CheRi joined #gluster
05:20 CheRi joined #gluster
05:21 Hchl joined #gluster
05:25 saurabh joined #gluster
05:34 lalatenduM joined #gluster
05:37 Hchl joined #gluster
05:50 satheesh joined #gluster
05:55 bulde joined #gluster
06:02 ricky-ticky joined #gluster
06:03 raghu joined #gluster
06:04 psharma joined #gluster
06:05 partner uh, 10 GB self-healed to new brick in 12 hours..
06:08 Hchl joined #gluster
06:12 ctria joined #gluster
06:23 ngoswami joined #gluster
06:26 jtux joined #gluster
06:26 rotbeard joined #gluster
06:26 odata joined #gluster
06:27 odata Hi
06:27 glusterbot odata: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
06:27 majeff joined #gluster
06:35 majeff left #gluster
06:37 odata Hello, i have Glusterfs 3.3 running on three nodes. The nodes have their volumes mounted via iSCSI. Is it safe to resize the underlying LUNs, and if so how do i resize my Gluster Volume without losing the data on it?
06:37 guigui3 joined #gluster
06:43 rastar joined #gluster
06:45 andreask joined #gluster
06:50 ekuric joined #gluster
06:52 mooperd joined #gluster
06:59 ramkrsna joined #gluster
07:00 Hchl joined #gluster
07:03 hajoucha joined #gluster
07:09 yinyin joined #gluster
07:17 ekuric joined #gluster
07:18 glusterbot New news from newglusterbugs: [Bug 973891] cp does not work from local fs to mounted gluster volume; <http://goo.gl/yKVT8> || [Bug 962619] glusterd crashes on volume-stop <http://goo.gl/XXzSY>
07:22 dobber_ joined #gluster
07:27 aravindavk joined #gluster
07:36 bala joined #gluster
07:49 FilipeMaia joined #gluster
07:50 rb2k joined #gluster
07:56 rastar joined #gluster
08:03 mooperd joined #gluster
08:12 MrNaviPacho joined #gluster
08:14 StarBeast joined #gluster
08:14 nightwalk joined #gluster
08:15 Hchl joined #gluster
08:21 yinyin joined #gluster
08:26 hchiramm__ joined #gluster
08:32 FilipeMaia joined #gluster
08:35 Hchl joined #gluster
08:39 ujjain joined #gluster
08:45 jbrooks joined #gluster
08:46 Norky joined #gluster
08:49 satheesh joined #gluster
09:08 mooperd joined #gluster
09:11 bala joined #gluster
09:11 odata Hello, i have Glusterfs 3.3 running on three nodes. The nodes have their volumes mounted via iSCSI. Is it safe to resize the underlying LUNs, and if so how do i resize my Gluster Volume without losing the data on it?
09:14 Norky assuming you're using LVM for your bricks, yes, it's fairly simple
09:14 odata ok thanks
09:14 odata do i have to take the bricks offline and resync the volume or do i just resize one LUN after the other?
09:15 Norky I've done it (with direct-attached storage rather than iSCSI), think I did it without any downtime
09:15 Norky no, growing bricks should be recognised immediately
09:15 Norky clients can cache the old size and not be immediately awaer of the new size
09:15 foxban joined #gluster
09:15 Norky aware*
09:15 foxban hi guys
09:16 odata that sounds great so maybe i could use the autogrow feature of my underlying storage system...
09:16 Norky odata, (how) are you using LVM?
09:16 odata ah uh im not using LVM sorry for that
09:16 Norky ahh
09:16 foxban I couldn't get failover of the native client working in a two node environment, can anyone help? thx
09:16 odata i have a few nas systems
09:16 Norky it will be more diofficulat then
09:17 Norky difficult*
09:17 odata hmm why?
09:17 Norky LVM makes storage allocation more flexible
09:17 foxban I configured the service according to the quick start
09:17 odata ok
09:17 odata i see
09:17 Norky not having LVM makes it less flexible
09:18 foxban when one node is halted, I got "Transport endpoint is not connected" error on the client
09:18 odata i have  iscsi volumes mounted on every brick
09:18 Norky foxban, what kind of gluster volume do you have?
09:18 odata the nas system can grow the LUNs on the fly
09:18 Norky odata, exactly how is the storage set up?
09:18 odata i should configure LVM on the bricks then?
09:18 foxban Volume Name: pubbak_test
09:18 foxban Type: Replicate
09:18 foxban Volume ID: d0965f6d-b9b2-4763-a50c-62163d41be16
09:18 foxban Status: Started
09:18 foxban Number of Bricks: 1 x 2 = 2
09:19 foxban Transport-type: tcp
09:19 foxban Bricks:
09:19 foxban Brick1: node1:/export/sdl1/brick
09:19 foxban Brick2: node2:/export/sdl1/brick
09:19 foxban gluster>
09:19 foxban 2 node replica set
09:19 Norky well, it's a bit late really, setting up LVM now will be even more work :)
09:19 odata it's a three node cluster with an iscsi volume mounted on every brick, filesystem on the volumes is xfs
09:20 Norky okay, foxban, that should work. In future put multi-line pastes in a paste bin please
09:20 odata i will do that on future installations, but for now i have to resize the actual  volume
09:20 foxban ok
09:20 Norky odata, are you partitioning your iSCSI disks, or is XFS going directly on /dev/sdX ?
09:21 odata one partition per volume
09:21 foxban I mounted node1 on the client, and when I reboot node2, I got the transport endpoint is not connected error...
09:21 Norky odata, so you have /dev/sdX1 for each brick?
09:22 odata yep
09:22 realdannys1 joined #gluster
09:24 Norky okay, assuming you have backups (if this goes wrong you can lose data), make the iSCSI disks bigger. Then on the gluster server use a partition tool such as parted or fdisk  to *remove* the existing partition (make a careful note of the starting unit) and create a new one with *exactly* the same start point, and the end point at the end of the disk
09:24 Norky the tool will likely tell you that it cannot update the kernel's idea of the partition table (because it is in use), so you'll want to reboot
09:24 Norky after reboot, run xfs_growfs on the FS
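Roughly what Norky's procedure looks like with parted, assuming the brick is partition 1 of /dev/sdX and mounted at /brick1 (all illustrative); <start> stands for the original start sector, which must be recorded before removing anything:

    parted /dev/sdX unit s print                    # note the partition's exact start sector
    parted /dev/sdX rm 1                            # remove the old partition entry (data stays on disk)
    parted /dev/sdX mkpart primary <start>s 100%    # recreate it with the SAME start, ending at the new disk size
    # reboot so the kernel rereads the in-use partition table, then grow XFS online:
    xfs_growfs /brick1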
09:25 foxban Norky, I got 1 peers when excuting `peer status`, is it right?
09:25 Norky foxban, yes, each node will not list itself as a peer
09:26 foxban Norky, I tried killing gluster processes on node2 again, and still getting endpoint error, fail-over seems not working
09:26 odata that's how i often grow xfs filesystems on standard volumes. This should work, i just wonder what my Glusterfs clients do if a brick comes back with a greater volume than before?
09:27 Norky odata, they should just work
09:28 hybrid5121 joined #gluster
09:28 Norky as I said, if this happens online, then the client might take a little while to notice the increased size, if it had that cached
09:30 odata ok i'll give it a try, I have backups, volume size is approx 5TB, this seems a good job for the weekend.
09:30 Norky "transport end point is not connected" is an error from the FUSE layer
09:30 odata Thanks a lot for the help Norky
09:30 Norky odata, you can do it one server or brick at a time
09:30 Norky perhaps test with a small test volume first
09:30 odata yeah so the Volume should always stay online and the client won't have any problems
09:31 odata because it's a mirrored volume
09:31 Norky most of all, make very sure you get the partition starting point correct when making a 'new' one
09:31 odata yeah thats a serious point
09:32 Norky make a couple of bricks of 500MB (if you can allocate that from your NAS system), make a volume from them and try expanding to 1GB each
09:33 odata yes i will test it before
09:33 bivak hello, I have a question. I've setup a Distributed-Replicate volume with 2 GlusterFS nodes on Amazon EC2. I try to simulate a failure of 1 brick on 1 GlusterFS node. Let's say the whole volume is gone, and I have it replaced by a brand new volume which is empty (and XFS formatted). Shouldn't GlusterFS be able to heal the volume and sync all contents back from the brick that is still available on the other node which doesn't have a disk failure?
09:33 odata i was just kidding, i don't go for the production data at the first try
09:34 jones_d joined #gluster
09:34 Norky :)
09:34 odata i just wondered if it is possible at all or if i have to ad bricks to the setup ...
09:34 odata *add*
09:35 Norky odata, no, you can expand existing bricks
09:36 Norky it is also possible, if you have sufficient space on your NAS, to create an entirely new brick, then do a replace-brick operation, swapping the larger, new brick in for the older smaller one
09:37 CheRi joined #gluster
09:37 Norky it'll take time to self-heal all the data over
09:37 Norky but you should not need to reboot
09:37 Norky the two different methods have advantages and disadvantages
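For the brick-swap route Norky mentions, a sketch of the 3.3-era replace-brick flow; the volume name and brick paths are illustrative:

    # migrate data from the old brick to the new, larger one
    gluster volume replace-brick myvol server1:/bricks/old server1:/bricks/new start
    # check progress, then finalise once the migration has completed
    gluster volume replace-brick myvol server1:/bricks/old server1:/bricks/new status
    gluster volume replace-brick myvol server1:/bricks/old server1:/bricks/new commit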
09:37 Norky foxban, sorry, I don't think I know enough to help more in your case
09:38 foxban I will consult the maillist, thanks anyway,
09:39 odata Norky: the second method (swapping bricks) sounds great... didn't think about that!
09:40 Norky odata, it will take longer... depending of course on the size and number of files you have on each brick
09:41 Norky and that way, you can use LVM for the new bricks, which will make this kind of thing easier in the future....
09:42 odata that's the point. This is the reason why i think about that method
09:42 Norky just do a pvcreate on the 'raw' disk, /dev/sdX
09:42 Norky what distribution/version are you running on?
09:43 odata yeah the reason i didn't use LVM is because i was "testing" Glusterfs and whooosh ... all is productive :(
09:43 odata cent os 6
09:44 Norky modern enough that it'll handle resized block devices fairly well :)
09:46 odata yes this should not be a problem, as i said i did that on standard volumes in my virtual environment very often :)
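A sketch of the LVM layout Norky is suggesting for new bricks, so future grows are just a pvresize and lvextend; the device name, volume names and mount point are illustrative:

    pvcreate /dev/sdX                              # use the whole iSCSI disk, no partition table needed
    vgcreate gluster_vg /dev/sdX
    lvcreate -n brick1 -l 100%FREE gluster_vg
    mkfs.xfs -i size=512 /dev/gluster_vg/brick1
    mount /dev/gluster_vg/brick1 /export/brick1
    # later, after the LUN grows:
    #   pvresize /dev/sdX && lvextend -l +100%FREE /dev/gluster_vg/brick1 && xfs_growfs /export/brick1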
09:48 rb2k_ joined #gluster
09:50 rastar joined #gluster
09:51 odata Norky: thanks alot for your help gtg now
09:57 Hchl joined #gluster
10:05 psharma joined #gluster
10:10 odata left #gluster
10:14 satheesh joined #gluster
10:25 mgebbe joined #gluster
10:37 FilipeMaia I think I'm still seeing this issue https://bugzilla.redhat.com/show_bug.cgi?id=884280 even with 3.3.2 when doing ls on a distributed ext4 volume
10:37 glusterbot Bug 884280: unspecified, unspecified, ---, vbellur, NEW , distributed volume - rebalance doesn't finish - getdents stuck in loop
10:46 satheesh joined #gluster
10:50 edward1 joined #gluster
10:52 rb2k_ joined #gluster
10:56 raj_ joined #gluster
11:03 aravindavk joined #gluster
11:18 rastar joined #gluster
11:23 psharma joined #gluster
11:32 aravindavk joined #gluster
11:36 realdannys1 joined #gluster
11:38 yinyin_ joined #gluster
11:39 tziOm joined #gluster
11:43 rastar joined #gluster
11:51 semiosis :O
12:01 charlescooke_ joined #gluster
12:02 glusterbot joined #gluster
12:08 realdannys1 Semiosis: you there?
12:09 realdannys1 Jbrooks was helping on the Serverfault page - said he tried it with two CentOS instances like that and got it working - I'm waiting to see the exact AMI he booted with so I can try the same one. He had to disable iptables but it worked, however I've tried that and no luck. Which AMI are you using successfully?
12:15 semiosis I use Ubuntu, their official amis
12:27 jbrooks joined #gluster
12:28 jbrooks joined #gluster
12:42 aravindavk joined #gluster
12:53 realdannys1 I'll try a Ubuntu ami for the server, can't change my client one as its all well setup and going but we'll see if it'll connect to Ubuntu as a new test
12:59 aliguori joined #gluster
13:00 RangerRick2 joined #gluster
13:02 aravindavk joined #gluster
13:03 yinyin_ joined #gluster
13:03 realdannys1 Semiosis: Ubuntu 12 or 13?
13:04 semiosis Precise
13:04 realdannys1 jbrooks: check the updates on our server fault if you get a chance, I want to copy your setup that worked exactly AMI's, EBS settings, format settings, security settings etc
13:04 realdannys1 semiosos: want to get it right! :)
13:05 realdannys1 semiosis: *
13:05 bala joined #gluster
13:08 arj_ joined #gluster
13:09 arj_ hi, the images on the community seem to be a bit broken
13:09 arj_ s/community/community wiki/
13:09 glusterbot What arj_ meant to say was: hi, the images on the community wiki seem to be a bit broken
13:09 arj_ http://gluster.org/community/documentation/index.php/GlusterFS_Concepts
13:09 glusterbot <http://goo.gl/LvkRA> (at gluster.org)
13:10 arj_ indeed! ^5 glusterbot
13:27 chirino joined #gluster
13:31 ctria joined #gluster
13:31 RWOverdijk joined #gluster
13:41 brosner joined #gluster
13:45 bugs_ joined #gluster
13:56 arj_ and now I broke it :(
14:08 joelwallis joined #gluster
14:10 jruggiero joined #gluster
14:12 brosner joined #gluster
14:15 bsaggy joined #gluster
14:15 hagarth joined #gluster
14:16 bsaggy With glusterfs 3.1, the self heal on replicate guide says to run command: # find <gluster-mount> -print0 | xargs --null stat >/dev/null
14:16 bsaggy Can someone explain to me exactly what that does?
14:18 stickyboy bsaggy: The "find" command lists all your files, ie 'find /tmp' will print all the files in /tmp/
14:19 stickyboy The results of the find command are piped to xargs, which triggers a "stat" call on the files.
14:19 stickyboy The "stat" forces gluster to go find the location of the file
14:20 stickyboy Basically, to resolve its location, and fix it if it's not where it should be, or if it's out of date.
14:21 bsaggy Ok, so basically if files are missing on brick2, this would "refresh" the file on brick1 so that gluster copies it to brick2 in a replicated environment?
14:21 stickyboy Well, you run the 'find' command on your gluster FUSE mount.
14:21 bsaggy Which is the client, right?
14:21 stickyboy So it should trigger a heal if one is needed.
14:22 stickyboy bsaggy: Let's just say "FUSE mount".  Forget client/server :P
14:23 bsaggy What exactly is FUSE mount?
14:24 mohankumar__ joined #gluster
14:24 bsaggy the device which is has a mount point to the cluster?
14:24 bsaggy has, not is.
14:26 stickyboy bsaggy: FUSE is the type of mount point.  ie,   `mount -t glusterfs`
14:26 stickyboy Did you use that?
14:27 stickyboy You can call this "client", sure.
14:27 bsaggy Gotcha, Yea that was used.  Obviously you can tell I didn't set this up initially :P
14:28 vpshastry joined #gluster
14:29 bsaggy I had some issues with the cluster yesterday, and it seems a self heal is in order.  I'll give this a try, thanks for explaining!
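The 3.1-era heal trigger bsaggy quoted, spelled out against an illustrative FUSE mount point:

    # every stat() makes the client look the file up on all replicas,
    # which kicks off self-heal wherever the copies disagree; the output itself is discarded
    find /mnt/glustervol -print0 | xargs --null stat > /dev/null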
14:37 bsaggy w
14:40 realdannys1 joined #gluster
14:48 hagarth joined #gluster
14:52 chirino joined #gluster
14:55 vpshastry left #gluster
14:55 social__ I have a stupid question: I have a lot of 1kb files which I rewrite randomly, and these two different gluster setups: 4 nodes with distributed replicate 2 and 4 nodes with striped 2 replicated 2. How come the striped volume seems to be twice as fast? Striping shouldn't matter with files this small, or does it somehow?
14:58 pkoro joined #gluster
14:58 joelwallis joined #gluster
14:59 yinyin_ joined #gluster
15:24 jthorne joined #gluster
15:27 zaitcev joined #gluster
15:41 guigui3 left #gluster
15:42 FilipeMaia joined #gluster
15:43 puebele1 joined #gluster
15:43 puebele1 left #gluster
15:45 ctria joined #gluster
15:47 tjikkun joined #gluster
15:47 tjikkun joined #gluster
15:48 sticky|away joined #gluster
15:48 stickyboy joined #gluster
15:49 JoeJulian arj_: Thanks. I think I still have those images somewhere. I don't know why they're gone but I'll re-upload them.
15:49 JoeJulian bsaggy: btw... 3.3+ doesn't need that find command.
15:51 JoeJulian social__: Interesting. Striped without replication will be faster than any replicated volume, but it shouldn't really be all that fast. If they're all 1k files, they're all going to be only on the first brick. See ,,(stripe))
15:51 bsaggy JoeJulian: Yea I think I read 3.3 is auto self healing, right?
15:52 JoeJulian @stripe
15:52 JoeJulian bsaggy: yes
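On 3.3+ the self-heal daemon makes the find/stat crawl unnecessary, as JoeJulian says; the equivalent explicit commands look roughly like this (volume name illustrative):

    gluster volume heal myvol          # heal entries known to need it
    gluster volume heal myvol full     # crawl the whole volume, closest to the old find/stat trick
    gluster volume heal myvol info     # list entries still pending heal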
15:52 JoeJulian @factoids search stripe
15:52 waldner joined #gluster
15:52 waldner joined #gluster
15:52 DEac- joined #gluster
15:53 bsaggy Is it a complex task to upgrade? Just off the top of your head...
15:53 JoeJulian oh... glusterbot's missing... gah!
15:53 JoeJulian It's not that complex, but it does involve downtime.
15:54 bsaggy Alright, I figured it would.
15:56 gmcwhistler joined #gluster
15:57 glusterbot joined #gluster
15:57 RobertLaptop joined #gluster
15:57 jim` joined #gluster
15:59 JoeJulian ~stripe | social__
15:59 glusterbot social__: Please see http://goo.gl/5ohqd about stripe volumes.
16:04 tziOm joined #gluster
16:17 * JoeJulian needs to file a bug
16:17 glusterbot http://goo.gl/UUuCq
16:26 ramkrsna joined #gluster
16:29 vpshastry joined #gluster
16:33 vpshastry left #gluster
16:42 rotbeard joined #gluster
16:54 lbalbalba joined #gluster
16:55 glusterbot New news from newglusterbugs: [Bug 974624] remote-host switch allows an untrusted host to manage the trusted pool <http://goo.gl/jEBjq>
16:58 lbalbalba worksasdesigned lol
17:01 lbalbalba 'gluster --remote-host=' is supposed to be permitted to run from a node that's not in the trusted pool
17:04 lbalbalba preventing 'peer probe' from being allowed with '--remote-host=' may solve this specific bug, but im sure there's other commands you may not want to be allowed to run from an (untrusted) node that's not in the trusted pool
17:13 dewey joined #gluster
17:15 Hchl joined #gluster
17:15 aravindavk joined #gluster
17:20 realdannys1 @rpm
17:22 glusterbot realdannys1: The official community glusterfs packges for RHEL/CentOS/SL (and Fedora 17 and earlier) are available here http://goo.gl/s077x
17:26 MrNaviPa_ joined #gluster
17:29 bms_ joined #gluster
17:36 bms_ I have a cluster with 4 peers. Have 2 volumes x and xr. xr has 2 bricks and x has 1. Trying to add a second brick to x. add-brick says host not connected. All peers can ping each other. I see packets on port 24007. Have restarted all hosts. Anything else I can do?
17:39 realdannys1 FINALLY! ITS MOUNTED!!!
17:40 arj_ left #gluster
17:41 glusterbot joined #gluster
17:42 realdannys1 Thanks to semiosis: and jbrooks: and kkeithley: for help this week. What a nightmare. Jason used the official CentOS 6.4 AMI, for some reason it just works with that; that's two other images I had issues with previously. Someone else had a similar problem when using different versions of Gluster, so hopefully this answer will go live on Serverfault and anyone in the future having issues on EC2 will come across it and try different images first.
17:43 realdannys1 One question before I start playing around - I need FFmpeg on the server with gluster on - there's no reason ffmpeg or its dependencies would clash with it when installed, is there?
17:48 cicero realdannys1: you should just do yum deplist
17:48 cicero or some way to list the dependencies of however you installed gluster and however you plan to install ffmpeg
17:48 realdannys1 I plan to install FFmpeg with this RPM to save time
17:49 realdannys1 but in theory they should sit ok on the same instance together?
17:49 cicero rpm -qpR, the internet says
17:49 cicero will probably be fine
17:50 realdannys1 Cool, it's just I've had FFmpeg running on the two installations before, one I compiled from scratch, and couldn't get gluster working - I don't think it was due to ffmpeg but worth checking
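A quick way to act on cicero's suggestion before installing; the rpm filename is a placeholder for whichever FFmpeg package actually gets used:

    rpm -qpR ffmpeg-*.rpm                      # list what the not-yet-installed rpm requires
    yum deplist ffmpeg                         # resolve those requirements against the enabled repos
    rpm -qa | grep -i -e gluster -e ffmpeg     # confirm what is already installed on the box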
17:53 glusterbot joined #gluster
18:05 nordac joined #gluster
18:10 Hchl joined #gluster
18:19 aravindavk joined #gluster
18:20 brosner joined #gluster
18:34 pkoro joined #gluster
18:50 MrNaviPacho joined #gluster
19:16 Eco_ joined #gluster
19:17 FilipeMaia joined #gluster
19:28 atrius joined #gluster
19:38 atrius joined #gluster
19:47 andreask joined #gluster
19:48 andreask joined #gluster
19:53 atrius joined #gluster
19:54 larsks When running GlusterFS on a device with lots of disks, does one generally use RAID for fault tolerance on the local disks in addition to Gluster replication?
19:55 larsks That is, assuming I am creating Gluster volumes with 2 replicas, do I need (size * 2) physical disk space or (size * 4) (assuming RAID1)?
19:57 cicero i mean that depends on you
19:57 cicero and what kind of failure scenarios you want to deal with
19:58 cicero if it's 1 drive per brick (no RAID) then if it dies, you have to replace it and do a repair with gluster
19:58 cicero if it's a RAID1 config and you have 2 drives per brick, and one of those dies, then it's a RAID recovery
19:59 Eco_ joined #gluster
20:01 larsks cicero: Understood.  I'm partly looking for whether there are recommended ways of setting things up.
20:02 larsks One could in theory expose the drives as JBOD to the host, put a filesystem on each one, and have lots of bricks on the local host...although as I understand it that would generally not be a good idea.
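A quick worked version of larsks' arithmetic, assuming replica 2 and N TB of usable space:

    JBOD bricks (gluster replication is the only redundancy):  raw disk needed = N * 2
    RAID1 under each brick, plus replica 2 on top:             raw disk needed = N * 2 * 2 = N * 4

so "size * 4" only applies if RAID1 is layered underneath the replication; replica 2 on bare disks needs "size * 2".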
20:15 jtriley joined #gluster
20:15 Hchl joined #gluster
20:16 chirino joined #gluster
20:24 FilipeMaia joined #gluster
20:26 nordac Has anyone seen bad file descriptor problems using gluster as an NFS server and the same store being geo-replicated? Writing 1G files just seems to stop about halfway.
20:27 nordac The minute I turn off geo-rep everything is fine
20:31 a2 we fixed that issue
20:31 a2 let me pull out the commit id
20:35 tru_tru joined #gluster
20:37 nordac awesome... this has been killing me
20:38 nordac I was just about to try out 3.4beta
20:48 FilipeMaia joined #gluster
21:01 jbrooks_ joined #gluster
21:12 FilipeMaia joined #gluster
21:22 hagarth joined #gluster
21:45 FilipeMaia joined #gluster
21:48 realdannys1 joined #gluster
21:51 puebele1 joined #gluster
21:51 puebele1 left #gluster
22:06 joelwallis joined #gluster
22:12 duerF joined #gluster
22:16 y4m4 joined #gluster
22:25 ultrabizweb joined #gluster
22:40 duerF joined #gluster
22:55 Hchl joined #gluster
23:24 Hchl joined #gluster
23:30 y4m4 joined #gluster
23:33 rb2k joined #gluster
23:49 Hchl joined #gluster
23:54 duerF joined #gluster
23:55 bulde joined #gluster
23:56 glusterbot New news from newglusterbugs: [Bug 973891] cp does not work from local fs to mounted gluster volume; <http://goo.gl/yKVT8> || [Bug 974624] remote-host switch allows an untrusted host to manage the trusted pool <http://goo.gl/jEBjq> || [Bug 962619] glusterd crashes on volume-stop <http://goo.gl/XXzSY>
