IRC log for #gluster, 2013-03-19

All times shown according to UTC.

Time Nick Message
00:28 yinyin joined #gluster
00:49 _pol joined #gluster
00:57 robo joined #gluster
01:09 sahina joined #gluster
01:25 rabbitt joined #gluster
01:29 bala joined #gluster
01:42 kevein joined #gluster
01:43 zhashuyu joined #gluster
02:08 glusterbot New news from resolvedglusterbugs: [Bug 920890] Improve sort algorithm in dht_layout_sort_volname. <http://goo.gl/IHu2r>
02:14 jules_ joined #gluster
02:16 jdarcy joined #gluster
02:30 bala joined #gluster
02:47 edong23 joined #gluster
02:51 cyberbootje joined #gluster
02:58 nhm_ joined #gluster
03:01 vshankar joined #gluster
03:03 hagarth joined #gluster
03:08 bstansell_ joined #gluster
03:16 duffrecords joined #gluster
03:29 kevein joined #gluster
03:45 vpshastry joined #gluster
03:47 anmol joined #gluster
03:52 bulde joined #gluster
04:07 sripathi joined #gluster
04:13 saurabh joined #gluster
04:19 sgowda joined #gluster
04:20 hagarth joined #gluster
04:45 lalatenduM joined #gluster
04:46 shylesh joined #gluster
05:02 lalatenduM joined #gluster
05:05 sgowda joined #gluster
05:10 disarone joined #gluster
05:23 aravindavk joined #gluster
05:28 aravindavk joined #gluster
05:28 sahina joined #gluster
05:29 disarone joined #gluster
05:29 hateya joined #gluster
05:31 raghu joined #gluster
05:37 Humble joined #gluster
05:38 Humble joined #gluster
05:43 sag47 joined #gluster
05:43 65MAAPL8K joined #gluster
05:56 vshankar joined #gluster
06:01 rastar joined #gluster
06:02 bulde joined #gluster
06:03 hagarth joined #gluster
06:08 stickyboy joined #gluster
06:11 niv joined #gluster
06:14 _br_ joined #gluster
06:16 stickyboy I've set up a new GlusterFS deployment with 3 10TB bricks.  Is there any rule of thumb to brick sizes?  My machines have ~30TB raw storage.
06:18 satheesh joined #gluster
06:22 wenzi joined #gluster
06:23 vimal joined #gluster
06:25 mohankumar joined #gluster
06:26 vpshastry joined #gluster
06:27 niv joined #gluster
06:30 _br_ joined #gluster
06:32 displaynone joined #gluster
06:40 rastar1 joined #gluster
06:41 vpshastry joined #gluster
06:42 sgowda joined #gluster
06:42 raghu joined #gluster
06:42 deepakcs joined #gluster
06:45 sripathi joined #gluster
06:46 sripathi joined #gluster
06:48 saurabh shanks, ping
06:55 ngoswami joined #gluster
06:56 bulde joined #gluster
07:09 glusterbot New news from resolvedglusterbugs: [Bug 826512] [FEAT] geo-replication checkpoint support <http://goo.gl/O6N3f> || [Bug 830497] [FEAT] geo-replication failover/failback <http://goo.gl/XkT0F>
07:15 jtux joined #gluster
07:35 guigui1 joined #gluster
07:37 ekuric joined #gluster
07:39 glusterbot New news from resolvedglusterbugs: [Bug 855306] geo-replication with large number of small static files will have E2BIG errors in logs. <http://goo.gl/WsEnF>
07:41 kevein joined #gluster
07:41 Nevan joined #gluster
07:46 sripathi joined #gluster
07:49 vpshastry joined #gluster
07:53 jtux joined #gluster
07:54 sripathi joined #gluster
07:55 ctria joined #gluster
08:09 glusterbot New news from resolvedglusterbugs: [Bug 879536] Add use-rsync-xattrs to geo-replication <http://goo.gl/kwNbB>
08:12 alex88 hi guys, is there a way to change the user that owns the files in the mounted folder?
08:12 alex88 since somewhere I need www-data, somewhere else I need another user and so on
08:15 vimal joined #gluster
08:16 samppah alex88: different user for same data?
08:16 alex88 samppah: yup.. on different clients ofc
08:17 Alpinist joined #gluster
08:21 eiki joined #gluster
08:23 rotbeard joined #gluster
08:26 alex88 maybe fuse has such options
08:26 vimal joined #gluster
08:27 aravindavk joined #gluster
08:27 alex88 mmhh, seems I can't use fuse options, can I?
08:28 tjikkun_work joined #gluster
08:30 vimal joined #gluster
08:32 andreask joined #gluster
08:33 samppah alex88: i think there used to be glusterfs translator that allowed uid mapping but i'm not sure if it's still available
08:33 samppah alex88: is it possible just to use same uid on different clients?
08:33 alex88 samppah: actually I want to use www-data on frontend and another user on processing machines
08:34 alex88 but I can still use www-data for that job too
08:34 alex88 I've tried with nfs, mount -t nfs -o mountproto=tcp,uid=33,gid=33 storage.site.com:/site-development /mnt/site-development/ but I get mount.nfs: No such device
08:34 alex88 since nfs gives the options to set uid and gid
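For what it's worth, uid= and gid= are mount options for filesystems like vfat or ntfs, not for NFS, so mount.nfs won't honor them; "No such device" typically points at a missing nfs kernel module or an NFS-version mismatch instead. Gluster's built-in NFS server speaks NFS v3 over TCP only. A minimal sketch of a mount that usually works, reusing the hostname and paths above:

    # NFS has no uid=/gid= options; ownership comes from the files
    # themselves (chown on the mount, or matching uids across clients).
    # Gluster's NFS server requires NFS v3 over TCP.
    mount -t nfs -o vers=3,proto=tcp,mountproto=tcp \
        storage.site.com:/site-development /mnt/site-development/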
08:38 ProT-0-TypE joined #gluster
08:45 eryc joined #gluster
08:45 eryc joined #gluster
08:48 ThatGraemeGuy joined #gluster
08:49 andreask joined #gluster
08:50 sripathi joined #gluster
08:51 ThatGraemeGuy in the past i've always created glusterfs volumes on top of a dedicated disk/partition. is there any technical reason i should create a volume on top of a subdir of an already-used filesystem, e.g. the root filesystem? i have a use case which needs a miniscule amount of space, it seems silly to create a dedicated virtual disk/partition when i can just make a subdir in the root filesystem
08:51 ThatGraemeGuy um, "is there any technical reason i SHOULDN'T create a volume on top of a subdir of an already-used filesystem"
08:51 ThatGraemeGuy sorry
08:55 samppah ThatGraemeGuy: there is currently a bug in ext4 that makes glusterfs unusable on ext4 partitions.. of course that's not a problem if your root partition isn't ext4
08:55 stickyboy ThatGraemeGuy: I mount my raw storage bricks as /mnt/gfs/server0/sda1/<volname>
08:55 samppah also keep in mind that if your gluster volume grows out of space it will also affect your root partition
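Mechanically, the subdir approach is trivial: a brick is just a directory. A minimal sketch with hypothetical server and volume names (newer releases may refuse a brick on the root filesystem unless you append force):

    # tiny replicated volume whose bricks live on the root filesystem
    mkdir -p /srv/gluster/tinyvol        # run on both servers
    gluster volume create tinyvol replica 2 \
        server1:/srv/gluster/tinyvol server2:/srv/gluster/tinyvol
    gluster volume start tinyvol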
08:56 morse joined #gluster
08:56 ThatGraemeGuy samppah, that's odd, i have lots of volumes on top of ext4, what's the exact bug?
08:56 ThatGraemeGuy samppah, this particular use case needs less than 100MB and it isn't going to grow
08:57 vimal http://joejulian.name/blog/glusterfs-bit-by-ext4-structure-change/
08:57 glusterbot <http://goo.gl/PEBQU> (at joejulian.name)
08:58 Alpinist joined #gluster
08:59 ThatGraemeGuy thanks
09:00 mooperd joined #gluster
09:01 hagarth @channelstats
09:01 glusterbot hagarth: On #gluster there have been 101798 messages, containing 4440623 characters, 746060 words, 3037 smileys, and 381 frowns; 688 of those messages were ACTIONs. There have been 36254 joins, 1176 parts, 35095 quits, 14 kicks, 109 mode changes, and 5 topic changes. There are currently 210 users and the channel has peaked at 213 users.
09:01 ThatGraemeGuy ah, most of my volumes are still on ubuntu 10.04 boxes, which predate that particular ext4 change
09:02 ThatGraemeGuy still good to know, thanks
09:05 stickyboy ThatGraemeGuy: Yah, that ext4 thing is ugly.  Took me a few days in my new GlusterFS deployment this week to find out that was my issue. :P
09:09 ThatGraemeGuy it seems to only have broken after kernel 3.2.9. ubuntu's latest LTS is still on 3.2.0, so I'll likely be OK until the new LTS next year
09:09 stickyboy That bug made it into long-term stable 2.6.32 as well, so RHEL / CentOS 6.x :)
09:09 stickyboy Yay
09:09 ThatGraemeGuy in the meantime i've informed the rest of my team that future volumes are to be built on top of xfs instead
09:09 stickyboy We switched to XFS.
09:09 stickyboy Yah
09:10 ThatGraemeGuy we're not super hardcore users of glusterfs, but still nice to come across this now rather than later :)
09:12 stickyboy ThatGraemeGuy: Totally
09:20 raj_ joined #gluster
09:23 morse joined #gluster
09:26 ngoswami joined #gluster
09:29 stickyboy What kind of CPUs are people using on their new-ish GlusterFS deployments?
09:30 stickyboy Nehalem?  Westmere?  Sandy Bridge?
09:31 vpshastry joined #gluster
09:39 sripathi joined #gluster
09:53 Norky Westmere in one case, stickyboy
09:59 tryggvil joined #gluster
10:00 sripathi1 joined #gluster
10:03 rastar joined #gluster
10:11 go2k joined #gluster
10:16 Uguu joined #gluster
10:19 hchiramm_ joined #gluster
10:19 stickyboy Norky: Cool thanks.
10:19 stickyboy I was just noticing that I probably went overkill on my recent storage boxes.  Xeon 2643.  Quad-core Sandy Bridge.
10:20 inodb_ joined #gluster
10:20 dmojoryder joined #gluster
10:20 klaxa_ joined #gluster
10:20 kbsingh_ joined #gluster
10:20 klaxa joined #gluster
10:20 hchiramm_ joined #gluster
10:20 mtanner_w joined #gluster
10:20 samppah joined #gluster
10:20 stickyboy Now I'm also wondering if skipping RAID controllers is a good idea.  ie, RAID5 underneath versus JBOD?
10:20 sripathi joined #gluster
10:20 klaxa joined #gluster
10:20 ThatGraemeGuy joined #gluster
10:20 brunoleon joined #gluster
10:20 edong23_ joined #gluster
10:20 klaxa_ joined #gluster
10:20 foster joined #gluster
10:21 klaxa joined #gluster
10:21 Kins joined #gluster
10:21 klaxa joined #gluster
10:22 klaxa joined #gluster
10:22 klaxa joined #gluster
10:23 klaxa joined #gluster
10:23 klaxa joined #gluster
10:23 jclift joined #gluster
10:23 samppah i'm trying to activate geo-replication on a volume that already has a few vm images in it.. problem is that i don't see any data being transferred to the slave
10:24 klaxa joined #gluster
10:24 klaxa joined #gluster
10:24 vpshastry joined #gluster
10:24 samppah geo-replication status says N/A and master seems to be doing something because load is rising
10:25 klaxa joined #gluster
10:26 JoeJulian samppah: Probably building the marker data.
10:26 klaxa joined #gluster
10:26 klaxa joined #gluster
10:27 klaxa joined #gluster
10:27 klaxa joined #gluster
10:28 klaxa joined #gluster
10:28 samppah JoeJulian: okay, any idea how long it can take and what's it actually doing?
10:28 klaxa joined #gluster
10:28 samppah there isn't much information available about geo-rep :(
10:28 JoeJulian none
10:29 klaxa joined #gluster
10:29 JoeJulian I'm even guessing at the marker bit...
10:29 klaxa joined #gluster
10:30 klaxa joined #gluster
10:30 samppah it sounds like it's possible.. i'll leave it running and see later what's going on
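For reference, a geo-replication session like this is driven by commands along the following lines; the volume and slave names are hypothetical, and the slave URL syntax (host:/directory vs. host::volume) varies between releases:

    gluster volume geo-replication myvol slavehost::slavevol start
    gluster volume geo-replication myvol slavehost::slavevol status
    # raise verbosity to see what the initial crawl is doing
    gluster volume geo-replication myvol slavehost::slavevol config log-level DEBUG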
10:30 klaxa joined #gluster
10:31 klaxa joined #gluster
10:31 klaxa joined #gluster
10:32 klaxa joined #gluster
10:32 klaxa joined #gluster
10:32 jdarcy joined #gluster
10:32 ninkotech_ joined #gluster
10:33 klaxa joined #gluster
10:33 BSTR joined #gluster
10:33 klaxa joined #gluster
10:34 klaxa joined #gluster
10:35 klaxa joined #gluster
10:35 klaxa joined #gluster
10:36 klaxa joined #gluster
10:36 klaxa joined #gluster
10:37 klaxa joined #gluster
10:37 manik joined #gluster
10:37 klaxa joined #gluster
10:38 klaxa joined #gluster
10:38 klaxa joined #gluster
10:39 klaxa joined #gluster
10:39 klaxa joined #gluster
10:40 klaxa joined #gluster
10:41 klaxa joined #gluster
10:45 aravindavk joined #gluster
10:50 wrale joined #gluster
10:52 GLHMarmot joined #gluster
10:55 manik joined #gluster
10:56 nueces joined #gluster
11:20 stickyboy joined #gluster
11:23 manik joined #gluster
11:39 vpshastry joined #gluster
11:44 joeto joined #gluster
11:45 manik joined #gluster
11:52 stickyboy When I change volume options, do I have to restart glusterd?
11:53 ThatGraemeGuy using "gluster volume set ..."? not as far as I know
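That matches the documented behaviour: options applied with volume set take effect on the live volume, no daemon restart needed. A quick sketch with a hypothetical volume name and client address:

    gluster volume set myvol auth.reject 192.168.0.100
    gluster volume info myvol    # shows up under 'Options Reconfigured' immediately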
11:57 vshankar joined #gluster
12:09 plarsen joined #gluster
12:15 jdarcy joined #gluster
12:17 baz_ joined #gluster
12:24 bennyturns joined #gluster
12:38 mynameisdeleted how much cpu should glusterd or glusterfsd take?
12:40 glusterbot New news from resolvedglusterbugs: [Bug 915329] Crash in glusterd <http://goo.gl/abe93>
12:48 mynameisdeleted https://bugzilla.redhat.com/show_bug.cgi?id=919352
12:48 glusterbot <http://goo.gl/i23kf> (at bugzilla.redhat.com)
12:48 glusterbot Bug 919352: unspecified, unspecified, ---, kparthas, MODIFIED , glusterd segfaults/core dumps on "gluster volume status ... detail"
12:48 mynameisdeleted this crash bug only happens on xfs?
12:48 mynameisdeleted is ext4 better?
12:48 flrichar joined #gluster
12:49 mynameisdeleted is ext4 more stable?
12:50 JoeJulian @google glusterfs ext3
12:50 JoeJulian @google glusterfs ext4
12:50 glusterbot JoeJulian: How to check/enable extended attributes in Ext3 - Gluster ...: <http://goo.gl/VGg73>; [Gluster-users] Recommended underlining disk storage environment: <http://goo.gl/TDkZY>; GlusterFS Replication for Clustering » Source Allies Blog:
12:50 glusterbot JoeJulian: <http://goo.gl/UQNCX>; Introduction to Gluster - Think88: <http://goo.gl/7WAZT>; Gluster Fs: <http://goo.gl/dJnKf>; Playing with NFS & GlusterFS on Amazon cc1.4xlarge EC2 instance ...: <http://goo.gl/JcGLI> (1 more message)
12:50 glusterbot JoeJulian: GlusterFS bit by ext4 structure change | Gluster Community Website: <http://www.gluster.org/category/rants-raves/>; Gluster 3.1: Checking GlusterFS Minimum Requirements ...: <http://goo.gl/ZekRJ>; Question: Gluster & ext4 issues: <http://goo.gl/9oYv2>;
12:50 glusterbot JoeJulian: [Gluster-users] Migrate bricks from ext4 to xfs: <http://goo.gl/v8ZAB>; GlusterFS bit by ext4 structure change: <http://goo.gl/PEBQU>; Cluster - Ext4 considerations - BHL project planner: (1 more message)
12:55 manik joined #gluster
12:56 ninkotech_ joined #gluster
12:57 nhm joined #gluster
12:59 morse joined #gluster
13:07 stickyboy ThatGraemeGuy: Ok, I had set an auth.reject on a volume and it isn't working.
13:09 mynameisdeleted redhat recommends ext4 over xfs for most linux distros for a reason
13:09 mynameisdeleted and they aren't the ones to favor a faster filesystem for servers over a safer one
13:18 JoeJulian I'm not sure where you're getting your information. Not from the Kernel File and Storage Team Senior Manager & Architect, nor from their support staff. I could just be tired, but it sounds like you're trying to start the filesystem argument.
13:19 robos joined #gluster
13:20 mynameisdeleted centos6 installer gui
13:20 mynameisdeleted maybe thats outdated info
13:20 sahina joined #gluster
13:20 mynameisdeleted but redhat enterprise 6 and centos installer both say its not recommended to use xfs for root filesystem.. not sure their logic
13:21 JoeJulian It had to do with the grub support for xfs when they shipped 6.0.
13:21 mynameisdeleted ahhh
13:21 mynameisdeleted ok
13:22 mynameisdeleted and they didn't fix that when you have a separate /boot partition
13:22 plarsen joined #gluster
13:22 JoeJulian btw, you might want to re-check your performance comparisons again, too. The old multi-threaded io performance bottleneck that used to hit xfs is long gone.
13:22 mynameisdeleted I could understand grub and /boot recommendations being an issue.. hence I use ext2 for /boot normally
13:23 StucKman joined #gluster
13:24 JoeJulian The way I read it was that the grub developers stopped improving grub when they realized they were going in the direction of grub2
13:25 jdarcy joined #gluster
13:25 mynameisdeleted ahh
13:25 mynameisdeleted grub2 is neater
13:25 mynameisdeleted I can install it in bios portion of all drives
13:25 mynameisdeleted and use the bios image of it with pre-included xfs, lvm, raid5, etc modules
13:26 mynameisdeleted so I can use 1 partition in raid5/6 on each drive
13:26 mynameisdeleted and it boots
13:28 jtux joined #gluster
13:28 mynameisdeleted I didnt like grub2 at first but now I do like it
13:29 sjoeboo_ joined #gluster
13:29 nueces @google glusterfs xfs
13:29 nueces @google glusterfs xfs
13:29 glusterbot nueces: What's the status of XFS support? - Gluster Community - GlusterFS: <http://goo.gl/vDyvu>; Question: How big can I build a Gluster cluster?: <http://goo.gl/QHGT0>; [Gluster-users] mkfs.xfs inode size question: <http://goo.gl/ra44c>; Chapter
13:29 glusterbot nueces: 8. Setting up Red Hat Storage Volumes - Red Hat Customer ...: <http://goo.gl/Ax7wx>; Which file system should I use with Gluster?: <http://goo.gl/zoV78>; GlusterFS - Funtoo Linux: (1 more message)
13:31 mynameisdeleted ohh.. centos 6 uses kernel 2.6.32... xfs is a bad choice if kernel version is older than 2.6.39
13:31 mynameisdeleted it's a great filesystem in general but not for some scenarios like that
13:31 mynameisdeleted a modern linux distro should be using a 3.x kernel
13:32 Norky kernel 3 was not released/stable when EL 6 was being developed
13:32 JoeJulian Well with glusterfs, ext[34] is broken, so ...
13:32 mynameisdeleted more recent kernel is best answer?
13:32 mynameisdeleted + ext4
13:33 awheeler I thought only ext4 was broken.
13:33 mynameisdeleted +xfs I mean
13:33 johnmark plus a lot has been backported to that 2.6.32 series
13:33 johnmark given that Centos pulls from RHEL sources
13:33 JoeJulian awheeler: Nope, same change applies to 3
13:34 mynameisdeleted now.. proper cpu use of gluster is 10 times the cpu usage of cp?
13:34 Norky RHEL/CentOS are intended to be stable, and not change significantly over time, not bleeding edge, but see https://access.redhat.com/security/updates/backporting/
13:34 glusterbot <http://goo.gl/9EZir> (at access.redhat.com)
13:34 JoeJulian If you're following the bug, though, there's been some progress as well as a possible workaround.
13:34 mynameisdeleted I think cos6 is most likely too old to be affected by the ext4 bug anyway
13:35 JoeJulian @ext4
13:35 glusterbot JoeJulian: Read about the ext4 problem at http://goo.gl/PEBQU
13:35 JoeJulian Go read that blog post.
13:35 Norky mynameisdeleted, that was already linked. Read it.
13:38 mynameisdeleted ahh
13:38 mynameisdeleted is ext3 broken too?
13:39 Norky recent versions of ext3 have the same change which breaks Gluster, yes
13:39 mynameisdeleted but 3.2.9 kernel will work
13:40 Norky vanilla 3.2.9, yes, I believe so
13:40 mynameisdeleted recommendation is to use latest glusterfs + xfs + latest kernel?
13:40 glusterbot New news from newglusterbugs: [Bug 923228] 3.4 Alpha 2 Breaks swift file posting <http://goo.gl/nec4f>
13:41 Norky for glusterfs, I recommend using XFS with a recent kernel
13:41 mynameisdeleted which centos6 is not
13:41 Norky I suggest sticking to the officially supported kernel for your distro though
13:41 mynameisdeleted 2.6.32-279.22.1.el6.x86_64
13:42 Norky I'm using CentOS 6 with 2.6.32-358.2.1.el6.x86_64
13:42 Norky that is recent
13:43 mynameisdeleted glusterfs-3.3.1-1.el6.x86_64
13:43 mynameisdeleted so those 2 with xfs should be good?
13:43 mynameisdeleted for openstack use?
13:43 baz_ left #gluster
13:43 Norky it's not as big a number as 3.8, however it has many things backported from more recent releases. I suggest you stop obsessing over the 'latest/greatest'
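For the XFS recommendation, the commonly cited brick-formatting advice at the time was a 512-byte inode size (GlusterFS stores its metadata in extended attributes, which then fit inline in the inode) plus inode64 at mount time. A sketch, with the device and mount point as placeholders:

    mkfs.xfs -i size=512 /dev/sdb1        # room for gluster's xattrs in the inode
    mount -o inode64 /dev/sdb1 /bricks/b1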
13:43 mynameisdeleted I couldnt get openstack to work with 2x2 replication.. only 4x1
13:44 mynameisdeleted unless I mounted nfs
13:44 mynameisdeleted instead of gluster+fuse
13:44 GreyFoxx Anyone here used glusterfs with  a mail storage backend?   I'm considering using it to replace our existing old san  for pop/imap storage accessed with dovecot
13:50 mynameisdeleted how does gluster compare to moosefs?
13:55 lpabon joined #gluster
13:55 lpabon joined #gluster
13:55 Staples84 joined #gluster
13:57 manik joined #gluster
14:00 jbrooks joined #gluster
14:10 glusterbot New news from newglusterbugs: [Bug 921215] Cannot create volumes with a . in the name <http://goo.gl/adxIy>
14:20 hagarth joined #gluster
14:21 jdarcy joined #gluster
14:21 torbjorn1_ mynameisdeleted: I haven't tried or looked at MooseFS myself, but there is this: http://hekafs.org/index.php/2012/11/trying-out-moosefs/
14:21 glusterbot <http://goo.gl/KrN63> (at hekafs.org)
14:23 torbjorn1_ I've been looking at calculating my theoretical IOPS lately, and I'm wondering if anyone has any way of measuring your max IOPS ?
14:23 sgowda joined #gluster
14:23 torbjorn1_ I tried to do "dd if=/dev/zero of=somefile bs=8M count=1000", hoping that a buffered and sequential write would enable the disks to do a more or less optimal write
14:24 torbjorn1_ I was looking at iostat output while that was going on, and it seemed to hover around ~2100 TPS
14:24 bulde joined #gluster
14:26 bulde1 joined #gluster
14:27 Jedblack joined #gluster
14:29 Jedblack hi peeps: I have a strange issue and was hoping someone can shed some light on the issue, Gluster 3.3.1 Centos 6.3.  I am trying to mount a replicated gluster volume (2 bricks) in read-only via 'mount' command and /etc/fstab.  However it errors out saying that its an unknown attribute.  Online docs for mount.glusterfs show that -o ro is a valid option…. any ideas?
14:31 ndevos Jedblack: sounds like bug 853895
14:31 glusterbot Bug http://goo.gl/xCkfr medium, medium, ---, csaba, ON_QA , CLI: read only glusterfs mount fails
14:32 Jedblack ahh thanks so much !  i was beginning to think i reached the end of 'google' :)
14:33 jdarcy joined #gluster
14:35 theron joined #gluster
14:35 jdarcy_ joined #gluster
14:36 bugs_ joined #gluster
14:36 samppah torbjorn1_: can't say if there is an easy way to calculate max iops but you should run the test at the same time from different clients
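A buffered sequential dd mostly measures streaming bandwidth rather than IOPS. For a random-I/O ceiling, a purpose-built tool such as fio is closer to the mark; a minimal sketch, with the path and sizes as placeholders:

    # random 4k writes with direct I/O to bypass the page cache;
    # per the advice above, run it from several clients at once
    fio --name=iopstest --directory=/mnt/glustervol \
        --rw=randwrite --bs=4k --size=1g --numjobs=4 \
        --iodepth=32 --ioengine=libaio --direct=1 --group_reporting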
14:50 manik joined #gluster
14:55 daMaestro joined #gluster
14:56 aliguori joined #gluster
15:10 bennyturns joined #gluster
15:13 abyss^_ Can I mount gluster with noatime?
15:14 kshlm|AF1 joined #gluster
15:14 Myal joined #gluster
15:16 semiosis abyss^_: you can try... worst case it will be ignored
15:17 semiosis you can definitely put it on your brick mounts though
15:19 awheeler You'll want the underlying fs to be mounted noatime as well.
15:20 abyss^_ semiosis: I wonder how it would work on glusterfs :) I understand under gluster but with gluster? :))
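On the brick side, awheeler's suggestion is an ordinary fstab entry; a sketch with a hypothetical device and mount point:

    # /etc/fstab: brick filesystem mounted noatime
    /dev/sdb1  /bricks/b1  xfs  noatime,inode64  0  0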
15:24 vshankar joined #gluster
15:27 Nagilum joined #gluster
15:27 tryggvil__ joined #gluster
15:27 manik joined #gluster
15:28 manik joined #gluster
15:30 guigui joined #gluster
15:33 manik joined #gluster
15:37 hybrid512 Hi
15:37 glusterbot hybrid512: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
15:38 hybrid512 is it possible to use multiple NICs in a GlusterFS setup ? one "public" network and one "replication" network ?
15:38 semiosis hybrid512: with nfs clients, yes
15:38 hybrid512 semiosis: and with native client ?
15:38 semiosis but since fuse clients do replication themselves, not with them
15:39 StucKman also: https://bugzilla.redhat.com/show_bug.cgi?id=831699
15:39 glusterbot <http://goo.gl/SrRIn> (at bugzilla.redhat.com)
15:39 glusterbot Bug 831699: low, unspecified, ---, jdarcy, NEW , Handle multiple networks better
15:39 hybrid512 so if I want to use mount.glusterfs, it has to be on the same network, that's it?
15:39 StucKman (we hit this problem at work)
15:40 semiosis hybrid512: same network?  no, as long as a route exists between the networks it can work, though beware latency will impact performance
15:41 hybrid512 semiosis: to clarify, here is what I'm trying to do ... I have 4 nodes with 2 NICs on each and 2 client servers with one NIC on each.
15:42 hybrid512 I wanted to create a "gluster network" for performance concerns between my 4 nodes using one 10.0.0.x network on eth1 and have these nodes being available to my client servers through the 192.168.0.x network via eth0
15:43 semiosis uh huh
15:43 semiosis you can do that
15:43 hybrid512 when I create the volume on the nodes, everything seems fine (with 10.0.0.x IPs) but when I want to mount my share on the client servers (with 192.168.0.x IP), it is not working and I have plenty of DNS resolution errors in my logs
15:44 hybrid512 I'm not using IPs but dns names but my machines are all defined in the /etc/hosts file
15:46 hybrid512 Just to finish, I already tried GlusterFS with a 2 nodes setup and only one NIC per node and everything was working fine, I just wanted to try a more complex setup but it doesn't seem to be working
15:47 semiosis hybrid512: i'd check routes & iptables
15:47 semiosis ...after making sure (again) that the hosts files are correct
15:48 hybrid512 semiosis: humm ... routes probably ... iptables has been disabled
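A common way to get the layout hybrid512 describes is split-horizon name resolution: peers are probed by hostname, the servers resolve those names to the 10.0.0.x storage NICs, and the clients resolve the same names to the 192.168.0.x addresses. A sketch with hypothetical names; note the caveat above that fuse clients do replication themselves, so every brick must stay reachable from the client network:

    # /etc/hosts on the gluster servers
    10.0.0.1     gluster1
    10.0.0.2     gluster2

    # /etc/hosts on the clients (same names, client-facing addresses)
    192.168.0.1  gluster1
    192.168.0.2  gluster2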
15:52 morse joined #gluster
15:53 bala joined #gluster
15:54 dustint joined #gluster
15:54 sgowda joined #gluster
15:57 tryggvil_ joined #gluster
16:06 ultrabizweb joined #gluster
16:06 Guest77353 joined #gluster
16:06 DWSR joined #gluster
16:06 mooperd_ joined #gluster
16:18 GabrieleV joined #gluster
16:22 tryggvil_ joined #gluster
16:24 GabrieleV joined #gluster
16:25 jdarcy joined #gluster
16:29 sripathi joined #gluster
16:32 shylesh joined #gluster
16:36 manik joined #gluster
16:41 _pol joined #gluster
16:42 frakt joined #gluster
16:42 _pol joined #gluster
16:42 lanning joined #gluster
16:45 sgowda joined #gluster
16:49 zaitcev joined #gluster
16:50 tryggvil_ joined #gluster
16:50 sonne joined #gluster
16:52 manik joined #gluster
17:01 mohankumar joined #gluster
17:08 hagarth joined #gluster
17:17 jdarcy joined #gluster
17:23 bulde joined #gluster
17:24 y4m4 joined #gluster
17:40 wN joined #gluster
17:50 Mo____ joined #gluster
17:57 wrale joined #gluster
18:00 edong23 joined #gluster
18:00 hagarth joined #gluster
18:03 Jedblack joined #gluster
18:09 stickyboy joined #gluster
18:10 tryggvil__ joined #gluster
18:13 Oneiroi joined #gluster
18:17 andreask joined #gluster
18:18 samppah hmh.. it's over 8 hours already since i started geo-replication and it still hasn't transferred anything
18:34 hateya joined #gluster
18:37 lpabon joined #gluster
18:41 glusterbot New news from newglusterbugs: [Bug 923398] NFS problem <http://goo.gl/0UbO5>
18:54 edong23 joined #gluster
19:01 disarone joined #gluster
19:07 johnmark samppah: ouch :(
19:34 jdarcy joined #gluster
19:35 H__ samppah: it took 30 hours for my setup to start
19:35 H__ samppah: I hope you have 3.3.1
19:45 camel1cz joined #gluster
19:45 samppah H__: huh.. how much data you had? :)
19:45 __Bryan__ joined #gluster
19:46 samppah H__: this is on 3.4 alpha2
19:47 camel1cz Hi guys... does anyone run the KVM extensions in glfs3.4? How far is production stability?
19:48 stickyboy camel1cz: Well Gluster 3.4 isn't "stable" yet... so...
19:50 samppah camel1cz: do you mean libgfapi? i did some tests and it seemed to work very well.. 3.4 is still in alpha stage and i'm waiting for libgfapi support in ovirt
19:52 H__ samppah: back then it was around 2M directories and 10M files.
19:53 H__ samppah: 3.4 should be fine. I should have said hope you have >=3.3.1
19:53 lkoranda joined #gluster
19:53 camel1cz stickyboy: ...and have you any experience with it?
19:55 samppah H__: oh well, i think i'll just leave it running.. thank you very much :)
19:55 camel1cz samppah: And do you think it's able to handle not too demanding production system running KVM? Or would you go w/o libgfapi (on 3.3) and better wait?
19:56 stickyboy camel1cz: Nope, sorry.  I just know that it's not marked as "stable" by the devs, so should be treated as such. :D
19:57 camel1cz stickyboy: Well, I know a lot of apps marked not stable for years that ran with great stability :-D that's why I'm asking...
19:58 * camel1cz thinking about installing 3.3 to store KVM images and wait for 3.4
19:59 stickyboy camel1cz: :D
19:59 stickyboy True.  I guess you might as well ask.
20:01 _pol_ joined #gluster
20:04 samppah camel1cz: i have been using 3.4 with ovirt for couple weeks and haven't hit any issues that glusterfs could have caused (at least that's what i think :)
20:04 samppah this is pure test environment though..
20:07 camel1cz Hm... should do some tests... ugh, why is day only 24 hours? :-D
20:09 camel1cz samppah: Thanks for you time, dude!
20:09 camel1cz stickyboy: samppah: nite guys
20:09 camel1cz left #gluster
20:10 elyograg timezones are fun.  nite at 2pm. :)
20:11 stickyboy 23:00 here. :D
20:12 hagarth almost 2 am here. :D
20:15 jskinner_ joined #gluster
20:23 kbsingh joined #gluster
20:35 nueces joined #gluster
20:39 92AAB81LC joined #gluster
20:39 ramkrsna joined #gluster
20:39 ramkrsna joined #gluster
20:39 ramkrsna joined #gluster
20:44 awheeler I've got a 4-node cluster, 2 replica distributed volumes.  I've intentionally killed off one of the nodes to simulate failure of the node.  What's the best way to replace that node?
20:44 awheeler This is using 3.3
20:45 awheeler The new node has the same hostname
20:45 awheeler but different IP
20:49 semiosis awheeler: ,,(replace)
20:49 glusterbot awheeler: Useful links for replacing a failed server... if replacement server has different hostname: http://goo.gl/4hWXJ ... or if replacement server has same hostname:
20:49 glusterbot http://goo.gl/rem8L
20:49 semiosis that one ^^^
20:49 awheeler Doesn't work
20:50 semiosis try harder?
20:50 awheeler The mount points on the replacement server never get created/populated and the daemons aren't running for those bricks.
20:50 awheeler I have the volume list showing up, but that's it.
20:50 semiosis restart glusterd
20:51 semiosis that should spawn the missing daemons
20:51 awheeler No dice
20:51 semiosis check brick log file(s)
20:51 semiosis what do you mean mount points never get created/populated?
20:51 semiosis that could be something you need to fix manually
20:51 semiosis not sure i understand
20:51 awheeler The local copies
20:51 awheeler the real file system source copies.
20:52 awheeler the bricks, lol
20:52 semiosis if your brick directories are missing glusterfsd (brick export daemon) will fail to start
20:52 awheeler So, how do those get created?  Not in the doc.
20:52 semiosis brick log files will say for sure
20:53 semiosis for example, if your brick path was server1:/bricks/vol1/data -- where /bricks/vol1 was the mount point for /dev/sdb1, and data/ was a subdir on that mount, then you'd need to mount the fs, and create the empty data/ dir
20:53 semiosis check brick log file
20:53 semiosis to see if that's what is happening
20:54 awheeler On the new server?
20:54 semiosis yes
20:54 awheeler at /var/lib/glusterd/vols/<volume>/...log?
20:54 semiosis no, /var/log/glusterfs/bricks
20:54 awheeler ah
20:55 awheeler Hmm, it's complaining about the directories not existing -- but I didn't create them in the first place.
20:55 awheeler So, shall I just create them then?
20:55 awheeler the volume create made those dirs.
20:57 semiosis yes that's all correct
20:57 semiosis safety feature so glusterfs won't fill your root partition if the bricks arent mounted
20:57 awheeler Ah, gotcha.  Good to know.  :)
20:58 * semiosis would prefer if volume create didnt even mkdir
20:58 awheeler The bricks have started, and I would guess gluster volume heal will do the trick
20:58 semiosis heal should happen automatically
20:58 awheeler semiosis: I agree.  I appreciate consistency, and either it always should, or never should.
20:58 semiosis glustershd (self heal daemon)
20:59 awheeler hmm, heal isn't working, at least the gluster volume heal command
21:00 semiosis bbiab, coffee time
21:02 Jedblack joined #gluster
21:03 awheeler Looks like they've all healed up now.
21:09 awheeler So, that was the key -- creating the directories.
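In shell terms, the fix that worked here was roughly the following (paths and volume name hypothetical; peer identity had already been restored per the linked article):

    # on the replacement server: recreate the brick path the volume expects
    mount /dev/sdb1 /bricks/vol1     # the brick filesystem
    mkdir /bricks/vol1/data          # the subdir named at volume-create time
    service glusterd restart         # spawns the missing brick daemons
    gluster volume heal myvol full   # kick off a full self-heal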
21:11 awheeler Doc should be updated to specify that the directories must be created.
21:12 semiosis awheeler: please do, it's a public wiki :)
21:12 semiosis (i assume you mean the replace article)
21:16 sjoeboo_ joined #gluster
21:16 tryggvil__ joined #gluster
21:19 ramkrsna joined #gluster
21:19 awheeler Done
21:20 awheeler Thanks for the suggestions.
21:20 semiosis yw, glad i could help
21:20 awheeler Is there a 3.3 page?
21:21 awheeler I understand healing got a nice boost in 3.3.
21:22 semiosis a 3.3 page?
21:22 theron Hey all. from the webpage for downloads I get pushed into the 3.4.0alpha directory, but looking at the download website there's a 3.4.0alpha2 dir.  good to use?
21:23 semiosis @latest
21:23 glusterbot semiosis: The latest version is available at http://goo.gl/zO0Fa . There is a .repo file for yum or see @ppa for ubuntu.
21:24 semiosis johnmark: update http://www.gluster.org/download/ to point to the alpha2 directory for 3.4
21:24 glusterbot Title: Download | Gluster Community Website (at www.gluster.org)
21:24 _pol joined #gluster
21:24 semiosis theron: good to use... for what purpose?  3.4 isn't GA yet so you might not want to run it in production, but if you're trying glusterfs out, sure go ahead
21:25 semiosis alpha2, why not :)
21:28 Gilbs1 I created a new volume and got this after it finished:  gluster> Segmentation fault (core dumped)
21:28 Gilbs1 It started ok, should I worry or ignore?
21:28 semiosis Gilbs1: what version of glusterfs?
21:28 Gilbs1 3.3.1
21:28 semiosis distro/version?
21:29 Gilbs1 CentOS 6.2
21:29 semiosis weird
21:29 semiosis does it segfault consistently?
21:29 semiosis reproducibly?
21:30 Gilbs1 let me see
21:30 * tqrst has had plenty of inconsistent, unreproducible segfaults with 3.3.1 on centos :\
21:30 theron semiosis, just testing :)
21:30 theron semiosis, thanks :)
21:30 semiosis yw
21:31 Gilbs1 semiosis: I have one server that takes a good minute to bring up any gluster info via gluster volume info or gluster> volume info.  I did segfault once before, but after viewing the logs i noticed my DNS was wrong.  My second gluster box reports back very fast using gluster command or shell.
21:33 Gilbs1 semiosis: the slow box reports no volume, but if I run the command again, it will report the correct volume.
21:33 semiosis check the glusterd log on that box for problems, /var/log/glusterfs/etc-glusterfs-glusterd.log
21:41 Gilbs1 semiosis: don't see any wrongdoing in the logs.  He mounted and is replicating/distro as it should.  Hmmmmmm....
21:45 Gilbs1 I'm going to forgive and forget, I can't stay mad at you gluster!
21:52 ramkrsna joined #gluster
21:55 tyl0r joined #gluster
22:02 stoile Hi. I have a symlink to a glusterfs mount, i "touch" a file in there and there is no file there. How can I find out what is going wrong?
22:06 hattenator joined #gluster
22:09 andreask stoile: a symlink to a glusterfs native client mount? ... and touching on the mount directly works fine?
22:10 stoile nope.
22:11 stoile I get no error message, that's what is strange.
22:11 andreask also not on the server?
22:12 stoile Hmm, seems my xfs fs is broken. :-/
22:12 stoile ls: reading directory .: Structure needs cleaning
22:13 stoile I think, I'll try my luck with ext4. Should be no problem on a new kernel if I don't use gNFS, right?
22:14 aliguori joined #gluster
22:16 semiosis ~ext4 | stoile
22:16 glusterbot stoile: Read about the ext4 problem at http://goo.gl/PEBQU
22:17 semiosis it will be a problem whether or not you use nfs
22:18 semiosis stoile: did you do anything unusual, like maybe fixing split-brain?
22:18 stoile I think I read something about using dir_index or some other option, which forces the fs to return 32bit values... Hmm...
22:19 stoile semiosis: Not really. I am just trying this out. I have a very... interesting configuration, with 2 Raspberry Pis and GlusterFS on 2 USB Sticks. So there are a lot of strange things that can go wrong...
22:20 semiosis wow
22:20 semiosis idk what causes structure needs cleaning, or how to fix it :(
22:20 semiosis would like to know
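"Structure needs cleaning" is the kernel's EUCLEAN message: XFS detected on-disk corruption and wants an offline repair. The standard recovery, sketched with a hypothetical device (and the usual risk of losing recently written metadata):

    umount /bricks/b1
    xfs_repair /dev/sdb1        # repair the unmounted filesystem
    # if xfs_repair refuses because of a dirty log and a mount/umount
    # cycle doesn't clear it, -L zeroes the log (last resort):
    # xfs_repair -L /dev/sdb1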
22:20 stoile Inexpensive playground, if you don't like VMs. :-D
22:23 stoile So, if ext2/3/4 has problems and I can't get xfs running, that leaves me with every other fs that supports xattr, right? Hmm...
22:40 badone_ joined #gluster
22:44 sjoeboo_ joined #gluster
22:45 badone_ joined #gluster
22:51 badone joined #gluster
22:54 JoeJulian stoile: I was just reading about usb issues with rpi. It uses a usb controller that has proprietary drivers. Those drivers are known to lose packets. It's not uncommon for keystrokes to be lost from usb keyboards during writes to usb sticks so it wouldn't surprise me if it failed the usb stick too.
22:54 JoeJulian I happened across that while looking for dropped usb keyboard events for another piece of hardware I'm using.
22:55 stoile JoeJulian: Thanks, I'll keep that in mind.
22:56 stoile JoeJulian: I'll try btrfs, just for fun. If I get the same issues, I will need another storage solution.
22:56 * JoeJulian needs to buy one to play with.
23:01 Jedblack joined #gluster
23:03 johnmorr joined #gluster
23:04 jdarcy joined #gluster
23:04 17WABB5G8 joined #gluster
23:10 cyberbootje1 joined #gluster
23:20 al joined #gluster
23:30 ninkotech__ joined #gluster
23:46 sjoeboo joined #gluster
23:47 lpabon joined #gluster
23:50 lpabon joined #gluster
