
IRC log for #gluster, 2014-06-23


All times shown according to UTC.

Time Nick Message
00:06 edong23 joined #gluster
00:08 edong23 joined #gluster
00:13 pureflex joined #gluster
01:32 edong23 joined #gluster
01:34 DV joined #gluster
01:44 harish_ joined #gluster
02:01 d-fence joined #gluster
02:01 ThatGraemeGuy joined #gluster
02:08 fraggeln_ joined #gluster
02:18 pureflex joined #gluster
02:19 partner joined #gluster
02:34 Pupeno_ joined #gluster
02:35 Ark joined #gluster
02:36 sjm joined #gluster
02:55 bharata-rao joined #gluster
03:03 kshlm joined #gluster
03:09 gildub joined #gluster
03:36 XpineX joined #gluster
03:38 pureflex joined #gluster
03:44 coredump joined #gluster
03:50 itisravi joined #gluster
03:56 pureflex joined #gluster
03:57 shubhendu_ joined #gluster
03:59 kumar joined #gluster
04:04 nbalachandran joined #gluster
04:07 kanagaraj joined #gluster
04:10 bala joined #gluster
04:11 RameshN joined #gluster
04:24 [o__o] joined #gluster
04:28 dusmant joined #gluster
04:36 rastar joined #gluster
04:36 haomaiwa_ joined #gluster
04:40 spandit joined #gluster
04:42 vimal joined #gluster
04:43 Matthaeus joined #gluster
04:52 nshaikh joined #gluster
04:53 [o__o] joined #gluster
04:54 rjoseph joined #gluster
04:56 koobs1 joined #gluster
04:58 kdhananjay joined #gluster
04:58 [o__o] joined #gluster
05:00 ramteid joined #gluster
05:00 prasanthp joined #gluster
05:04 rastar joined #gluster
05:05 hagarth joined #gluster
05:12 vikumar joined #gluster
05:12 kanagaraj_ joined #gluster
05:12 spandit_ joined #gluster
05:12 shubhendu__ joined #gluster
05:12 prasanth|afk joined #gluster
05:12 itisravi_ joined #gluster
05:12 kaushal_ joined #gluster
05:12 RameshN_ joined #gluster
05:13 navid__ joined #gluster
05:13 sac`away` joined #gluster
05:13 dusmantkp_ joined #gluster
05:13 rastar_ joined #gluster
05:13 kdhananjay1 joined #gluster
05:14 rjoseph1 joined #gluster
05:15 prasanthp joined #gluster
05:16 hchiramm_ joined #gluster
05:17 karnan joined #gluster
05:18 nishanth joined #gluster
05:20 vpshastry joined #gluster
05:20 ppai joined #gluster
05:22 majeff joined #gluster
05:29 lalatenduM joined #gluster
05:33 prasanthp joined #gluster
05:34 deepakcs joined #gluster
05:35 davinder15 joined #gluster
05:38 psharma joined #gluster
05:38 meghanam joined #gluster
05:38 meghanam_ joined #gluster
05:42 aravindavk joined #gluster
05:45 coredumb Hello, to enable ACLs on a volume, is it possible to just remount my FS partitions with the acl flag without taking the volume offline?
05:46 saurabh joined #gluster
05:48 nshaikh joined #gluster
05:48 rjoseph joined #gluster
05:49 kanagaraj joined #gluster
05:49 dusmant joined #gluster
05:59 hchiramm__ joined #gluster
06:00 rgustafs joined #gluster
06:04 Ark joined #gluster
06:10 raghu` joined #gluster
06:34 jtux joined #gluster
06:37 rjoseph joined #gluster
06:38 hchiramm__ joined #gluster
06:41 bala joined #gluster
06:42 hagarth joined #gluster
06:46 majeff joined #gluster
06:49 majeff left #gluster
06:54 ekuric joined #gluster
07:00 glusterbot New news from newglusterbugs: [Bug 1093217] [RFE] Gluster module (purpleidea) to support HA installations using Pacemaker <https://bugzilla.redhat.com/show_bug.cgi?id=1093217>
07:01 dusmant joined #gluster
07:05 ctria joined #gluster
07:08 eseyman joined #gluster
07:11 keytab joined #gluster
07:15 Nightshader joined #gluster
07:21 ricky-ti1 joined #gluster
07:22 d-fence joined #gluster
07:28 ktosiek joined #gluster
07:30 glusterbot New news from newglusterbugs: [Bug 1075611] [FEAT] log: enhance gluster log format with message ID and standardize errno reporting <https://bugzilla.redhat.com/show_bug.cgi?id=1075611>
07:42 jvandewege joined #gluster
07:43 fsimonce joined #gluster
07:54 keytab joined #gluster
07:59 nthomas joined #gluster
07:59 rtalur__ joined #gluster
07:59 prasanth|brb joined #gluster
07:59 prasanth_ joined #gluster
07:59 lalatenduM_ joined #gluster
07:59 meghanam__ joined #gluster
07:59 vpshastry1 joined #gluster
07:59 kanagaraj_ joined #gluster
07:59 dusmantkp_ joined #gluster
07:59 itisravi joined #gluster
08:01 kdhananjay joined #gluster
08:02 kaushal_ joined #gluster
08:03 mbukatov joined #gluster
08:03 meghanam_ joined #gluster
08:04 Norky joined #gluster
08:06 shubhendu__ joined #gluster
08:06 kanagaraj__ joined #gluster
08:06 vpshastry joined #gluster
08:06 dusmantkp__ joined #gluster
08:06 prasanthp joined #gluster
08:06 lala__ joined #gluster
08:06 kshlm joined #gluster
08:06 ghenry joined #gluster
08:06 ghenry joined #gluster
08:06 rastar_ joined #gluster
08:06 prasanth|afk joined #gluster
08:06 itisravi_ joined #gluster
08:06 kdhananjay1 joined #gluster
08:06 hagarth joined #gluster
08:07 meghanam joined #gluster
08:07 karnan joined #gluster
08:07 sac`away joined #gluster
08:08 ppai joined #gluster
08:09 rjoseph joined #gluster
08:10 meghanam_ joined #gluster
08:15 Philambdo joined #gluster
08:16 spandit_ joined #gluster
08:28 fraggeln_ does anyone have a good config for glusterfs 3.5 and a shitload of small files?
08:28 fraggeln_ I get terrible performance.
08:29 fraggeln joined #gluster
08:34 suliba joined #gluster
08:35 fraggeln 220mb, 40k files
08:36 fraggeln and performance is shit, can someone point me in a good direction where I can start looking?
08:41 saurabh joined #gluster
08:42 Slashman joined #gluster
08:49 _polto_ joined #gluster
08:52 hagarth joined #gluster
08:53 liquidat joined #gluster
09:04 kumar joined #gluster
09:06 jtux joined #gluster
09:09 imad_VI joined #gluster
09:10 nthomas joined #gluster
09:10 aravindavk joined #gluster
09:19 imad_VI Hi guys, I'm trying to set up replication on 2 servers. I declared the link between them with "gluster peer probe @" and then I've created the volume like that ==> root@host1:gluster volume create my_volume replica 2 host1:/folder1 host2:/folder1. My problem is that when I create a file on one host it doesn't replicate to the other, does someone have an idea ?
09:21 fraggeln imad_VI: you can't just create a file directly in one of the servers' own filesystems
09:21 fraggeln you need to connect to the volume using either nfs or the glusterfs-client
09:22 imad_VI Thank you fraggeln, do I have to mount the filesystem on both servers ?
09:23 hagarth joined #gluster
09:24 fraggeln well, depends on what you want to do with it
09:26 fraggeln if you want your servers to access the replicated filesystem, you need to mount it on the servers themselves.
09:26 fraggeln if your servers have no need for it, you don't need to.
09:27 imad_VI All right, thanks again fraggeln.
09:27 fraggeln no worries.
09:27 fraggeln but a good rule is: always access the volume you created through a client (NFS or the glusterfs client)
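A minimal sketch of what fraggeln is describing, reusing the volume and brick names from imad_VI's question; the mount point /mnt/my_volume is an assumption:

    # On either server, after peer probing: create and start the replicated volume
    gluster volume create my_volume replica 2 host1:/folder1 host2:/folder1
    gluster volume start my_volume

    # On every machine that needs the data (including the servers themselves),
    # mount the volume through the glusterfs client and do all I/O via that
    # mount, never directly inside /folder1 (the brick)
    mkdir -p /mnt/my_volume
    mount -t glusterfs host1:/my_volume /mnt/my_volume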
09:30 glusterbot New news from newglusterbugs: [Bug 1109175] [SNAPSHOT] : Snapshot list should display origin volume (RFE) <https://bugzilla.redhat.com/show_bug.cgi?id=1109175> || [Bug 1040355] NT ACL : User is able to change the ownership of folder <https://bugzilla.redhat.com/show_bug.cgi?id=1040355>
09:34 Ark joined #gluster
09:34 haomaiwang joined #gluster
09:36 imad_VI Ok fraggeln, so it's not a good way to put redundancy between 2 servers ?
09:45 haomaiwang joined #gluster
09:48 deepakcs joined #gluster
09:50 haomaiwang joined #gluster
10:00 glusterbot New news from newglusterbugs: [Bug 949096] [FEAT] : Inconsistent read on volume configured with cluster.quorum-type auto <https://bugzilla.redhat.com/show_bug.cgi?id=949096> || [Bug 978297] Glusterfs self-heal daemon crash on split-brain replicate log too big <https://bugzilla.redhat.com/show_bug.cgi?id=978297> || [Bug 1029337] Deleted files reappearing <https://bugzilla.redhat.com/show_bug.cgi?id=1029337>
10:01 haomaiwang joined #gluster
10:21 tziOm joined #gluster
10:23 haomaiwang joined #gluster
10:23 RameshN joined #gluster
10:28 fraggeln imad_VI: well, why not. but it will not replicate one filesystem to another, you still need to use the client to access it :)
10:30 fraggeln imad_VI: if your goal is to replicate data between 2 servers, and nothing more, I dont know if glusterfs is the best way to do that.
10:35 bala joined #gluster
10:38 haomaiwa_ joined #gluster
10:42 vkoppad joined #gluster
10:42 imad_VI fraggeln: Actually, there are users who need access to this data over FTP, and I want that data shared instantly between the 2 servers that make up my cluster.
10:44 hyperbole_ joined #gluster
10:44 bala2 joined #gluster
10:44 aravindavk joined #gluster
10:47 haomai___ joined #gluster
10:48 hyperbole_ Hi, we are running glusterFS on two nodes with one brick on each node. We are now looking to migrate bricks from EXT4 to ZFS. What's the best way to migrate with minimal disruption? Mount ZFS on /brick_new, add brick_new to Gluster volume, rebalance, remove old EXT4 brick, repeat on the second node? Or is there a better way? Thanks
10:53 nbalachandran joined #gluster
10:54 pdrakeweb joined #gluster
11:04 tdasilva joined #gluster
11:07 itisravi_ hyperbole_: you don't need to rebalance before remove-brick.  'remove-brick start' does the rebalance for you
11:12 hyperbole_ itisravi_ : thanks. is there a way to pause "remove-brick" so that we can only run this out of hours only as I suspect rebalancing will put more I/O load
11:13 ekuric left #gluster
11:13 itisravi_ hyperbole_: you could "stop" it: Usage: volume remove-brick <VOLNAME> [replica <COUNT>] <BRICK> ... <start|stop|status|commit|force>
11:13 ekuric joined #gluster
11:14 hyperbole_ itisravi_: does it pick up from where it left when I start again ?
11:16 itisravi_ hyperbole_: i'm guessing it should. whatever has been moved already to the destination is not present in the source brick anymore.
11:17 hyperbole_ itisravi_: that makes sense. Thank you. Oh and one last question, do I need to add bricks in pairs on each node or can I do one node at the time?
11:19 itisravi_ hyperbole_: If you have a replicate configuration then you need to add as many bricks as there are replicas. If it is a plain distribute volume, there is no restriction.
11:24 ppai joined #gluster
11:25 gildub joined #gluster
11:25 hyperbole_ itisravi_:yes have "Type: Replicate". So if I understand this right I have to "add-brick node1:/zfs", "add-brick node2:/zfs" only then do "remove-brick node1:/ext4" and "remove-brick node2:/etx4" ? Thanks
11:26 Nightshader Question: Is it possible to gain volume snapshot features somehow?
11:26 Nightshader hyperbole_: May I ask you about the ZFS migration in PM?
11:27 hyperbole_ Nightshader: sure
11:28 itisravi_ hyperbole_:  just so that I have a clear idea, what does "Number of Bricks:" show when you run gluster volume info <volname>
11:29 hyperbole_ itisravi_ : 2
11:29 itisravi_ hyperbole_: you mean 1 x 2 = 2 ?
11:34 hyperbole_ itisravi_:http://pastebin.com/5aH6jyhv
11:34 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
11:34 hyperbole_ itisravi_:http://fpaste.org/112158/23266140/
11:34 glusterbot Title: #112158 Fedora Project Pastebin (at fpaste.org)
11:35 lalatenduM joined #gluster
11:36 itisravi_ hyperbole_: got it... looks like you are running an older version. AFAIK, at least from glusterfs 3.4 it shows 1 x 2 = 2.
11:39 edward1 joined #gluster
11:39 diegows joined #gluster
11:39 hyperbole_ itisravi_: version 3.2.7
11:40 itisravi_ hyperbole_: gluster volume add-brick gv0 node1:/zfs node2:/zfs, followed by gluster volume remove-brick gv0 node1:/ext4 node2:/ext4 start
11:40 itisravi_ hyperbole_: Then monitor with gluster volume remove-brick gv0 node1:/ext4 node2:/ext4 status
11:40 hyperbole_ also just noticed "Migration Volumes" section in the docs. Could I just use "replace-brick"?
11:40 itisravi_ hyperbole_: once it's done: gluster volume remove-brick gv0 node1:/ext4 node2:/ext4 commit
11:41 hyperbole_ itisravi_: thanks.
11:41 itisravi_ hyperbole_: TBH I don't know about 3.2..in the latest releases, replace-brick is not recommended
11:42 hyperbole_ itisravi_: thank you once again
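Pulled together, the migration itisravi_ outlines looks like the sketch below, assuming a replica 2 volume named gv0 with the new ZFS filesystems mounted at /zfs and the old bricks at /ext4 on each node. Whether all of these subcommands exist on the 3.2.7 install mentioned above is an open question; they are standard on later releases.

    # Add the new ZFS bricks in pairs, matching the replica count
    gluster volume add-brick gv0 node1:/zfs node2:/zfs

    # Start draining the old EXT4 bricks; data is migrated off them
    gluster volume remove-brick gv0 node1:/ext4 node2:/ext4 start

    # Check progress; the operation can be stopped and restarted out of hours
    gluster volume remove-brick gv0 node1:/ext4 node2:/ext4 status

    # When the status shows the migration is complete, finalize it
    gluster volume remove-brick gv0 node1:/ext4 node2:/ext4 commit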
11:42 itisravi_ hyperbole_: welcome :)
11:47 nthomas joined #gluster
11:47 liquidat joined #gluster
11:48 hagarth joined #gluster
11:49 w1ntermute joined #gluster
11:58 qdk joined #gluster
12:01 Slashman_ joined #gluster
12:02 davinder15 joined #gluster
12:06 kanagaraj__ joined #gluster
12:06 kanagaraj joined #gluster
12:09 liquidat joined #gluster
12:12 itisravi_ joined #gluster
12:14 davent joined #gluster
12:14 davent Is it possible to reset the counter for the ports used by bricks?
12:15 davent i.e. make the ports chosen for the bricks start at 49152 again?
12:15 Nightshader Is ZFS supported as brick filesystem? Pro's/Con's?
12:17 Ark joined #gluster
12:20 tdasilva_ joined #gluster
12:24 davent left #gluster
12:27 rjoseph joined #gluster
12:34 firemanxbr joined #gluster
12:35 dusmantkp__ joined #gluster
12:36 ppai joined #gluster
12:46 gildub joined #gluster
12:51 sroy_ joined #gluster
12:51 vpshastry joined #gluster
12:51 edong23 joined #gluster
12:54 hagarth joined #gluster
13:00 julim joined #gluster
13:01 glusterbot New news from newglusterbugs: [Bug 1112260] build: Glusterfs library file not compiled with RELRO or PIE <https://bugzilla.redhat.com/show_bug.cgi?id=1112260>
13:05 ctria joined #gluster
13:20 dusmantkp__ joined #gluster
13:20 koobs Nightshader: If not, that would be EPIC :]
13:22 RicardoSSP joined #gluster
13:37 bennyturns joined #gluster
13:44 nbalachandran joined #gluster
13:48 japuzzo joined #gluster
13:49 haomaiwa_ joined #gluster
13:51 vpshastry joined #gluster
13:52 haomai___ joined #gluster
13:58 kshlm joined #gluster
14:05 rotbeard joined #gluster
14:11 sjm joined #gluster
14:19 msciciel_ joined #gluster
14:20 msciciel joined #gluster
14:22 cfeller joined #gluster
14:23 elico joined #gluster
14:28 wushudoin joined #gluster
14:36 coredump joined #gluster
14:36 _polto_ joined #gluster
14:37 _polto_ joined #gluster
14:37 chirino joined #gluster
14:38 vpshastry joined #gluster
14:39 mortuar joined #gluster
14:49 stickyboy "Error: One or more connected clients cannot support the feature being set."
14:49 stickyboy Errr.
14:50 lmickh joined #gluster
14:56 mortuar joined #gluster
14:57 harish joined #gluster
15:02 JoeJulian stickyboy: version mismatch
15:05 ndk joined #gluster
15:06 jtux joined #gluster
15:06 stickyboy JoeJulian: So a client connected to the named node is old?
15:07 JoeJulian stickyboy: That's what that means, yes.
15:07 stickyboy I checked all servers; ansible storage -a 'yum list installed gluster*'
15:08 stickyboy Hmmm, must be something in memory then, as my client nodes are all on the same version (3.5).
15:10 jag3773 joined #gluster
15:17 dusmant joined #gluster
15:19 daMaestro joined #gluster
15:21 ramteid joined #gluster
15:25 stickyboy JoeJulian: Any tips on narrowing down which client it is?
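One way to narrow it down, assuming a volume named myvol; the first command lists the clients connected to each brick, and the host group name in the second is illustrative:

    # List the clients currently connected to each brick (host:port pairs)
    gluster volume status myvol clients

    # Then check the installed client version on each host that shows up
    ansible clients -a 'glusterfs --version'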
15:27 kanagaraj joined #gluster
15:35 zaitcev joined #gluster
15:36 hchiramm_ joined #gluster
15:38 glusterbot New news from resolvedglusterbugs: [Bug 764731] Support SSL in socket transport <https://bugzilla.redhat.com/show_bug.cgi?id=764731>
15:47 * tom[] likes _netdev
15:47 kiwikrisp joined #gluster
15:48 stickyboy tom[]: Why? :)
15:48 tom[] it allows my servers to boot!
15:49 tom[] otherwise mounting gluster is attempted before the network is up and the boot sequence hangs
15:49 stickyboy tom[]: I was hoping you'd say that.
15:50 stickyboy My boot sequence hangs, CentOS 6.5.
15:50 stickyboy I don't reboot very often, but I notice it when I do.
15:52 tom[] i read that mountall honoring _netdev for fstype=glusterfs is relatively new and distro dependent
15:52 tom[] but i don't believe everything i read on the internet
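For reference, a minimal fstab entry of the kind tom[] is describing; server and volume names are placeholders:

    # /etc/fstab - delay the gluster mount until the network is up
    server1:/myvol  /mnt/myvol  glusterfs  defaults,_netdev  0 0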
15:53 davinder15 joined #gluster
15:55 jmarley joined #gluster
15:55 jmarley joined #gluster
16:00 Pupeno joined #gluster
16:00 stickyboy tom[]: Which distro are you on?
16:00 tom[] ubuntu 14.04
16:01 tom[] just started with it a couple of weeks ago
16:02 stickyboy tom[]: Nice.  CentOS 6.5 is "supposed" to work.  But grr.
16:03 kiwikrisp fresh install of gluster 3.5 on new CentOS 6.5 install. 2 node replica using NFS to host VHD for XenServer 6.2. My gluster process runs >100% and starts eating all the memory (>87%) then stops NFS access even though the volume status says everything is good?? I've been using these same machines with gluster versions 3.3 and 3.4 without incident but was hoping to take advantage of some of the 3.5 updates. Am I
16:03 kiwikrisp missing something? Is there an unfixed bug in 3.5? Should I just downgrade to 3.4??
16:04 JordanHackworth joined #gluster
16:06 tom[] kiwikrisp: i'm no gluster expert but that sounds like a good reason to report a bug, if there isn't already one, and to use 3.4
16:09 jbrooks left #gluster
16:11 siel joined #gluster
16:14 Slashman joined #gluster
16:22 stickyboy JoeJulian: Found the client. :)
16:25 rturk joined #gluster
16:30 BradLsys joined #gluster
16:31 nage joined #gluster
16:31 jbrooks joined #gluster
16:31 BradLsys joined #gluster
16:31 baojg joined #gluster
16:32 baojg joined #gluster
16:48 d-fence joined #gluster
16:50 tg2 joined #gluster
16:51 Mo__ joined #gluster
16:52 diegows joined #gluster
16:57 kiwikrisp tom[]: good call, looks like somebody beat me to the punch bug 1099270 is the likely culprit. Sure wish that bug would have been mentioned in the 3.5 release notes. I would've saved myself a whole bunch of time and trouble. Looks like it's slated to be resolved in 3.5.2. So if you're using NFS, don't upgrade to 3.5, that's the bottom line.
16:57 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1099270 high, unspecified, ---, rgowdapp, ASSIGNED , Gluster 3.5.0 NFS server crashes under load
16:57 JoeJulian If it's not in the release notes, I would venture to guess that it was found after release.
16:58 JoeJulian Pretty likely since if it was found before release, it would have been fixed.
17:04 baojg_ joined #gluster
17:04 imad_VI left #gluster
17:05 kkeithley filed 19-May-2014. Yeah, it was found after 3.5.0, and pretty late in the 3.5.1 release cycle.
17:05 kiwikrisp JoeJulian: Yes, it was found 5/19; the release was 4/17. Would be nice if when these types of bugs are discovered the release notes on the website would be updated, because it's pretty critical. Understand that folks are busy. Not a big deal, I'll just update my process to include checking for bugs in the release before upgrading to it.
17:10 kkeithley you want release notes (for 3.5.0) updated after the release with info about bugs found after the release?
17:19 Guest70894 is xfs recommended over ext4 for gluster bricks ?
17:20 JoeJulian Depends who you ask.
17:21 JoeJulian More recent analysis shows them to be comparable, perhaps leaning more toward xfs depending on how fast your hardware is.
17:23 chirino joined #gluster
17:26 nage joined #gluster
17:29 Matthaeus joined #gluster
17:30 gmcwhistler joined #gluster
17:37 jobewan joined #gluster
17:41 calum_ joined #gluster
17:47 doekia joined #gluster
17:47 doekia_ joined #gluster
17:50 sjm joined #gluster
17:58 Bullardo joined #gluster
18:05 [o__o] joined #gluster
18:21 [o__o] joined #gluster
18:21 theron joined #gluster
18:21 theron joined #gluster
18:22 fraggeln I do this: gluster> volume heal compass01 info
18:22 fraggeln Gathering list of entries to be healed on volume compass01 has been successful
18:22 fraggeln and I find 3 entries
18:23 fraggeln how do I see progress of healing, and can I speed it up somehow?
18:24 JoeJulian You cannot see the progress. You may be able to approximate progress based on disk usage and/or file sizes on the bricks, but there's no way if you have block differences to know that ahead of time.
18:25 fraggeln okay, it's only 3 files as far as I can see, and it has been like this for almost 6 hours
18:25 JoeJulian A long time ago we had a "turbo" button we could press to make things go faster (and, of course, that button was never disengaged). Eventually they decided it was a stupid marketing ploy and removed it.
18:26 JoeJulian Check your client logs.
18:26 JoeJulian and glustershd.log(s)
18:26 [o__o] joined #gluster
18:26 fraggeln No. of heal failed entries: 3
18:26 fraggeln thats often bad right?
18:26 JoeJulian Well, there are varying degrees of "bad".
18:27 JoeJulian Check the logs to find out why it's failing.
18:27 fraggeln [2014-06-23 18:22:38.553201] W [client-rpc-fops.c:574:client3_3_readlink_cbk] 0-compass01-client-1: remote operation failed: Stale NFS file handle
18:30 [o__o] joined #gluster
18:31 fraggeln JoeJulian: what should I look for?
18:31 fraggeln [2014-06-23 18:22:38.553225] I [afr-self-heal-entry.c:1538:afr_sh_entry_impunge_readlink_sink_cbk] 0-compass01-replicate-0: readlink of <gfid:bb48c274-8913-452d-943b-88ea2dfb9778>/classpreloader.php on compass01-client-0 failed (Stale NFS file handle)
18:31 fraggeln that seems bad.
18:32 JoeJulian I would look for " E "
18:32 fraggeln after the timestamp?
18:33 JoeJulian grep ' E ' /var/log/glusterfs/glustershd.log
18:35 zaitcev joined #gluster
18:35 [o__o] joined #gluster
18:39 fraggeln JoeJulian: nothing important as it seems
18:39 fraggeln last E is "[2014-06-23 13:12:33.318912] E [client-handshake.c:1742:client_query_portmap_cbk] 0-compass01-client-0: failed to get the port number for remote subvolume. Please run 'gluster volume status' on server to see if brick process is running."
18:40 JoeJulian That's on all the servers?
18:40 [o__o] joined #gluster
18:41 fraggeln yep
18:41 fraggeln all 3 of them
18:41 fraggeln this is output of volume heal
18:41 fraggeln http://pastebin.com/M1YREQFp
18:41 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
18:42 fraggeln ok, fpaste it is :)
18:42 fraggeln sorry mr bot ;)
18:43 JoeJulian Those all look like directories, is that so?
18:43 BradLsys_ joined #gluster
18:44 Pupeno_ joined #gluster
18:45 fraggeln JoeJulian: correct
18:46 JoeJulian Check the ,,(extended attributes) for one of those directories on all three bricks.
18:46 glusterbot (#1) To read the extended attributes on the server: getfattr -m .  -d -e hex {filename}, or (#2) For more information on how GlusterFS uses extended attributes, see this article: http://hekafs.org/index.php/2011/04/glusterfs-extended-attributes/
18:46 [o__o] joined #gluster
18:47 [o__o] joined #gluster
18:47 mjsmith2 joined #gluster
18:48 chirino joined #gluster
19:00 fraggeln JoeJulian: thank you very much.
19:02 fraggeln JoeJulian: while you are awake, is there any special trick that I can apply on my rig? I have a shitload of small files that need to be handled.
19:02 fraggeln an url that points me in the right direction would be nice :)
19:03 fraggeln I get like 20-25mbit on a gig network :/
19:09 JoeJulian ~php | fraggeln
19:09 glusterbot fraggeln: (#1) php calls the stat() system call for every include. This triggers a self-heal check which makes most php software slow as they include hundreds of small files. See http://joejulian.name/blog/optimizing-web-performance-with-glusterfs/ for details., or (#2) It could also be worth mounting fuse with glusterfs --attribute-timeout=HIGH --entry-timeout=HIGH
19:10 glusterbot --negative-timeout=HIGH --fopen-keep-cache
19:11 fraggeln it's not just php, also escenic ;)
19:11 fraggeln but, I will investigate that one first.
19:11 fraggeln thanks
19:14 fraggeln JoeJulian: last stupid question, can I just add those flags in /etc/fstab?
19:15 JoeJulian fraggeln: yes
19:16 JoeJulian And, although the link is for php since that's the most common question, the philosophies and problems are similar.
19:17 fraggeln we use varnish and memcached :)
19:17 fraggeln and apc
19:17 JoeJulian excellent
19:17 JoeJulian stat=0?
19:17 fraggeln ?
19:18 JoeJulian apc.stat=0 avoids stat lookups to see if the files have changed. Saves lots of iops at the cost of having to issue a reload if you change the software.
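For reference, the APC setting JoeJulian is referring to; the file path is an assumption and varies by distro:

    ; /etc/php.d/apc.ini - skip per-request stat() checks on cached files;
    ; remember to reload PHP after deploying new code
    apc.stat=0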
19:19 fraggeln ab-gluster-file01:/compass01 /var/www  glusterfs defaults,_netdev,attribute-timeout=HIGH,entry-timeout=HIGH,negative-timeout=HIGH,fopen-keep-cache 0 0 <-- looks about right?
19:20 fraggeln dunno about this application that we are using for testing gluster, but our bigger php-sites have apc.stat=0
19:21 JoeJulian looks right
19:22 fraggeln thanks
19:22 fraggeln Ill give it a go with a copy then
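One caveat on the fstab line above: HIGH in the glusterbot factoid is presumably a placeholder rather than a literal value, since these FUSE options take a number of seconds. A hedged example with arbitrary high timeouts:

    ab-gluster-file01:/compass01 /var/www glusterfs defaults,_netdev,attribute-timeout=600,entry-timeout=600,negative-timeout=600,fopen-keep-cache 0 0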
19:23 fraggeln uhh, performance dropped below 18mbit :D
19:24 LebedevRI joined #gluster
19:24 fraggeln small files is a killer :/
19:26 fraggeln JoeJulian: would you use 3.5 for production?
19:29 JoeJulian no
19:29 JoeJulian (sorry hagarth)
19:30 JoeJulian 3.4.4-2 is the version I would recommend.
19:33 fraggeln oh :)
19:33 fraggeln we are on 3.5
19:33 fraggeln maybe that's why we have so many problems with performance.
19:33 Ark joined #gluster
19:33 JoeJulian Meh, I'm not sure about that. There are other bugs that keep it off my recommended list.
19:34 JoeJulian 3.5.2 is looking good though...
19:34 mjsmith2 joined #gluster
19:35 bennyturns fraggeln, what is your workload like?  Are you looking at a small file workload almost all of the time?
19:36 fraggeln bennyturns: yea
19:36 bennyturns fraggeln, are you using 1 big RAID for a brick?  how are your bricks configured?
19:36 fraggeln 2 of the boxes has 6sas-disks in raid10
19:37 fraggeln and the 3rd box has a Compellent SAN attached using 8Gbit FC
19:37 fraggeln OS is installed on its own raid1 local on all the boxes.
19:38 fraggeln dual quadcores with 32gb ram
19:39 bennyturns fraggeln, in my experience with small files the more bricks you have the better your perf will be.  I suggest trying a couple different configs and see what works best in your workload
19:39 bennyturns fraggeln, I have tested with JBOD 3 way, RAID 10, RAID 1s and JBOD
19:39 bennyturns and RAID 6.
19:40 fraggeln so, smaller bricks and maybe 10 bricks / server instead of just 1?
19:40 bennyturns JBOD and a bunch of RAID1s outperformed a single brick in those cases
19:41 fraggeln intresting
19:41 bennyturns fraggeln, I owuld test with different configs.  Try a bunch of RAID1s
19:41 fraggeln yea, I can do that.
19:41 bennyturns fraggeln, the one thing I _haven't_ done yet is try 1 RAID 6 carved into multiple LUNs
19:41 fraggeln since, when I do a large file, like a debian iso, its lightningfast
19:41 bennyturns yup
19:42 fraggeln 700mbit, no problem.
19:42 bennyturns the fall off is somewhere around 64 KB in file size?
19:42 bennyturns iirc?
19:43 bennyturns I usually write > 1MB block sizes to maximize speed for my tests
19:44 fraggeln bennyturns: what fs do you use on your bricks?
19:44 fraggeln xfs?
19:44 bennyturns fraggeln, FYI this is what I use for my benchmarks https://github.com/bengland2/smallfile
19:44 glusterbot Title: bengland2/smallfile · GitHub (at github.com)
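A hypothetical run of that benchmark; the flag names below are from memory and may differ between versions, so treat them as assumptions and check the project README:

    # Small-file create workload on the gluster mount: 8 threads, 4 KB files
    python smallfile_cli.py --top /mnt/gluster/smf --threads 8 \
        --file-size 4 --files 10000 --operation create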
19:45 bennyturns fraggeln, yes XFS with -i size=512
19:45 fraggeln someone should write a good howto regarding glusterfs and small files ;)
19:46 bennyturns fraggeln, here is a good perf howto:
19:46 bennyturns http://rhsummit.files.wordpress.com/2013/07/england_th_0450_rhs_perf_practices-4_neependra.pdf
19:50 bennyturns fraggeln, LMK what you see!  We are working on recommendations for small file workloads and trying to find the sweet spot for a mixed workload.  Hopefully there will be something written up and available soon
19:50 jag3773 joined #gluster
19:50 jag3773 joined #gluster
19:51 JoeJulian bennyturns: I thought performance tests showed no difference with 512byte inode sizes.
19:52 bennyturns JoeJulian, prolly force of habit, I have been doing it so long
19:52 fraggeln bennyturns: will do.
19:53 fraggeln since my raid is fast, I will just try to add 3 more bricks on each node, and let it rebalance over night.
19:53 fraggeln if it goes smooth, I will rebuild the raid10 and do separate raid1 instead.
19:53 JoeJulian fraggeln: Don't forget, part of the cost of "small files" is the tcp overhead.
19:53 bennyturns fraggeln, sweet!  once 3.6 goes out I am gonna revisit my multiple LUNs on 1 big raid idea
19:53 JoeJulian Also, don't search for files that aren't there.
19:54 JoeJulian Don't rebalance unless you're on 3.5.1 (in the 3.5 tree) or 3.4.4-2.
19:57 fraggeln JoeJulian: im using the debian-packages
19:57 fraggeln glusterfs-server                   3.5.0-1
19:58 JoeJulian That does not change my statement.
19:58 JoeJulian There is a bug that will crash all your clients, including the one doing the rebalance.
19:59 fraggeln oh
19:59 fraggeln that sounds bad;)
19:59 fraggeln ohh, impressive
20:00 fraggeln I just added 2 more bricks on each node
20:00 JoeJulian Tell me about it when you have 100 compute nodes crash...
20:00 fraggeln performance just trippled
20:00 fraggeln intresting
20:01 bennyturns fraggeln, what are you using as a benchmark?
20:01 fraggeln bennyturns: time + cp ;)
20:01 fraggeln then bmon to look at the interfaces
20:01 JoeJulian cp is going to suck just from its own buffer size.
20:02 JoeJulian won't even fill up a jumbo frame.
20:02 bennyturns fraggeln, I use:
20:02 bennyturns dd if=/dev/zero of=/my-gluster-mount bs=1024k count=1000 conv=sync
20:03 bennyturns for just messing around
20:03 bennyturns I also have a script that drops cache across all servers / clients so I am not testing in cache
20:03 JoeJulian I'm lazy so I say 1M instead of 1024k
20:04 fraggeln well, bigger files is not an issue
20:04 JoeJulian echo 3 > /proc/sys/vm/drop_caches
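A sketch of the kind of helper bennyturns mentions; hostnames and the mount path are assumptions:

    # Drop the page cache on all servers and clients so reads hit the bricks
    for host in server1 server2 server3 client1; do
        ssh root@"$host" 'sync; echo 3 > /proc/sys/vm/drop_caches'
    done

    # Then run the write test against the gluster mount (conv=fsync flushes
    # the file before dd reports its throughput)
    dd if=/dev/zero of=/mnt/gluster/ddtest bs=1M count=1000 conv=fsync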
20:04 fraggeln bs should be like 16k ;)
20:04 bennyturns http://fpaste.org/112372/35538541/
20:04 glusterbot Title: #112372 Fedora Project Pastebin (at fpaste.org)
20:05 bennyturns fraggeln, yaya lower the BS to what you need :)
20:05 JoeJulian We have trouble lowering the BS in this channel.
20:05 fraggeln http://fpaste.org/112369/53666140/ <-- does that look sane at all? :)
20:05 glusterbot Title: #112369 Fedora Project Pastebin (at fpaste.org)
20:05 fraggeln JoeJulian: ^^
20:05 JoeJulian You have 64 io channels?
20:06 fraggeln i doubt it :)
20:06 JoeJulian Then that's not very sane. :D
20:06 bennyturns JoeJulian, is replica 3 safe on 3.4?  I thought it's more of a 3.6 thing
20:06 bennyturns oops I meant 3.5
20:06 JoeJulian I've been safely using replica 3 since 3.0
20:06 fraggeln JoeJulian: what is a good value then? 8?
20:07 JoeJulian Depends on your hardware.
20:08 fraggeln 2457600000 bytes (2.5 GB) copied, 100.047 s, 24.6 MB/s
20:08 fraggeln that was with 32k bs
20:09 fraggeln same dd on local disks
20:09 fraggeln 2457600000 bytes (2.5 GB) copied, 1.89004 s, 1.3 GB/s
20:09 fraggeln ;)
20:09 bennyturns for a replica of 3 over gigabit I would expect ~30 MB/sec with a greater than 64k block size
20:09 bennyturns replica of 2 would get ~60
20:10 JoeJulian writes, not reads...
20:10 bennyturns ^^^^
20:10 bennyturns ya
20:10 JoeJulian reads depends on load.
20:10 fraggeln bennyturns: yea, but i get 30Mbit/s ;)
20:12 JoeJulian what frame size?
20:13 fraggeln MTU is default, ie 1500
20:13 fraggeln no jumboframes yet
20:20 fraggeln 6553600000 bytes (6.6 GB) copied, 292.86 s, 22.4 MB/s with a BS=64kb
20:20 fraggeln it "ok" i guess
20:22 bennyturns fraggeln, the best you will ever see with replica 3 is line speed / 3
20:22 fraggeln at the moment, its 1x1gbit
20:23 bennyturns ya so you are not far off
20:23 bennyturns 120 MB / sec is gigabit
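As a rough sanity check of those figures (assuming the client writes to every replica itself): gigabit Ethernet carries about 125 MB/s of raw line rate, and with replica 3 each byte is sent three times, so the ceiling is roughly 125 / 3 ≈ 41 MB/s; after protocol overhead, the ~30 MB/s estimate above is consistent with that, and the same arithmetic gives ~60 MB/s for replica 2.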
20:23 fraggeln 409600000 bytes (410 MB) copied, 8.70145 s, 47.1 MB/s
20:23 fraggeln that was with 4k bs
20:23 bennyturns its a small file, you are prolly seeing server side caching
20:23 fraggeln but, that was done from one of the servers ;)
20:24 bennyturns gluster doesn't cache client side, but servers do
20:24 fraggeln can I do something on the clients to help the servers?
20:24 bennyturns thats why I drop cache everywhere before perf tests
20:26 bennyturns fraggeln, I mean you may be able to eke out an extra 8 MB / sec but is that going to tip the scale for you?  What is your goal here?
20:28 fraggeln bennyturns: the goal is to serve 900k images to 4 webfronts ;)
20:28 fraggeln with good enough speed :)
20:28 bennyturns mostly read workload?
20:28 fraggeln yea
20:29 bennyturns what isgood enough speed?
20:29 fraggeln the webfronts will have local ssd's to write scaled images on.
20:30 bennyturns so read gluster, re scale image to local hdd?
20:30 fraggeln I guess current speed is good enough once all images are in place; the main problem is that it takes too long to rsync the current images from our old nfs-server
20:30 fraggeln yea, mostly reads.
20:31 fraggeln editors will save images to it when updating articles, but its like 99,99% read during a 24h window.
20:32 fraggeln when doing dd it get good speed, when I do cp/rsync it takes ages :D
20:32 bennyturns fraggeln, can you TAR things up first and send them  in a batch?
20:33 fraggeln yea, all pictures are divided into month-folders.
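A sketch of the batch approach bennyturns suggests: stream one month-folder as a tar over ssh straight onto the gluster mount, avoiding rsync's per-file round trips. Paths and hostnames here are assumptions:

    # Copy one month of images from the old NFS server to a host that has
    # the gluster volume mounted, unpacking directly onto the mount
    tar -C /old-nfs/images -cf - 2014-05 | \
        ssh webfront1 'tar -C /var/www/images -xf -'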
20:35 glusterbot New news from newglusterbugs: [Bug 1094860] Puppet-Gluster should support building btrfs bricks <https://bugzilla.redhat.com/show_bug.cgi?id=1094860>
20:37 JoeJulian Anything over MTU*~100 isn't going to see that much overhead from the lookup() self-heal check. If you're seeing too much overhead, my guess would be misses.
20:37 JoeJulian @lucky dht-misses are expensive
20:37 glusterbot JoeJulian: http://joejulian.name/blog/dht-misses-are-expensive/
20:38 * bennyturns will brb
20:51 fraggeln lol, my varnish kicked the webfront using glusterfs out :D
20:51 fraggeln it was too slow on the response
21:01 MacWinner joined #gluster
21:04 koobs1 joined #gluster
21:04 eshy joined #gluster
21:13 Ark joined #gluster
21:23 MacWinner does it look like 3.5.1 is almost ready for release?  Saw a bunch of patch notes today on BZ for the tracker bug
21:23 JoeJulian beta2 or 3 is out...
21:23 JoeJulian @qa
21:23 JoeJulian @beta
21:23 glusterbot JoeJulian: I do not know about 'beta', but I do know about these similar topics: 'beta-yum', 'yum-beta'
21:23 JoeJulian @yum-beta
21:23 glusterbot JoeJulian: The official community glusterfs packges for RHEL 6 (including CentOS, SL, etc.), Fedora 17-19 (i386, x86_64, arm, armhfp), and Pidora are available at http://goo.gl/LGV5s
21:24 JoeJulian wow... that's an old factoid...
21:24 JoeJulian but still accurate
21:24 sjm joined #gluster
21:26 JoeJulian Oh... MacWinner, "[09:11] <ndevos> thanks hchiramm_, I might do the 3.5.1 release later today, or tomorrow morning :)"
21:27 MacWinner thanks!
22:05 _polto_ joined #gluster
22:05 _polto_ joined #gluster
22:06 systemonkey joined #gluster
22:38 asku joined #gluster
22:45 sjm left #gluster
22:56 fidevo joined #gluster
23:30 SpeeR joined #gluster
23:32 SpeeR looking for some help, if anyone is around. We upgraded to gluster 3.5 this weekend, everything seemed to be working well, until one of the servers crashed today. now we are receiving input/output error on large files
23:45 JoeJulian SpeeR: What's the client log say wrt that file?
23:59 gildub joined #gluster
