IRC log for #gluster, 2014-03-25

All times shown according to UTC.

Time Nick Message
00:05 MugginsM joined #gluster
00:08 kam270 joined #gluster
00:09 aixsyd_ joined #gluster
00:09 lijiejun joined #gluster
00:09 aixsyd_ Wow guys... im shocked at how great my GlusterFS performance is
00:10 semiosis music to my ears
00:10 aixsyd_ ......once I upgraded to 10gig Infiniband
00:10 semiosis hahhaa
00:10 aixsyd_ Before: http://i.imgur.com/qVDxKaK.jpg
00:10 aixsyd_ after: i.imgur.com/ExF2EWe.png
00:10 aixsyd_ after: http://i.imgur.com/ExF2EWe.png
00:11 semiosis looks like 10x improvement
00:12 semiosis large sequential ops are the best case scenario
00:12 semiosis i suspect you might have seen even more dramatic improvement if you'd tested small random ops
00:13 semiosis nice work!
00:13 * semiosis afk
00:13 aixsyd_ Lemme test
00:13 semiosis leaving office now but i'll check in later & read scrollback
00:14 aixsyd_ rgr that
00:23 premera joined #gluster
00:26 aixsyd_ semiosis: not a ton of difference. maybe 50-100IOPs
00:26 kam270 joined #gluster
00:29 discretestates joined #gluster
00:35 coredum__ joined #gluster
00:35 kam270 joined #gluster
00:37 hybrid512 joined #gluster
00:38 lijiejun joined #gluster
00:38 pk joined #gluster
00:40 vpshastry1 joined #gluster
00:47 jmalm joined #gluster
00:54 badone joined #gluster
00:56 Alex aixsyd: I like to think that somehow upgrading to infiniband removed your ability to take a screenshot :p
00:56 NeatBasis joined #gluster
00:56 aixsyd Alex: It did.
00:56 aixsyd XD
01:03 chirino_m joined #gluster
01:03 nightwalk joined #gluster
01:09 vpshastry1 joined #gluster
01:16 jmarley__ joined #gluster
01:17 lijiejun joined #gluster
01:17 tdasilva joined #gluster
01:21 tdasilva left #gluster
01:24 bala joined #gluster
01:35 tokik joined #gluster
01:39 hflai joined #gluster
01:39 mattapperson joined #gluster
01:45 elico joined #gluster
01:50 lpabon joined #gluster
01:54 sputnik13 joined #gluster
01:54 harish joined #gluster
01:55 msciciel_ joined #gluster
01:59 MugginsM joined #gluster
02:01 crazifyngers joined #gluster
02:06 SJ1 left #gluster
02:06 qdk joined #gluster
02:11 nightwalk joined #gluster
02:14 pk left #gluster
02:15 cjanbanan joined #gluster
02:31 haomaiwa_ joined #gluster
02:32 jmalm joined #gluster
02:36 pk joined #gluster
02:36 pk left #gluster
02:37 raghug joined #gluster
02:39 chirino joined #gluster
02:53 B21956 joined #gluster
03:05 glusterbot New news from newglusterbugs: [Bug 1063190] [RHEV-RHS] Volume was not accessible after server side quorum was met <https://bugzilla.redhat.com/show_bug.cgi?id=1063190>
03:24 badone joined #gluster
03:26 shubhendu joined #gluster
03:27 sks joined #gluster
03:28 haomai___ joined #gluster
03:40 RameshN joined #gluster
03:41 kanagaraj joined #gluster
03:43 sahina joined #gluster
03:43 sputnik13 joined #gluster
03:49 badone_ joined #gluster
03:51 itisravi joined #gluster
04:01 ndarshan joined #gluster
04:02 saurabh joined #gluster
04:05 glusterbot New news from newglusterbugs: [Bug 1077452] Unable to setup/use non-root Geo-replication <https://bugzilla.redhat.com/show_bug.cgi?id=1077452>
04:13 discretestates joined #gluster
04:22 latha joined #gluster
04:24 ngoswami joined #gluster
04:24 bharata-rao joined #gluster
04:34 raghu joined #gluster
04:50 tokik joined #gluster
04:51 prasanth_ joined #gluster
04:52 sks joined #gluster
04:59 ppai joined #gluster
05:01 ravindran joined #gluster
05:04 cjanbanan joined #gluster
05:05 deepakcs joined #gluster
05:06 atinm joined #gluster
05:07 chirino_m joined #gluster
05:17 kdhananjay joined #gluster
05:17 nightwalk joined #gluster
05:20 nueces joined #gluster
05:29 bala1 joined #gluster
05:32 shylesh joined #gluster
05:34 hagarth joined #gluster
05:35 glusterbot New news from newglusterbugs: [Bug 1017993] gluster processes call call_bail() at high frequency resulting in high CPU utilization <https://bugzilla.redhat.com/show_bug.cgi?id=1017993>
05:44 vpshastry1 joined #gluster
05:45 tokik joined #gluster
05:47 rahulcs joined #gluster
05:48 lalatenduM joined #gluster
05:49 pk1 joined #gluster
05:55 tshefi joined #gluster
05:58 nshaikh joined #gluster
05:58 benjamin_____ joined #gluster
06:00 aravindavk joined #gluster
06:02 ravindran joined #gluster
06:08 spandit joined #gluster
06:09 KORG joined #gluster
06:12 lijiejun joined #gluster
06:13 Philambdo joined #gluster
06:17 psharma joined #gluster
06:22 vikumar joined #gluster
06:27 mohankumar joined #gluster
06:35 bala2 joined #gluster
06:36 FarbrorLeon joined #gluster
06:45 chirino joined #gluster
06:47 sahina joined #gluster
06:50 hagarth joined #gluster
06:51 kanagaraj joined #gluster
07:18 jtux joined #gluster
07:20 jtux joined #gluster
07:35 cjanbanan joined #gluster
07:39 kanagaraj joined #gluster
07:42 JonnyNomad joined #gluster
07:48 bala1 joined #gluster
07:56 Pavid7 joined #gluster
07:59 sahina joined #gluster
08:01 hagarth joined #gluster
08:04 eseyman joined #gluster
08:05 ctria joined #gluster
08:18 nightwalk joined #gluster
08:22 slayer192 joined #gluster
08:24 cjanbanan joined #gluster
08:35 fsimonce joined #gluster
08:41 chirino_m joined #gluster
08:49 dusmant joined #gluster
08:50 raghu` joined #gluster
08:58 andreask joined #gluster
09:02 kanagaraj joined #gluster
09:03 sticky_afk joined #gluster
09:04 stickyboy joined #gluster
09:06 ndarshan joined #gluster
09:06 tryggvil joined #gluster
09:06 sahina joined #gluster
09:07 shubhendu joined #gluster
09:09 bala1 joined #gluster
09:09 Pavid7 joined #gluster
09:10 ppai joined #gluster
09:10 ravindran joined #gluster
09:11 liquidat joined #gluster
09:11 chirino joined #gluster
09:11 tryggvil joined #gluster
09:11 X3NQ_ joined #gluster
09:12 RameshN joined #gluster
09:16 rjoseph joined #gluster
09:20 skryzhny joined #gluster
09:22 saravanakumar1 joined #gluster
09:26 monotek i got my problem with samba and gluster vfs only working on localhost solved. just missed an option. now its no problem to access external server.
09:26 monotek all needed configuration is now available in the ppa: https://launchpad.net/~monotek/+archive/samba-vfs-glusterfs
09:26 glusterbot Title: samba-vfs-glusterfs : André Bauer (at launchpad.net)
09:36 meghanam joined #gluster
09:40 jmarley__ joined #gluster
09:43 RameshN joined #gluster
09:47 rgustafs joined #gluster
10:00 qdk joined #gluster
10:06 skryzhny Hello All. I updated gluster from 3.1 to 3.3 and changed from distributed (2 bricks) to distributed-replicated (2x2 bricks)
10:07 skryzhny after almost day I see on new replica bricks only few Gigs.
10:08 skryzhny Do I need to run some command to start replication of all content?
10:08 skryzhny like rebalance or fix-layout?
10:09 skryzhny I have about 200 GB on old bricks, on new bricks (replicas) I see only about 2-3 Gigs
10:12 fraggeln Im no expert, but I think mirrors/replicas are only updated on write-access to the file.
10:13 fraggeln but there is a command to make it do that again
10:13 fraggeln volume heal <vol> full
10:14 fraggeln I think that will start the replication.
10:14 fraggeln but I'm not sure, im new to this as well.
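Spelled out, fraggeln's suggestion looks like the following (a sketch: "myvol" is a placeholder volume name, and `heal ... full` requires the self-heal daemon introduced in GlusterFS 3.3):

```shell
# Trigger a full self-heal so newly added replica bricks get populated
# with existing data, rather than waiting for files to be touched.
gluster volume heal myvol full

# Check what is still pending; the entry count should shrink over time.
gluster volume heal myvol info
```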
10:22 Alpinist joined #gluster
10:22 Alpinist joined #gluster
10:22 andreask joined #gluster
10:24 tryggvil joined #gluster
10:25 Slash joined #gluster
10:27 gdubreui joined #gluster
10:30 Slash joined #gluster
10:41 jbustos joined #gluster
10:44 abyss^ fraggeln: volume rebalance would help you: gluster volume rebalance volname start - but be aware that command take a lot of CPU
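abyss^'s rebalance suggestion, spelled out (again with a placeholder volume name; note that rebalance redistributes files across the distribute layout and, as warned above, is CPU-intensive):

```shell
# Redistribute existing files across the new brick layout.
gluster volume rebalance myvol start

# Poll progress per node.
gluster volume rebalance myvol status
```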
10:48 ctria joined #gluster
10:49 fraggeln abyss^: I dont need help mate ;)
10:50 shubhendu joined #gluster
10:51 RameshN joined #gluster
10:52 kanagaraj joined #gluster
10:52 ndarshan joined #gluster
10:52 sahina joined #gluster
10:53 bala1 joined #gluster
10:57 abyss^ fraggeln: my bad;) it was to skryzhny ;)
11:03 ppai joined #gluster
11:07 Pavid7 joined #gluster
11:14 lpabon joined #gluster
11:16 andreask joined #gluster
11:19 pk1 joined #gluster
11:23 mattappe_ joined #gluster
11:23 Pavid7 joined #gluster
11:25 21WAAJCHV joined #gluster
11:26 ctria joined #gluster
11:27 aravindavk joined #gluster
11:29 nightwalk joined #gluster
11:29 B21956 joined #gluster
11:41 edward1 joined #gluster
11:44 chirino_m joined #gluster
11:46 Pavid7 joined #gluster
11:47 sas joined #gluster
11:48 sas lalatenduM, ping
11:48 sas lalatenduM, how to join the hangout ? any link available ?
11:49 hagarth sas: I think johnmark will send an invite
11:49 sas hagarth, thanks for that information
11:51 diegows joined #gluster
11:53 andreask joined #gluster
11:56 sasundar joined #gluster
11:59 sasundar johnmark, do we have a hangout link ?
12:05 sprachgenerator joined #gluster
12:06 lalatenduM sasundar, sorry was in another meeting
12:06 sasundar sasundar, ok
12:17 DR_D12525252 left #gluster
12:18 itisravi_ joined #gluster
12:19 dusmant joined #gluster
12:19 deepakcs joined #gluster
12:20 robo joined #gluster
12:33 Alpinist joined #gluster
12:33 tdasilva joined #gluster
12:36 jag3773 joined #gluster
12:39 Chr1st1an joined #gluster
12:40 rwheeler joined #gluster
12:41 kkeithley1 joined #gluster
12:41 Chr1st1an Hello
12:41 glusterbot Chr1st1an: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
12:42 Chr1st1an Anyone got any tips in regards to a gluster volume rebalance issue where some of the nodes fails due to not being able to connect to all the bricks
12:43 Chr1st1an Got a 10node with 2 bricks on each and when trying to start rebalance of the volume it fails on 4 of them and the weird part is that its random what node fails
12:44 Chr1st1an So if i stop the volume and then start it again its 4 other nodes that fails , but looking at the gluster v status all the bricks are up and running
12:44 Chr1st1an on all nodes
12:46 rwheeler joined #gluster
12:48 lijiejun joined #gluster
12:53 vcauw_ joined #gluster
12:54 bene joined #gluster
12:54 warci joined #gluster
12:56 jag3773 joined #gluster
12:57 warci hello all,
12:58 warci i'm having some trouble with the nfs side of gluster
12:58 Bardack joined #gluster
12:58 warci when i open a share on windows using the nfs client, i can access the share just fine
12:59 Alpinist joined #gluster
12:59 warci and i can open some folders, but some others i can't
12:59 warci what's more, the folders i can access i can see the nfs properties and the ones i can't don't have an nfs tab
13:00 warci on linux i see similar weird things, some directories are: d?????????   ? ?   ?       ?            ? <folder name>
13:05 theron joined #gluster
13:08 Pavid7 joined #gluster
13:10 warci and this is completely random... sometimes it's one directory, sometimes it's another one
13:12 vpshastry2 joined #gluster
13:13 primechuck joined #gluster
13:15 chirino joined #gluster
13:18 coredump_ joined #gluster
13:20 hagarth joined #gluster
13:24 tryggvil joined #gluster
13:25 sroy_ joined #gluster
13:28 jtux joined #gluster
13:28 benjamin_____ joined #gluster
13:29 CORP\konvalyuk joined #gluster
13:30 CORP\konvalyuk left #gluster
13:31 CORP\konvalyuk joined #gluster
13:33 deepakcs joined #gluster
13:34 ArtRet joined #gluster
13:37 rahulcs joined #gluster
13:40 ravindran1 joined #gluster
13:40 latha joined #gluster
13:47 ninkotech joined #gluster
13:51 sasundar joined #gluster
13:56 rahulcs joined #gluster
13:56 Pavid7 joined #gluster
14:00 JPeezy joined #gluster
14:08 vpshastry2 joined #gluster
14:09 jclift warci: There doesn't seem to be many people around atm.  It'll probably be better to email the gluster-users mailing list about it, mentioning which server OS you're using and which Gluster version too. :)
14:09 jclift Chr1st1an: Same thought for you. :) ^^^
14:09 * jclift heads off to meeting
14:10 sputnik13 joined #gluster
14:12 robo joined #gluster
14:13 ninkotech_ joined #gluster
14:16 ninkotech_ joined #gluster
14:16 rahulcs joined #gluster
14:20 rpowell joined #gluster
14:24 seapasulli joined #gluster
14:25 seapasulli joined #gluster
14:28 jmalm joined #gluster
14:34 sprachgenerator joined #gluster
14:36 lalatenduM @mailinglists | warci
14:37 robo joined #gluster
14:38 harish_ joined #gluster
14:43 ctria joined #gluster
14:45 lijiejun joined #gluster
14:46 nightwalk joined #gluster
14:46 chirino_m joined #gluster
14:58 Pavid7 joined #gluster
15:05 jobewan joined #gluster
15:07 dbruhn joined #gluster
15:10 daMaestro joined #gluster
15:16 msciciel_ joined #gluster
15:18 benjamin_____ joined #gluster
15:19 lijiejun joined #gluster
15:21 jag3773 joined #gluster
15:24 thigdon joined #gluster
15:25 JPeezy joined #gluster
15:27 JPeezy joined #gluster
15:27 sputnik13 joined #gluster
15:29 sputnik13 joined #gluster
15:35 lijiejun joined #gluster
15:38 ndk joined #gluster
15:40 jmarley__ joined #gluster
15:43 ctria joined #gluster
15:47 JonnyNomad joined #gluster
15:50 coredump_ joined #gluster
15:56 lijiejun joined #gluster
15:57 discretestates joined #gluster
16:00 theron joined #gluster
16:08 jmarley joined #gluster
16:10 Mo__ joined #gluster
16:13 lmickh joined #gluster
16:14 theron joined #gluster
16:17 lijiejun joined #gluster
16:23 mohankumar joined #gluster
16:31 bennyturns joined #gluster
16:32 raghug joined #gluster
16:35 cjanbanan joined #gluster
16:39 zerick joined #gluster
16:43 Matthaeus joined #gluster
16:46 kaptk2 joined #gluster
16:49 zerick joined #gluster
16:55 Matthaeus joined #gluster
16:55 hagarth joined #gluster
17:04 lijiejun joined #gluster
17:05 sroy_ joined #gluster
17:16 lijiejun joined #gluster
17:17 robo joined #gluster
17:19 discretestates joined #gluster
17:23 robo joined #gluster
17:27 B21956 joined #gluster
17:32 Matthaeus joined #gluster
17:35 cjanbanan joined #gluster
17:40 slayer192 joined #gluster
17:43 primechuck joined #gluster
17:43 vpshastry1 joined #gluster
17:46 nightwalk joined #gluster
17:47 vpshastry1 left #gluster
17:48 robo joined #gluster
17:50 theron joined #gluster
17:53 rahulcs joined #gluster
17:59 robo joined #gluster
18:03 sas joined #gluster
18:04 lpabon joined #gluster
18:04 _dist joined #gluster
18:05 sroy_ joined #gluster
18:16 sputnik13 joined #gluster
18:17 sputnik13 joined #gluster
18:17 zaitcev joined #gluster
18:17 sputnik13 joined #gluster
18:18 Isaacabo joined #gluster
18:18 Isaacabo Hello guys
18:18 Isaacabo how all doing>
18:18 Isaacabo ?
18:18 sputnik13 joined #gluster
18:19 sputnik13 joined #gluster
18:20 Isaacabo guys, im seeing this error in my log
18:20 Isaacabo http://ur1.ca/gwzor
18:20 glusterbot Title: #88543 Fedora Project Pastebin (at ur1.ca)
18:20 Isaacabo did anyone recognized it?
18:20 sputnik13 joined #gluster
18:24 lijiejun joined #gluster
18:24 Isaacabo thats on server side
18:25 edward1 joined #gluster
18:25 cjanbanan joined #gluster
18:26 msvbhat Isaacabo: Those are warning messages. Do you see any functionality issues because of it?
18:28 Isaacabo with rsync
18:28 Isaacabo we run it once its ok, the a second time says there is no files
18:28 Isaacabo and this is the log from client
18:29 Isaacabo http://ur1.ca/gwzr4
18:29 glusterbot Title: #88546 Fedora Project Pastebin (at ur1.ca)
18:29 Isaacabo says layout is missing
18:35 cjanbanan joined #gluster
18:41 robo joined #gluster
18:44 joshin joined #gluster
18:44 joshin joined #gluster
18:46 msvbhat Isaacabo: Which mount are you using?
18:47 msvbhat Isaacabo: So you rsync the data from client to somewhere and after rsync is done, data is not availabe in client?
18:47 lmickh joined #gluster
18:50 Isaacabo sorry, for the delay
18:51 Isaacabo some data it is, rsync says that the file has vanished
18:51 Isaacabo but is on the gluster
18:52 Isaacabo but you ran the rsync again, will send it
18:52 Isaacabo we using the gluster native mount
18:53 rpowell left #gluster
18:56 theron joined #gluster
18:58 edward1 joined #gluster
19:00 gdubreui joined #gluster
19:02 jag3773 joined #gluster
19:05 cjanbanan joined #gluster
19:08 glusterbot New news from newglusterbugs: [Bug 1070539] Very slow Samba Directory Listing when many files or sub-direct​ories <https://bugzilla.redhat.com/show_bug.cgi?id=1070539>
19:13 Pavid7 joined #gluster
19:16 edward1 joined #gluster
19:21 lalatenduM joined #gluster
19:21 sputnik13 joined #gluster
19:21 lijiejun joined #gluster
19:24 ndevos thigdon: thanks for the email, I've setup a new environment for testing (fedora 20 i686 server, x86_64 client) and will try to debug it a little
19:24 thigdon ndevos: i've looked at it a bit further myself
19:24 ndevos thigdon: cool, found anything?
19:24 thigdon i think it may have only to do with the bittedness of the server
19:25 thigdon i'm not completely sure what is going on though
19:26 ndevos thigdon: my thinking was that multiple 'struct dirent' on a 32-bit arch can get packed closer together than on a 64-bit arch
19:26 thigdon where does this packing take place?
19:27 ndevos thigdon: that results in the 32-bit arch sending more dirent structs over the wire, than that the 64-bit client is expecting to write on one PAGESIZE block to /dev/fuse
19:27 ndevos well, something in that direction anyway
19:27 monotek is there any way to have another formating for "gluster volume status"?
19:27 monotek just trying to create new icinga check and get in problems because of linebrakes....
19:27 thigdon i think that's true.. but it seems that using multiple iovs in writev doesn't allow you to exceed PAGESIZE
19:28 lalatenduM monotek, I think there is -o xml for cli
19:28 lalatenduM ndevos, ping
19:29 ndevos thigdon: the 'packing' is simply done by a readdir() on the filesystem, the aligning of the struct dirent is done there, and the calculation of the size too
19:29 thigdon it is true that if you prevent the server from sending an extra dirent (by increasing the amount of padding it wants in posix_fill_readdir), that things start working
19:29 ndevos lalatenduM: yo!
19:29 andreask joined #gluster
19:29 andreask joined #gluster
19:29 semiosis --xml
19:29 lalatenduM ndevos, hey
19:29 ndevos thigdon: right, that is good to know, at least we're understaning the problem :)
19:29 semiosis works for most commands, but iirc not all
19:30 ndevos thigdon: but, I do not like to limit the size on the server-side, thats just wrong
19:30 mattappe_ joined #gluster
19:30 lalatenduM ndevos, I have a question on libgfapi,
19:30 lalatenduM semiosis, yup, that one :)
19:31 lalatenduM ndevos, the fop calls from libgfapi goes directly to sub volumes right?
19:31 ndevos lalatenduM: yes
19:31 lalatenduM ndevos, there should not be any glusterd involvement
19:32 lalatenduM ndevos, cool thanks
19:32 failshell joined #gluster
19:32 thigdon the server seems to be trying to limit to PAGESIZE already
19:32 mattapp__ joined #gluster
19:32 ndevos lalatenduM: well, you need to get the .vol file from glusterd (or pass the filename as servername, or something like that)
19:32 thigdon but when the response comes to the client, it unpacks it into a > PAGESIZE buffer
19:33 ndevos thigdon: yes, that is my understanding too - and limiting on PAGESIZE server-side does not make a lot of sense to me
19:33 lalatenduM ndevos, hmm
19:34 monotek semiosis: thanks :-)
19:34 lalatenduM ndevos, similar to fuse mount , I mean once the call reaches gluster client process
19:34 monotek ugly but does what i need....
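For a monitoring check like monotek's, the `--xml` output semiosis mentions can be post-processed instead of scraping the line-wrapped human-readable table. A sketch, assuming libxml2's `xmllint` is installed; the element names follow the 3.4-era CLI XML schema and may differ between versions:

```shell
# Dump machine-readable status once, then query it with XPath.
gluster volume status all --xml > /tmp/gvstatus.xml

# Extract per-brick online flags (1 = up) for an icinga/nagios check.
xmllint --xpath '//node/status/text()' /tmp/gvstatus.xml
```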
19:35 ndevos lalatenduM: yes, glusterfs-fuse is a client, acting very simlar to any libgfapi clients
19:35 lalatenduM ndevos, cool, thanks :)
19:36 ndevos lalatenduM: you could actually rewrite glusterfs-fuse by using libgfapi
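The flow ndevos describes — fetch the volfile from glusterd once at init, then send fops straight to the bricks — looks roughly like this with libgfapi. This is a sketch, assuming the glusterfs-api development headers are installed; the volume name "testvol", host "server1", and file path are placeholders:

```c
/* Minimal libgfapi client sketch. glusterd on server1:24007 is only
 * contacted for the volfile during glfs_init(); subsequent file
 * operations go directly to the brick processes. */
#include <glusterfs/api/glfs.h>
#include <fcntl.h>
#include <string.h>

int main(void)
{
    glfs_t *fs = glfs_new("testvol");
    if (!fs)
        return 1;

    glfs_set_volfile_server(fs, "tcp", "server1", 24007);
    if (glfs_init(fs) != 0) {        /* fetches and parses the volfile */
        glfs_fini(fs);
        return 1;
    }

    glfs_fd_t *fd = glfs_creat(fs, "/hello.txt", O_RDWR, 0644);
    if (fd) {
        const char msg[] = "written via libgfapi\n";
        glfs_write(fd, msg, strlen(msg), 0);
        glfs_close(fd);
    }

    glfs_fini(fs);
    return 0;
}
```

Compile with something like `cc hello.c -lgfapi` against a live cluster; there is no glusterd round-trip per fop, which is the point of the exchange above.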
19:36 ndevos lalatenduM: you're very welcome to provide patches for my latest side toy: https://forge.gluster.org/mod_proxy_gluster
19:36 glusterbot Title: Apache Module mod_proxy_gluster - Gluster Community Forge (at forge.gluster.org)
19:36 lalatenduM ndevos, yup , I think thats the approach Ceph used and provided a kernel client
19:38 ndevos lalatenduM: I dont really know how the ceph protocol works, but for glusterfs you would need to implement many xlators kernel-side :-/
19:39 ndevos lalatenduM: I think it's possible to write a minimal glusterfs kernel-module, but it will not be as flexible as the xlator stack on libgfapi or glusterfs-fuse
19:39 semiosis the return of the booster
19:39 kkeithley_mtg ndevos: what if on a 32-bit client we limit the number of dirents to read and send to the clients? I.e. to whatever the max can be for 64-bit client.
19:39 lalatenduM ndevos, yeah agree
19:39 kkeithley_mtg max dirents that will fit in PAGESIZE
19:40 kkeithley_mtg Is that what you've already tried?
19:40 ndevos kkeithley_mtg: that is an option, but I still don't like it - the client should be able to split the dirents and do multiple writes
19:41 ndevos kkeithley_mtg: the client could request a max-number of dirents from the server, it now sends the max size in bytes of the reply
19:41 kkeithley_mtg indeed. But is it worth the extra complexity? For a 32-bit server?
19:42 robo joined #gluster
19:42 kkeithley_mtg just asking
19:42 ndevos PAGESIZE may not always be 4096 bytes, it should be more flexible
19:43 lalatenduM ndevos, mod_proxy_gluster looks a place where I can play with libgfapi, however more information about how to create a test setup would be helpful
19:43 ndevos it is possible to make a quick fix for it, but if you can send > PAGESIZE, the network gets optimized a little more
19:45 ndevos lalatenduM: 1. install httpd, 2. get a .repo file from http://copr.fedoraproject.org/coprs/devos/gluster-addons/, 3. read the README, 4. replace the .so and restart httpd
19:45 ndevos lalatenduM: ah, 2b, 'yum install mod_proxy_gluster'
19:46 lalatenduM ndevos, cool! should I add it to the wiki page? of mod_proxy_gluster
19:46 ndevos lalatenduM: I wrote it mostly to try out libgfapi too :D
19:46 lalatenduM ndevos, awesome !
19:46 ndevos lalatenduM: which wiki? that's where I'm stuck, see #gluster-dev from earlier
19:47 mattappe_ joined #gluster
19:47 ndevos lalatenduM: oh, you were missing, see https://botbot.me/freenode/gluster-dev/
19:47 glusterbot Title: Logs for #gluster-dev | BotBot.me [o__o] (at botbot.me)
19:48 semiosis ndevos: you can create whatever page you want on the gluster.org wiki for your project
19:48 theron joined #gluster
19:49 semiosis ndevos: fwiw, i'm pushing code to both github & the forge (when I remember to) for the ,,(java) projects
19:49 glusterbot ndevos: https://github.com/semiosis/libgfapi-jni & https://github.com/semiosis/glusterfs-java-filesystem
19:49 semiosis gitorious doesnt do much for docs
19:50 ndevos semiosis: I really dont want to go the 2-repo route
19:50 semiosis ndevos: i really do want to switch to gitlab for the forge
19:50 ndevos semiosis: so, you'd prefer the gluster.org (non-forge) wiki for the docs?
19:50 lijiejun joined #gluster
19:51 semiosis will there be a community meeting tomorrow?  i'll bring up the gitorious issue
19:51 ndevos I expect there will be one
19:51 * ndevos isnt working this week, so he may, or may not, join
19:51 kkeithley_mtg yes, community meeting is at 11:00 AM EDT
19:51 semiosis can i just add an item to the titanpad document?
19:51 semiosis to bring up tmrw
19:52 kkeithley_mtg you can (and you may)
19:52 semiosis ha, thanks1
19:52 semiosis !
19:52 lalatenduM ndevos, I understand you concern on forge wiki
19:52 nightwalk joined #gluster
19:53 ndevos lalatenduM: the wiki is difficult to work with (I could not get images displayed), and info gets even more scattered over many many wikis
19:54 lalatenduM ndevos, markdown format should work with images, not sure why it is not working
19:54 lalatenduM ndevos, I think forge is for incubating projects, so till the project is incubating I think we can wiki, but once ready to push to mainline , we can put the markdown document with the code
19:55 lalatenduM ndevos, I agree with you though , on the issues of scattered docs
19:55 lalatenduM ndevos, I have updated https://forge.gluster.org/mod_proxy_gluster/pages/Home :)
19:55 glusterbot Title: Apache Module mod_proxy_gluster - Home - Open wiki - Gluster Community Forge (at forge.gluster.org)
19:56 ndevos lalatenduM: hehe
19:57 ndevos lalatenduM: I think you just made the desicion on what wiki to use :)
19:59 lalatenduM ndevos, haha, when working on a forge project, the nearest wiki should have information abt the project , it is just the case of convenience
19:59 ndevos lalatenduM: depends, users may search the gluster.org/community wiki...
20:00 ndevos lalatenduM: but anyway, I've now added a note to http://www.gluster.org/community/documentation/index.php/Getting_modglusterfs_to_work
20:00 glusterbot Title: Getting modglusterfs to work - GlusterDocumentation (at www.gluster.org)
20:00 siel joined #gluster
20:00 elico joined #gluster
20:01 ndevos mod_glusterfs seems to have existed at one point, I didnt know about it until I told johnmark what I was going to push to the forge...
20:01 semiosis ndevos: abandoned long ago
20:02 lalatenduM ndevos, nice, ideally there should be aome automated way to keep project wikis in forge and gluster.org wiki insync
20:02 ndevos semiosis: yeah, I noticed, I picked the commits that remove it, and linked to the last code
20:02 thigdon ndevos: i have a patch that does multiple writes on the client side and works for me for the problem we've been discussing
20:03 ndevos thigdon: awesome, can you ,,(paste) it?
20:03 glusterbot thigdon: For RPM based distros you can yum install fpaste, for debian and ubuntu it's pastebinit. Then you can easily pipe command output to [f] paste [binit] and it'll give you a URL.
20:03 thigdon sure give me a sec
20:04 ndevos I just finished a compile and test-run with added error messages: failed to write iovec (len=8304/count=3): Invalid argument
20:06 ravindran1 joined #gluster
20:07 ndevos semiosis: do you know if there is any interest in a mod_proxy_gluster, or was it dropped because nobody used it?
20:08 * ndevos actually wanted to write a module for nginx, but httpd has many more examples...
20:08 semiosis have to ask avati or hagarth why, it was history before I got involved with gluster around v3.1
20:08 semiosis JoeJulian might know
20:09 ndevos okay, thanks
20:09 siel joined #gluster
20:15 ravindran1 joined #gluster
20:15 cjanbanan joined #gluster
20:17 siel joined #gluster
20:22 sputnik13 joined #gluster
20:24 thigdon ndevos: http://paste.ubuntu.com/7153043/
20:24 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
20:25 ndevos thigdon: is there a reason why you changed that for the readdir-fop, and not in send_fuse_iov()
20:26 thigdon a single writev of greater than PAGESIZE fails
20:26 siel joined #gluster
20:27 ndevos thigdon: yes, but you can send N iovec in one writev(), and do other writev() calls with the same iovec starting at N+1
20:27 ndevos at least, that is what I am trying to do...
20:27 thigdon sure, but you'd have to push up the foh header, etc.
20:27 thigdon this makes for a simpler patch than doing all that would be i think
20:28 lijiejun joined #gluster
20:28 ndevos maybe, but I would love to remove the limit on PAGESIZE writes in the whole glusterfs-fuse
20:28 thigdon oh i see
20:29 thigdon right, my patch would cover only readdir
20:30 discretestates joined #gluster
20:30 ndevos yes, and that will work just fine too - I'm sure there are other pieces in need for removing PAGESIZE writes everywhere
20:31 thigdon ndevos: i agree
20:31 thigdon i think your approach is better
20:31 cjanbanan joined #gluster
20:31 ndevos thigdon: better is relative, your patch probably fixes that bug too :D
20:32 ndevos mine is not ready yet, still thinking about an elegant way to split those iovecs
20:39 siel joined #gluster
20:41 tdasilva left #gluster
20:41 robo joined #gluster
20:50 lijiejun joined #gluster
20:51 siel joined #gluster
21:01 siel joined #gluster
21:12 btreeinfinity joined #gluster
21:13 robo joined #gluster
21:14 robo joined #gluster
21:16 cjanbanan joined #gluster
21:26 robo joined #gluster
21:28 lijiejun joined #gluster
21:32 thigdon ndevos: I've unfortunately exceeded the amount of time i'd allotted myself to look at this problem. i'm going to proceed ahead with my patch on my side. i've added myself to the cc list for the bug.. i would probably be able to help test out any patch you end up developing. thanks for all the help.
21:33 ndevos thigdon: sure, it would be nice if you can attach your patch to the bug, just in case someone else needs a fix urgently
21:34 thigdon ndevos: sure thing
21:34 ndevos thanks!
21:42 yosafbridge joined #gluster
21:43 discretestates joined #gluster
21:45 kshlm joined #gluster
21:45 kshlm joined #gluster
21:45 kshlm joined #gluster
21:47 cyberbootje joined #gluster
21:47 al joined #gluster
21:48 siel joined #gluster
21:50 Pavid7 joined #gluster
21:50 efries joined #gluster
21:55 tryggvil joined #gluster
22:01 siel joined #gluster
22:01 siel joined #gluster
22:02 theron joined #gluster
22:05 kam270_ joined #gluster
22:06 lijiejun joined #gluster
22:09 cjanbanan joined #gluster
22:10 kshlm joined #gluster
22:13 coredum__ joined #gluster
22:15 kam270_ joined #gluster
22:16 lijiejun joined #gluster
22:21 sputnik13 joined #gluster
22:33 chirino joined #gluster
22:33 nightwalk joined #gluster
22:40 qdk joined #gluster
22:49 joshin What's the current preferred underlying filesystem for GlusterFS? XFS or Ext4?
22:50 semiosis xfs
22:51 semiosis afk
22:51 cyberbootje joined #gluster
22:51 lijiejun joined #gluster
22:53 fidevo joined #gluster
22:56 kam270_ joined #gluster
23:02 micu1 joined #gluster
23:02 joshin Thanks semiosis.
23:03 jmalm joined #gluster
23:14 lijiejun joined #gluster
23:17 kam270_ joined #gluster
23:20 rwheeler joined #gluster
23:20 thigdon joined #gluster
23:21 cjanbanan joined #gluster
23:28 kam270_ joined #gluster
23:29 primechuck joined #gluster
23:31 jag3773 joined #gluster
23:32 lijiejun joined #gluster
23:39 glusterbot New news from newglusterbugs: [Bug 847842] [FEAT] Active-Active geo-replication <https://bugzilla.redhat.com/show_bug.cgi?id=847842> || [Bug 786007] [c3aa99d907591f72b6302287b9b8899514fb52f1]: server crashed when dict_set for truster.afr.vol-client when compiled with efence <https://bugzilla.redhat.com/show_bug.cgi?id=786007> || [Bug 854162] Feature - Support >64 node clusters in Hadoop plugin <https://bugzilla.redhat.com/
23:45 diegows joined #gluster
23:47 kam270_ joined #gluster
23:47 coredump joined #gluster