
IRC log for #gluster, 2013-05-13


All times shown according to UTC.

Time Nick Message
00:25 RicardoSSP joined #gluster
00:25 RicardoSSP joined #gluster
00:44 Shdwdrgn joined #gluster
01:01 kevein joined #gluster
01:04 yinyin joined #gluster
01:24 kevein joined #gluster
01:53 nickw joined #gluster
02:23 rkbstr_wo joined #gluster
02:27 lalatenduM joined #gluster
02:28 dmojoryder joined #gluster
02:33 vpshastry joined #gluster
02:36 satheesh joined #gluster
02:36 satheesh1 joined #gluster
02:37 yinyin joined #gluster
02:47 vpshastry joined #gluster
02:47 dmojoryder joined #gluster
02:47 badone joined #gluster
02:47 eryc joined #gluster
02:52 vpshastry joined #gluster
02:52 dmojoryder joined #gluster
02:52 badone joined #gluster
02:52 eryc joined #gluster
02:56 wgao joined #gluster
02:56 lkthomas hmm
02:57 lkthomas I change the IO scheduler from deadline to CFQ, data traffic improve a lot
02:58 wgao hi, glad to hear you.
03:03 wgao hi thomas, here I created volume with two targets, they were have same gluster service, but caught an issue, prompt "gluster volume create gv0 replica 2 128.224.158.227:/export/brick1 128.224.158.233:/export/brick1
03:03 wgao Brick: 128.224.158.227:/export/brick1, 128.224.158.233:/export/brick1 one of the bricks contain the other"
03:03 glusterbot wgao: Check that your peers have different UUIDs ('gluster peer status' on both). The uuid is saved in /var/lib/glusterfs/glusterd.info - that file should not exist before starting glusterd the first time. It's a common issue when servers are cloned. You can delete the /var/lib/glusterfs/peers/<uuid> file and /var/lib/glusterfs/glusterd.info, restart glusterd and peer-probe again.
03:04 lkthomas wgao: sorry, currently I use gluster on local disk replication
03:04 wgao can you help me resolve it?
03:04 * lkthomas don't have enough knowledge to do this
03:04 wgao Ohh
03:06 wgao glusterbot has answered me , thanks a lot.
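A minimal sketch of the recovery glusterbot describes, using the two addresses from wgao's paste. The state directory is /var/lib/glusterd on most 3.3 packages even though the factoid says /var/lib/glusterfs, so check which one exists before deleting anything.

    # on the cloned server (here assumed to be 128.224.158.233):
    gluster peer status                      # both servers reporting the same UUID confirms the clone problem
    service glusterd stop
    rm /var/lib/glusterd/glusterd.info       # a fresh UUID is generated on the next start
    rm /var/lib/glusterd/peers/<uuid>        # stale peer entry, if one exists
    service glusterd start
    # then, from 128.224.158.227, re-probe the cleaned server:
    gluster peer probe 128.224.158.233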
03:08 mohankumar joined #gluster
03:13 jag3773 joined #gluster
03:19 bharata joined #gluster
03:19 sjoeboo joined #gluster
03:20 Shdwdrgn joined #gluster
03:27 wgao hi, what's wrong here, logs show '' gluster volume create gv0 stripe 2 128.224.158.227:/export/brick1 128.224.158.233:/export/brick1
03:28 wgao /export/brick1 or a prefix of it is already part of a volume
03:28 glusterbot wgao: To clear that error, follow the instructions at http://goo.gl/YUzrh or see this bug http://goo.gl/YZi8Y
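For 3.3 the usual fix behind that link is clearing the extended attributes the previous volume left on the brick; a sketch for wgao's path, assuming the old data on it is disposable:

    # only do this if the old volume's data on this brick is no longer needed
    setfattr -x trusted.glusterfs.volume-id /export/brick1
    setfattr -x trusted.gfid /export/brick1
    rm -rf /export/brick1/.glusterfs
    service glusterd restart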
03:38 Skunnyk joined #gluster
03:47 jules_ joined #gluster
03:53 vpshastry joined #gluster
03:55 sgowda joined #gluster
04:05 aravindavk joined #gluster
04:10 shylesh joined #gluster
04:24 sjoeboo joined #gluster
04:25 yinyin joined #gluster
04:30 Susant joined #gluster
04:33 hagarth joined #gluster
04:48 vpshastry joined #gluster
04:49 sjoeboo joined #gluster
05:07 Susant left #gluster
05:14 sjoeboo joined #gluster
05:16 lalatenduM joined #gluster
05:19 bulde joined #gluster
05:22 deepakcs joined #gluster
05:32 yinyin joined #gluster
05:34 bala1 joined #gluster
05:44 rastar joined #gluster
05:49 rgustafs joined #gluster
05:55 mohankumar joined #gluster
06:02 vigia joined #gluster
06:14 guigui1 joined #gluster
06:16 jtux joined #gluster
06:16 saurabh joined #gluster
06:16 ricky-ticky joined #gluster
06:24 sjoeboo joined #gluster
06:33 bulde joined #gluster
06:42 mtanner_w joined #gluster
06:44 mtanner_ joined #gluster
06:48 redbeard joined #gluster
06:50 majeff joined #gluster
06:57 ekuric joined #gluster
07:06 ctria joined #gluster
07:18 ollivera joined #gluster
07:21 hybrid512 joined #gluster
07:22 vimal joined #gluster
07:24 raghu joined #gluster
07:27 StarBeast joined #gluster
07:30 tjikkun_work joined #gluster
07:35 ngoswami joined #gluster
07:38 hybrid512 joined #gluster
07:47 bulde joined #gluster
07:50 andreask joined #gluster
08:04 majeff joined #gluster
08:05 dxd828 joined #gluster
08:09 Rocky__ joined #gluster
08:23 glusterbot` joined #gluster
08:23 Rorik_ joined #gluster
08:24 twx joined #gluster
08:24 abyss^___ joined #gluster
08:24 m0zes_ joined #gluster
08:24 kbsingh joined #gluster
08:24 neofob joined #gluster
08:25 manik joined #gluster
08:25 Ramereth|home joined #gluster
08:25 hybrid5121 joined #gluster
08:26 GLHMarmot joined #gluster
08:26 purpleid1a joined #gluster
08:26 errstr joined #gluster
08:26 phix_ joined #gluster
08:26 haakon_ joined #gluster
08:26 jcastle_ joined #gluster
08:26 Azrael left #gluster
08:27 mkonecny joined #gluster
08:27 thomasle_ joined #gluster
08:27 helloadam joined #gluster
08:28 lkoranda joined #gluster
08:28 FyreFoX_ joined #gluster
08:32 atrius joined #gluster
08:36 satheesh2 joined #gluster
08:37 atrius joined #gluster
08:37 satheesh1 joined #gluster
08:40 dxd828 joined #gluster
08:42 dxd828 joined #gluster
08:43 dxd828 joined #gluster
08:56 majeff joined #gluster
09:03 StarBeast joined #gluster
09:20 duerF joined #gluster
09:21 vrturbo joined #gluster
09:26 lh joined #gluster
09:26 lh joined #gluster
09:27 glusterbot New news from newglusterbugs: [Bug 962350] leak in entrylk <http://goo.gl/9Y3JQ> || [Bug 928575] Error Entry in the log when gluster volume heal on newly created volumes <http://goo.gl/KXsmD>
09:28 satheesh joined #gluster
09:28 majeff joined #gluster
09:35 bulde joined #gluster
09:54 harish joined #gluster
09:57 glusterbot New news from newglusterbugs: [Bug 962362] Perform NULL check on op_errstr before dereferencing it <http://goo.gl/Ew7H9>
10:16 satheesh1 joined #gluster
10:18 majeff joined #gluster
10:21 rgustafs joined #gluster
10:35 andrei_ joined #gluster
10:41 StarBeast joined #gluster
10:45 edward1 joined #gluster
10:56 bulde joined #gluster
10:57 jtux joined #gluster
10:57 y4m4 joined #gluster
10:59 gbrand_ joined #gluster
11:06 gbrand__ joined #gluster
11:08 kbsingh_ joined #gluster
11:08 balunasj joined #gluster
11:08 FyreFoX joined #gluster
11:10 wgao_ joined #gluster
11:10 rastar1 joined #gluster
11:10 guigui joined #gluster
11:11 bala joined #gluster
11:12 tziOm joined #gluster
11:12 tziOm I am having trouble with accessing my gluster volume from client using nfs
11:12 tziOm nfs client just hangs forever
11:12 tziOm kernel nfs server (module) is unloaded
11:13 redbeard joined #gluster
11:13 tziOm GlusterFS 3.3.1
11:13 dmojoryder joined #gluster
11:16 helloadam joined #gluster
11:24 glusterbot joined #gluster
11:25 rgustafs joined #gluster
11:25 dmojoryder joined #gluster
11:25 duerF joined #gluster
11:25 dxd828 joined #gluster
11:25 m0zes_ joined #gluster
11:25 vigia joined #gluster
11:25 hagarth joined #gluster
11:25 aravindavk joined #gluster
11:25 badone joined #gluster
11:25 eryc joined #gluster
11:27 rgustafs joined #gluster
11:29 deepakcs joined #gluster
11:30 jbrooks joined #gluster
11:32 yinyin_ joined #gluster
11:35 yinyin- joined #gluster
11:39 rastar1 joined #gluster
11:39 bala joined #gluster
11:39 nicolasw joined #gluster
11:39 yinyin_ joined #gluster
11:40 16SABGBGN joined #gluster
11:43 andrei_ joined #gluster
11:48 andreask joined #gluster
11:48 flrichar joined #gluster
11:50 shylesh joined #gluster
11:54 manik joined #gluster
12:06 guigui joined #gluster
12:11 lhawthor_ joined #gluster
12:12 shdwdrgn_ joined #gluster
12:12 Ramereth joined #gluster
12:15 18WADHLR3 joined #gluster
12:15 majeff joined #gluster
12:16 NeatBasis joined #gluster
12:16 andrei_ joined #gluster
12:17 tjikkun joined #gluster
12:17 tjikkun joined #gluster
12:17 lalatenduM joined #gluster
12:18 tziOm seems to me quota is not enabled over nfs export?!
12:21 jbrooks joined #gluster
12:22 lh joined #gluster
12:23 sgowda joined #gluster
12:23 hagarth joined #gluster
12:27 bulde joined #gluster
12:30 chirino joined #gluster
12:33 aravindavk joined #gluster
12:33 aliguori joined #gluster
12:37 shireesh joined #gluster
12:42 Rorik joined #gluster
12:44 satheesh joined #gluster
12:44 wgao__ joined #gluster
12:45 purpleidea joined #gluster
12:45 purpleidea joined #gluster
12:45 mohankumar__ joined #gluster
12:46 shishir joined #gluster
12:51 helloadam joined #gluster
12:52 awheeler joined #gluster
13:16 _ilbot joined #gluster
13:16 Topic for #gluster is now  Gluster Community - http://gluster.org | Q&A - http://community.gluster.org/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - http://irclog.perlgeek.de/gluster/
13:32 _ilbot joined #gluster
13:32 Topic for #gluster is now  Gluster Community - http://gluster.org | Q&A - http://community.gluster.org/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - http://irclog.perlgeek.de/gluster/
13:34 aliguori joined #gluster
13:35 manik joined #gluster
13:35 GLHMarmot joined #gluster
13:37 ThatGraemeGuy joined #gluster
13:38 Shdwdrgn joined #gluster
13:43 yinyin_ joined #gluster
13:46 andreask joined #gluster
13:46 dmojoryder joined #gluster
13:47 theron joined #gluster
13:48 ThatGraemeGuy joined #gluster
13:49 ricky-ticky joined #gluster
13:55 _ilbot joined #gluster
13:55 Topic for #gluster is now  Gluster Community - http://gluster.org | Q&A - http://community.gluster.org/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - http://irclog.perlgeek.de/gluster/
13:56 GLHMarmot joined #gluster
13:57 jurrien joined #gluster
14:10 plarsen joined #gluster
14:10 kbsingh joined #gluster
14:10 martin2_1 joined #gluster
14:10 a2_ joined #gluster
14:10 tzi0m joined #gluster
14:10 vex_ joined #gluster
14:10 Shdwdrgn joined #gluster
14:10 aliguori joined #gluster
14:10 FyreFoX_ joined #gluster
14:10 dblack joined #gluster
14:10 edward1 joined #gluster
14:10 bennyturns joined #gluster
14:10 robos joined #gluster
14:10 jtux joined #gluster
14:10 Skunnyk joined #gluster
14:10 saurabh joined #gluster
14:10 awheeler_ joined #gluster
14:10 ngoswami joined #gluster
14:10 dustint joined #gluster
14:10 helloadam joined #gluster
14:10 wgao__ joined #gluster
14:10 Rorik joined #gluster
14:10 shireesh joined #gluster
14:10 hagarth joined #gluster
14:10 tjikkun joined #gluster
14:10 NeatBasis joined #gluster
14:10 majeff joined #gluster
14:10 18WADHLR3 joined #gluster
14:10 Ramereth joined #gluster
14:10 shylesh joined #gluster
14:10 flrichar joined #gluster
14:10 glusterbot joined #gluster
14:10 rastar1 joined #gluster
14:10 balunasj|mtg joined #gluster
14:10 kbsingh_ joined #gluster
14:10 StarBeast joined #gluster
14:10 atrius joined #gluster
14:10 lkoranda joined #gluster
14:10 jcastle_ joined #gluster
14:10 haakon_ joined #gluster
14:10 phix_ joined #gluster
14:10 errstr joined #gluster
14:10 hybrid5121 joined #gluster
14:10 neofob joined #gluster
14:10 abyss^___ joined #gluster
14:10 twx joined #gluster
14:10 Rocky__ joined #gluster
14:10 tjikkun_work joined #gluster
14:10 vimal joined #gluster
14:10 ollivera joined #gluster
14:10 ekuric joined #gluster
14:10 mtanner_ joined #gluster
14:10 vpshastry joined #gluster
14:10 mtanner joined #gluster
14:10 cfeller joined #gluster
14:10 DWSR joined #gluster
14:10 DEac- joined #gluster
14:10 Peanut joined #gluster
14:10 jds2001_ joined #gluster
14:10 kkeithley joined #gluster
14:10 lanning joined #gluster
14:10 mjrosenb joined #gluster
14:10 MinhP_ joined #gluster
14:10 H__ joined #gluster
14:10 SteveCooling joined #gluster
14:10 eightyeight joined #gluster
14:10 georgeh|workstat joined #gluster
14:10 johnmark joined #gluster
14:10 hflai_ joined #gluster
14:10 jurrien__ joined #gluster
14:10 martin2__ joined #gluster
14:10 hchiramm_ joined #gluster
14:10 E-T joined #gluster
14:10 msmith_ joined #gluster
14:10 wN joined #gluster
14:10 Guest82024 joined #gluster
14:10 Nuxr0 joined #gluster
14:10 johnmorr joined #gluster
14:10 penglish1 joined #gluster
14:10 irk joined #gluster
14:10 fleducquede joined #gluster
14:10 cicero joined #gluster
14:10 sr71_ joined #gluster
14:10 jiqiren joined #gluster
14:10 samppah joined #gluster
14:10 thekev joined #gluster
14:10 tru_tru joined #gluster
14:10 jiffe98 joined #gluster
14:10 ehg joined #gluster
14:10 shanks joined #gluster
14:10 bfoster joined #gluster
14:10 war|child joined #gluster
14:10 sonne joined #gluster
14:10 larsks_ joined #gluster
14:10 stoile joined #gluster
14:10 clutchk joined #gluster
14:10 primusinterpares joined #gluster
14:10 ninkotech__ joined #gluster
14:10 stopbit joined #gluster
14:10 Zengineer joined #gluster
14:10 frakt joined #gluster
14:10 MattRM joined #gluster
14:10 portante` joined #gluster
14:10 roo9 joined #gluster
14:10 semiosis joined #gluster
14:10 hagarth__ joined #gluster
14:10 chlunde_ joined #gluster
14:10 ingard__ joined #gluster
14:10 yosafbridge` joined #gluster
14:10 tdb- joined #gluster
14:10 arusso joined #gluster
14:10 JordanHackworth_ joined #gluster
14:10 VeggieMeat_ joined #gluster
14:10 juhaj joined #gluster
14:10 red_solar joined #gluster
14:10 cyberbootje1 joined #gluster
14:10 NeonLich1 joined #gluster
14:10 Kins joined #gluster
14:10 ndevos joined #gluster
14:10 Uzix joined #gluster
14:10 the-me joined #gluster
14:10 Norky_ joined #gluster
14:10 pull joined #gluster
14:10 codex joined #gluster
14:10 redsolar_office joined #gluster
14:10 avati_ joined #gluster
14:10 xavih_ joined #gluster
14:10 __NiC joined #gluster
14:10 Supermathie joined #gluster
14:10 edong23_ joined #gluster
14:10 foster joined #gluster
14:10 gluslog joined #gluster
14:10 coredumb joined #gluster
14:10 JZ_ joined #gluster
14:10 Dave2 joined #gluster
14:10 bdperkin joined #gluster
14:10 soukihei joined #gluster
14:10 mriv joined #gluster
14:10 abelur joined #gluster
14:10 zwu joined #gluster
14:10 JoeJulian joined #gluster
14:10 nat joined #gluster
14:10 Gugge_ joined #gluster
14:10 lkthomas joined #gluster
14:10 premera joined #gluster
14:10 xymox joined #gluster
14:10 zykure|uni joined #gluster
14:10 VSpike joined #gluster
14:10 stigchristian joined #gluster
14:10 partner joined #gluster
14:10 _Bryan_ joined #gluster
14:10 ricky-ticky joined #gluster
14:11 manik joined #gluster
14:12 badone joined #gluster
14:12 redbeard joined #gluster
14:12 tjikkun_work joined #gluster
14:12 bugs_ joined #gluster
14:13 failshell joined #gluster
14:14 neofob left #gluster
14:21 daMaestro joined #gluster
14:23 chirino joined #gluster
14:23 rkbstr_wo joined #gluster
14:23 andrewjsledge joined #gluster
14:24 jclift joined #gluster
14:24 guigui joined #gluster
14:26 failshell hello i keep getting that error in my logs: 0-tcp.management: reading from socket failed. Error (Transport endpoint is not connected), peer (127.0.0.1:1016)
14:26 failshell i can't find anything in the config that has that port configured
14:26 failshell can anyone enlighten me?
14:26 badone joined #gluster
14:27 bala joined #gluster
14:27 failshell my google fu is not finding any solution to that
14:27 johnmorr_ joined #gluster
14:27 mohankumar__ joined #gluster
14:28 georgeh|workstat joined #gluster
14:28 glusterbot New news from newglusterbugs: [Bug 962450] POSIX ACLs fail display / apply / set on NFSv3 mounted Gluster filesystems <http://goo.gl/nRrHg>
14:29 MinhP joined #gluster
14:29 SteveCooling joined #gluster
14:30 mjrosenb joined #gluster
14:30 sjoeboo joined #gluster
14:32 zaitcev joined #gluster
14:32 johnmark_ joined #gluster
14:33 DEac-_ joined #gluster
14:34 lpabon joined #gluster
14:36 fleducquede joined #gluster
14:36 jim` joined #gluster
14:37 Peanut joined #gluster
14:38 tjikkun_work joined #gluster
14:40 eightyeight joined #gluster
14:41 awheeler_ failshell: What version of glusterfs are you using?
14:41 failshell awheeler_: 3.2.7
14:41 failshell from EPEL
14:43 awheeler_ Not sure why EPEL never got updated with the 3.3 series, but if you can switch to that it might help -- http://repos.fedorapeople.org/repos/kkeithle/glusterfs/
14:43 glusterbot <http://goo.gl/EyoCw> (at repos.fedorapeople.org)
14:44 failshell i was told i need to stop the cluster completely to upgrade to 3.3
14:44 failshell is that true?
14:44 awheeler_ Ah, I do not know.  I know it will automatically stop on each node, but I've not done that upgrade.
14:45 awheeler_ That would be inconvenient for sure.
14:45 failshell well, right now, its only used for backups and i have the data elsewhere, so no biggie
14:45 failshell i have one in prod to serve web data, that one would be more annoying
14:47 Chiku|dc joined #gluster
14:47 awheeler_ Looking at my logs, I see I am getting similar errors (though different ports) so it seems unlikely that an upgrade will resolve it.  I gather otherwise all seems fine?
14:48 failshell awheeler_: im trying to rebalance the data after adding 2 bricks
14:48 failshell and it doesnt seem to work
14:48 Chiku|dc hi
14:48 glusterbot Chiku|dc: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
14:48 awheeler_ That is probably fixed.  But you'd need confirmation from someone else.
14:48 Chiku|dc about gluster 3.4 "Quorum for split-brain resolution now supports replica 2 configurations" <-- how does it works ?
14:49 Chiku|dc quorum with 2 replicated bricks ?
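Chiku|dc's question goes unanswered here. As a rough, untested sketch of the knobs involved, assuming the 3.3/3.4 option names (gv0 stands in for the real volume):

    gluster volume set gv0 cluster.quorum-type auto            # client-side quorum (3.3+)
    gluster volume set gv0 cluster.server-quorum-type server   # server-side quorum, new in 3.4
    gluster volume set all cluster.server-quorum-ratio 51%     # cluster-wide ratio for server quorum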
14:51 failshell awheeler_: when i check the status, its stuck at 'rebalance step 1: layout fix in progress' and its only available on one brick
14:51 failshell so something's not working right with my cluster
14:54 brian_ joined #gluster
14:55 brian_ hello
14:55 glusterbot brian_: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
14:55 brian_ I'm new to bluster and I just set up this IRC chat
14:55 brian_ gluster*
14:56 brian_ anyone here?
14:56 brian_ joined #gluster
14:58 brian_ My question is about setting up. I have a cluster (head + 3 nodes) already installed and I want to make 2 of the nodes gluster nodes. Since there are already filesystems on the disks using the entire disks, does this mean I will need to re-partition the existing filesystem to make room for the "bricks" for gluster to be installed?
14:59 andreask no
14:59 andreask you only need a filesystem with exended attributes enabled
15:00 brian_ I'm really a noob with this, so I appreciate any help you guys can provide
15:00 Nuxr0 brian_: you can just use a directory in your current filesystem
15:01 brian_ ok so I don't need to run the "mkfs.xfs -i size=512 /dev/sdb1" (or in my case sda), to set up a new partition then?
15:02 brian_ so I'll just make a directory in home called "gluster" , to start with
15:03 Nuxr0 yes, you can start with that, i do not believe size=512 is really required; ideally, if you start from scratch is best to do it
15:03 jthorne joined #gluster
15:03 brian_ ok so my gluster directory that I create will be the actual "brick", correct?
15:05 semiosis ~glossary | brian_
15:05 glusterbot brian_: A "server" hosts "bricks" (ie. server1:/foo) which belong to a "volume"  which is accessed from a "client"  . The "master" geosynchronizes a "volume" to a "slave" (ie. remote1:/data/foo).
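Following Nuxr0's suggestion above, bricks can be plain directories; a sketch using the node names brian_ mentions later (the paths are illustrative, not his actual layout):

    mkdir -p /export/brick1                      # on each server
    gluster peer probe node03                    # once, from node02
    gluster volume create gv0 node02:/export/brick1 node03:/export/brick1
    gluster volume start gv0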
15:06 majeff1 joined #gluster
15:11 majeff joined #gluster
15:14 brian_ I'm following the "Getting Started", (here: http://www.gluster.org/community/documentation/index.php/Getting_started_rrqsg ). Since i don't need to make a filesystem (because I'm taking your suggestion to use a directory instead), the commands to mount the partition as a Gluster "brick" should mount my directory I made correct?
15:14 glusterbot <http://goo.gl/uMyDV> (at www.gluster.org)
15:18 bchilds joined #gluster
15:18 awheeler_ failshell:  Hmm, dunno hopefully someone else knows about that.  Did you look at the bugs list: https://bugzilla.redhat.com/buglist.cgi?list_id=1360973&short_desc=rebalance&classification=Community&query_format=advanced&bug_status=NEW&bug_status=ASSIGNED&short_desc_type=allwordssubstr&product=GlusterFS
15:18 glusterbot <http://goo.gl/3BZDJ> (at bugzilla.redhat.com)
15:20 failshell awheeler_: im not even getting to that point
15:20 failshell it never starts moving files
15:20 xavih joined #gluster
15:21 awheeler_ failshell: This perhaps: https://bugzilla.redhat.com/show_bug.cgi?id=764823
15:21 glusterbot <http://goo.gl/TRG1s> (at bugzilla.redhat.com)
15:21 glusterbot Bug 764823: medium, medium, ---, raghavendra, CLOSED CURRENTRELEASE, rebalance fails with "transport endpoint not connected" in 3.2.1 rdma set-up
15:21 awheeler_ your in 3.2.7, so probably not.  But there might be a relevant bug in there.
15:22 failshell i guess ill upgrade
15:22 failshell hopefully, that will fix it
15:22 failshell im worried data balancing doesnt work ..
15:24 majeff joined #gluster
15:24 majeff joined #gluster
15:25 awheeler_ I do know that balancing works in 3.3.
15:32 sjoeboo_ joined #gluster
15:35 H__ pick release-3.3 branch latest. otherwise you hit the FD leak bug. Also head has working min.diskfree
15:35 ctria joined #gluster
15:54 vpshastry joined #gluster
15:55 al_ joined #gluster
15:58 al joined #gluster
16:00 brian_ i have installed all of these packages (on the head and 3 nodes) glusterfs, glusterfs-devel, glusterfs-fuse, glusterfs-geo-replication, glusterfs-rdms (I have infiniband), and glusterfs-server. My question is, which of these services should be started on the head and which should be started on the 3 nodes?
16:01 vpshastry left #gluster
16:08 failshell awheeler_: is it long to rebalance? i have about 100GB, its been running for ~10mins now, hasnt moved a file yet
16:08 snarkyboojum joined #gluster
16:10 failshell awheeler_: my continus errors are gone also
16:10 brian_ right now I'm just trying to follow the first instructions in the "Getting Started configure" section under "Configure trusted pool"… Right now I have the glusterd daemon running on the head, but when I try to run the "gluster peer probe" command on the head, I get back the following error… Probe unsuccessful
16:10 brian_ Probe returned with unknown errno 107
16:13 kkeithley1 joined #gluster
16:14 kkeithley1 ls
16:14 brian_ ok.. got past that…
16:15 brian_ sorry guys.. I'm a complete Noob to this
16:18 brian_ brb
16:18 brian_ left #gluster
16:18 fleducquede joined #gluster
16:20 brian_ joined #gluster
16:20 saurabh joined #gluster
16:21 brian_ does both the glusterd and glusterfsd services have to be enabled on the head and nodes?
16:24 ctria joined #gluster
16:25 failshell ive started a rebalance on 3.3.1 30 mins ago. status says inprogress for each brick. is it normal to take that much time? i have roughly 100GB of data. no errors anywhere.
16:25 hagarth joined #gluster
16:28 matclayton joined #gluster
16:28 Mo__ joined #gluster
16:29 JoeJulian yes
16:30 failshell how long can i expect it to be?
16:31 failshell JoeJulian: yes to be or brian_?
16:32 lalatenduM joined #gluster
16:33 lpabon_ joined #gluster
16:35 Nuxr0 failshell: I think for both :)
16:35 m0zes_ joined #gluster
16:36 JoeJulian yes to failshell. I'm on phone calls and haven't had a chance to look at scrollback yet. :D
16:36 failshell ok ill be patient then
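For reference, the 3.3 commands failshell would be driving this with (gv0 is a placeholder for the real volume name, and the log path is an assumption based on the default /var/log/glusterfs layout):

    gluster volume rebalance gv0 start
    gluster volume rebalance gv0 status
    tail -f /var/log/glusterfs/gv0-rebalance.log    # per-file activity and errors show up here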
16:36 JoeJulian I'm pissed at Brother and am going to get some satisfaction from this phone call if it's the last thing I do today.
16:36 failshell lol
16:37 failshell upgrading to 3.3.1 solved most of my issues
16:37 brian_ Success! I finally got a Gluster volume up and running
16:37 failshell but now i cant mount read-only :(
16:37 failshell apparently that's fixed in 3.4.0
16:38 brian_ I'm just wondering now what services I need to chkconfig to "on" on the head and nodes now… I'm guessing I need both running on everything...
16:40 JoeJulian Just glusterd
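On the EL6-style installs discussed in this log, that amounts to enabling only the management daemon; the glusterfsd brick processes are spawned by glusterd itself:

    chkconfig glusterd on
    service glusterd start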
16:40 failshell is there a backward compatibility between 3.3.1 and 3.2.1 ? i can't mount a cluster that i cant upgrade just yet
16:40 JoeJulian failshell: no
16:42 failshell JoeJulian: concerning my rebalancing. shouldnt the scanned counter going up?
16:44 JoeJulian I think so
16:44 brian_ ok thanks
16:44 JoeJulian "We don't support Linux." - Brother Industries, Inc. "Bullshit" - Me
16:45 failshell failed to set the volume (Permission denied)
16:46 failshell i get that error in my rebalance log
16:46 failshell SETVOLUME on remote-host failed: Authentication failed
16:46 awheeler_ joined #gluster
16:47 johnmorr joined #gluster
16:48 failshell here we go
16:48 failshell the bricks need to be added to auth.allow
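A sketch of what that fix looks like; the addresses are placeholders, not failshell's real ones:

    gluster volume info gv0 | grep auth.allow                      # see the current restriction
    gluster volume set gv0 auth.allow 10.0.0.1,10.0.0.2,10.0.0.*   # include the servers themselves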
16:49 semiosis brian_: glusterfs doesnt have head/nodes, it's fully distributed.
16:49 duerF joined #gluster
16:50 manik joined #gluster
16:50 brian_ ok, so the glusterd service doesn't need to run on my head node?
16:50 semiosis uhhh i guess not
16:50 semiosis ?
16:51 JoeJulian "If you have multiple printers all experiencing the same problem, check with your switch support." - Brother
16:51 dxd828 joined #gluster
16:51 sjoeboo_ joined #gluster
16:52 JoeJulian "Really? 30 printers in 25 different locations all having a problem because of 1 switch? Explain to me how that's possible and I'll listen to anything you have to say."
16:53 eryc joined #gluster
16:53 eryc joined #gluster
16:54 awheeler joined #gluster
16:54 JoeJulian I know I'm off topic, but I had to vent to my friends.
16:55 vpshastry joined #gluster
16:57 semiosis JoeJulian: i've given up on tech support.  finger pointing, time wasting, ...
16:58 semiosis unless it's an RMA/warranty call, of course.  but never for diagnosis.
16:59 Chiku|dc hi semiosis on your ppa there is no 3.4beta for lucid ?
16:59 semiosis lol
16:59 semiosis no lucid
16:59 saurabh joined #gluster
17:00 JoeJulian Unfortunately I have the 30-35 in-use and another 10 in the warehouse as stand-by spares. We can't afford to just throw them all out and replace them.
17:00 brian_ I have created a replicated volume (which I want to switch to a distributed volume). I've tried undoing things by removing each brick one at a time with: gluster volume remove gv0 replica 1 node01:/export/brick1 …. etc… but I've not been able to delete the volume… What else do I need to do if I want to start over and re-create the volume as distributed instead of replicated?
17:01 semiosis JoeJulian: tell the brother people that, and demand escalation. they shouldn't treat you like someone who bought one printer at staples
17:01 JoeJulian To start over, stop and delete the volume, delete the brick directories, then create your volume again.
17:02 brian_ ok, I'll try that
17:02 brian_ thanks Joe
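JoeJulian's start-over sequence as commands, using the names brian_ uses later in the log; if the same brick paths are reused and the create still complains that a path or a prefix of it is already part of a volume, the setfattr cleanup shown earlier in this log applies:

    gluster volume stop gv0
    gluster volume delete gv0
    # on every node:
    rm -rf /export/brick1 && mkdir /export/brick1
    # then recreate as plain distribute (no "replica" keyword):
    gluster volume create gv0 node02:/export/brick1 node03:/export/brick1 node04:/export/brick1
    gluster volume start gv0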
17:02 JoeJulian semiosis: Oh, I am... I hate dealing with the first-level $14/hr guys.
17:02 semiosis +1
17:05 JoeJulian I love saying (quite nicely) to them, "I mean no offence or judgment of you as a person, but you have no idea what you're talking about. Can we please move me up to level 3?"
17:07 ackjewt joined #gluster
17:12 brian_ Joe: ok I have managed to remove 2 out of the 3 bricks. But I can't seem to get the last one removed… When I run this: gluster volume remove-brick gv0 replica 1 node02:/export/brick1
17:12 bulde joined #gluster
17:12 brian_ I get back this: replica count (1) option given for non replicate volume gv0
17:13 snarkyboojum_ joined #gluster
17:13 johnmark joined #gluster
17:13 kkeithley1 joined #gluster
17:13 mjrosenb_ joined #gluster
17:14 brian_ 'gluser volume info'  shows this:
17:14 DEac- joined #gluster
17:14 MinhP_ joined #gluster
17:14 brian_ Volume Name: gv0
17:14 brian_ Type: Distribute
17:14 brian_ Volume ID: f89d2a0c-519d-407c-e96c-a847d10bb000
17:14 brian_ Status: Stopped
17:14 brian_ Number of Bricks: 1
17:14 brian_ Transport-type: tcp
17:14 brian_ Bricks:
17:14 brian_ Brick1: node02:/export/brick1
17:15 brian_ my nodes in this volume were node02 node03 and node04… (node01 is offline for repairs)
17:15 brian_ I never added it to begin with
17:16 zaitcev_ joined #gluster
17:17 krishna_ joined #gluster
17:18 brian_ nevermind.. i think i know what this is
17:18 awheeler failshell: No idea, but you should see disk activity and space being consumed
17:19 brian_ apparently when I removed the other to volumes, it automatically changed it into an distrubted volume
17:19 kaptk2 joined #gluster
17:22 aliguori joined #gluster
17:22 vpshastry left #gluster
17:23 neofob joined #gluster
17:23 bennyturns joined #gluster
17:24 kaptk2 joined #gluster
17:24 jthorne joined #gluster
17:47 _ilbot joined #gluster
17:47 Topic for #gluster is now  Gluster Community - http://gluster.org | Q&A - http://community.gluster.org/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - http://irclog.perlgeek.de/gluster/
17:48 theron joined #gluster
17:48 andrewjsledge joined #gluster
17:49 saurabh joined #gluster
17:50 failshel_ joined #gluster
17:52 badone_ joined #gluster
17:53 bulde joined #gluster
17:54 redbeard joined #gluster
17:58 dmojoryder joined #gluster
17:58 vigia joined #gluster
17:58 rwheeler joined #gluster
17:58 ThatGraemeGuy joined #gluster
17:58 hagarth joined #gluster
17:58 eryc joined #gluster
17:58 mjrosenb_ joined #gluster
17:58 krishna_ joined #gluster
17:58 jurrien joined #gluster
17:59 a2_ joined #gluster
18:00 vpshastry1 left #gluster
18:08 jiffe98 when I add sync,noac to the local nfs mount it slows down even worse than mounting off gluster's nfs server
18:10 Supermathie jiffe98: I've had sufficient success with http://fpaste.org/11892/68468597/
18:10 glusterbot Title: #11892 Fedora Project Pastebin (at fpaste.org)
18:13 jiffe98 Supermathie: that does seem to load a bit faster
18:14 bronaugh_ joined #gluster
18:14 bronaugh_ hey; so, what's the 3.5 plan?
18:14 jiffe98 seems to noac option slows things down, if I mount local nfs with that option it is quite a bit slower but with it the site loads pretty quick
18:15 jiffe98 without it the site loads quick I mean
18:16 Supermathie jiffe98: Did you start an nfs client mount on your gluster servers?
18:17 jiffe98 Supermathie: the nfs clients are not the same machine as the gluster servers
18:18 JoeJulian bronaugh_: in 3.5 you'll get to choose the exact blend and roast of beans you get in your espresso.
18:22 bronaugh_ JoeJulian: nice, but I don't drink coffee :P
18:22 bronaugh_ JoeJulian: nah, seriously. wondering if there's a plan yet.
18:22 JoeJulian Maybe you just haven't had the right cup yet!
18:22 JoeJulian I don't think feature planning will happen before 3.4 is released.
18:23 bronaugh_ ok.
18:23 kkeithley yeah, step 1 of the 3.5 plan is release 3.4
18:23 kkeithley there is no step 2 yet
18:23 bronaugh_ fair enough :) wasn't sure exactly how you folks were handling that.
18:23 JoeJulian step 3, profit!
18:24 JoeJulian bronaugh_: If you have a specific feature request, though, feel free to create a wiki page.
18:24 JoeJulian and/or a file a bug report
18:24 glusterbot http://goo.gl/UUuCq
18:24 bronaugh_ JoeJulian: points made :)
18:24 phox joined #gluster
18:25 phox Hi.  How can one force a server name change?
18:25 JoeJulian use cnames?
18:26 JoeJulian Once associated with a brick, there's no good way to do that. (file a bug report as an enhancement request maybe?) but you can do that offline...
18:26 glusterbot http://goo.gl/UUuCq
18:26 phox No.  I was testing stuff on this hardware with a different hostname, and now it's going into production and has a permanent hostname... but it doesn't seem to be coming up correctly with the hostname it's been assigned... I ran 'gluster peer probe new-hostname' from another machine and it was ok, but...
18:26 phox I'm not sure if it thinks it's still associated with the brick
18:27 phox the bricks were gone by the time gluster was installed on this machine, and there are no other servers in this... pool(?), but somehow it knows stuff about the old brick
18:27 JoeJulian Stop your volumes and all your glusterd. On each server:  find /var/lib/glusterd | xargs sed -i 's/oldservername/newservername/g'
18:27 phox even though there should be nothing on any local filesystem that says anything about it
18:27 phox k
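JoeJulian's rename procedure with the surrounding steps spelled out; this is unsupported territory, so backing up /var/lib/glusterd first is assumed to be wise:

    gluster volume stop gv0                  # for each volume
    service glusterd stop                    # on every server
    cp -a /var/lib/glusterd /var/lib/glusterd.bak
    find /var/lib/glusterd -type f | xargs sed -i 's/oldservername/newservername/g'
    service glusterd start
    gluster volume start gv0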
18:27 bronaugh_ JoeJulian: you sure you have to stop the server for that?
18:27 phox bronaugh_: I'd assume so
18:28 bronaugh_ JoeJulian: ie: does it reread the file, or does it do some funny in-memeory caching of it?
18:28 JoeJulian I'm not sure, no.
18:28 bronaugh_ in-memory*. christ, typo monday.
18:28 JoeJulian You need coffee. ;)
18:28 bronaugh_ :P
18:28 bronaugh_ well played :P
18:30 phox JoeJulian: heh, now it appears that gluster doesn't want to start
18:31 phox JoeJulian: can I just nuke everything in that dir and start fresh, I guess?
18:31 bronaugh_ you know the biggest feature request I'd like to make? not having to restart the server for -everything-
18:32 JoeJulian You don't need to restart the server for almost anything. The only reason my suggestion requires it is because we're doing something that's not supported in the software.
18:32 JoeJulian phox: You sure can
18:32 brian_ I have (I think), the gluster native client runing and I'm trying to mount my new gluster volume but it's failing. I'm using the command format: mount -t glusterfs HOSTNAME-OR-IPADDRESS:/VOLNAME MOUNTDIR .. Is the HOSTNAME-OR-IPADDRESS section the IP of the head node? I think what I need to do is mount the gluster volume from the head correct?
18:33 JoeJulian there is no "head node" all servers are peers. Any server will do.
18:33 JoeJulian @mount server
18:33 glusterbot JoeJulian: (#1) The server specified is only used to retrieve the client volume definition. Once connected, the client connects to all the servers in the volume. See also @rrnds, or (#2) Learn more about the role played by the server specified on the mount command here: http://goo.gl/0EB1u
18:33 phox there now it started
18:34 phox yeah I think that regexp kinda fubar'd things, FWIW
18:34 phox didn't matter at all in my case.... better to start fresh anyways because that's what I'm documenting for...
18:34 JoeJulian Heh, I've done it before, but you would need to be sure you know what you're doing when you mess with things.
18:35 JoeJulian Yes, documenting from a fresh start is a good thing.
18:35 phox so, tried again, and I'm still getting this:  Host atlas-ib not a friend
18:35 JoeJulian When I puppetize, I always format and ensure that the system comes up from scratch.
18:35 phox even though I just did gluster peer probe atlas-ib on another machine
18:36 phox ... not that I actually want peering so I'll try that locally or something.
18:37 phox hm.  so, how do I get a standalone gluster server to realize what its hostname is...
18:40 JoeJulian good question... I've never really looked into how that happens.
18:40 phox well, I can peer probe and then peer detach it
18:40 phox so that takes care of hat
18:40 JoeJulian I know you can solve it by adding your hostname to /etc/hosts
18:41 phox now the other server sees it as atlas-ib, but it's still giving me this uninformative crap about "atlas-ib is not a friend
18:41 phox well, except it's multi-homed
18:41 phox so that might get a bit weird
18:41 JoeJulian yeah...
18:41 phox oh hey
18:41 phox yeah that's probably the problem
18:41 phox /etc/hosts is stale on this box
18:41 phox oops.
18:41 phox thanks, made me catch that
18:42 phox tada.
18:42 brian_ Joe: ok, but the node that I want to mount FROM, needs to have the gluster client running on it correct? I was going to mount from the head so that when the mount is sucessful, i could then share the mounted directory to the nodes… not sure if this is the general practice or not but it's what I was going to try.
18:46 robos joined #gluster
18:51 JoeJulian ~glossary | brian
18:51 glusterbot brian: A "server" hosts "bricks" (ie. server1:/foo) which belong to a "volume"  which is accessed from a "client"  . The "master" geosynchronizes a "volume" to a "slave" (ie. remote1:/data/foo).
18:51 JoeJulian So your clients will mount the volume from a server.
19:03 brian_ so if my nodes all have bricks installed (they are at directory /gluster/brick1) on each node, what command format would I use to mount them with..
19:03 brian_ ?
19:09 JoeJulian bricks != volumes
19:10 brian_ ok.. my volume is gv0
19:10 JoeJulian ~pasteinfo | brian_
19:10 glusterbot brian_: Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
19:10 JoeJulian Ah, ok, nevermind the paste
19:10 brunoleon joined #gluster
19:10 JoeJulian mount -t glusterfs myawesomeserver:gv0 /mnt/gluster/gv0
19:10 JoeJulian Assuming you want it mounted in /mnt/gluster/gv0
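For mounts that should survive a reboot, the same mount as an /etc/fstab line (a sketch; _netdev just defers the mount until networking is up):

    myawesomeserver:/gv0  /mnt/gluster/gv0  glusterfs  defaults,_netdev  0 0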
19:11 brian_ I made a directory on one of the nodes called gluster-mount-dir
19:12 brian_ I'll try it with your command
19:13 brian_ ok I ran this from node03 (one of the brick nodes)
19:13 brian_ mount -t glusterfs node03:gv0 /gluster-mount-dir
19:14 brian_ It failed.. I paste the output into the dpates.org site
19:14 brian_ output of the log file that is
19:15 brian_ http://fpaste.org/11909/36847252/
19:15 glusterbot Title: #11909 Fedora Project Pastebin (at fpaste.org)
19:16 rwheeler joined #gluster
19:18 lpabon joined #gluster
19:21 aliguori joined #gluster
19:22 brian_ joe: in your previous command, the "myawesomeserver" can be any server I have as part of the gv0 volume right?
19:22 MrNaviPacho joined #gluster
19:23 JoeJulian right
19:23 brian_ ok. did that paste from my log file make any sense?
19:23 MrNaviPacho Is xfs the recommended file system now?
19:24 JoeJulian brian_: transport-type 'rdma' is not valid or not found on this machine
19:24 JoeJulian MrNaviPacho: yes
19:24 brian_ yeah, I have ib running on eveything
19:25 brian_ do I need to load a module for this?
19:25 JoeJulian brian_: did you install glusterfs-rdma (if you're using rpms)
19:26 brian_ I saw that error but I don't know what it means
19:26 brian_ yep I'm pretty sure I did
19:26 JoeJulian "/usr/lib64/glusterfs/3.3.1/rpc-transport/rdma.so: cannot open shared object file: No such file or directory" says no.
19:26 cekstam joined #gluster
19:26 brian_ [root@node02 /]# rpm -qa rdma
19:26 brian_ rdma-3.6-1.el6.noarch
19:27 JoeJulian that would be a no then.
19:27 JoeJulian yum install glusterfs-rdma
19:27 brian_ I know I installed one package with rdma… I guess I need more than one.. :)
19:27 JoeJulian :)
19:28 brian_ and this will need to be on every node in the volume right?
19:28 JoeJulian yes
19:28 JoeJulian Every server and I would presume every client that's going to access the volume over rdma
19:29 brian_ k… trying it now… btw, thanks for all your help joe…
19:29 JoeJulian You're welcome.
19:29 JoeJulian semiosis: Yay, got a Brother engi to call me back.
19:30 semiosis huzzah!
19:36 Goatbert joined #gluster
19:41 brian_ well that solved that problem.. now new errors… getting a lot of "connection refused" in the logs after the mount hangs
19:41 brian_ maybe a port is closed
19:41 brian_ does gluster use standard ports?
19:41 JoeJulian @ports
19:41 glusterbot JoeJulian: glusterd's management port is 24007/tcp and 24008/tcp if you use rdma. Bricks (glusterfsd) use 24009 & up. (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111.
19:42 brian_ ok so I need to open all those ports
19:43 brian_ I guess I'll start by just shutting down the firewall on all the nodes first to see if that is it.. :)
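A rough iptables sketch of the ports glusterbot lists; the upper end of the brick range is an assumption, since each brick ever created on a server takes one port from 24009 upward:

    iptables -A INPUT -p tcp --dport 24007:24008 -j ACCEPT   # glusterd management (tcp + rdma)
    iptables -A INPUT -p tcp --dport 24009:24020 -j ACCEPT   # brick daemons
    iptables -A INPUT -p tcp --dport 38465:38468 -j ACCEPT   # gluster NFS + NLM
    iptables -A INPUT -p tcp --dport 111 -j ACCEPT           # portmapper
    iptables -A INPUT -p udp --dport 111 -j ACCEPT
    service iptables save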
19:46 phox JoeJulian: so that's a separate package now?  RDMA that is...
19:46 phox JoeJulian: is it back to being kinda-usable and maybe stable-ish and stuff?
19:47 JoeJulian Has been as a yum repo since 3.1, iirc
19:47 JoeJulian yes, kkeithley has the patches in to fix rdma in 3.3.1.
19:49 bennyturns joined #gluster
19:51 phox ok.  is that in the gluster.org 3.3.1 branch now too maybe?
19:52 phox working RDMA would be kinda sweet.  actually get some serious performance going on again and stop caring so much about it being FUSE :)
19:54 andreask joined #gluster
19:57 semiosis wat?
19:58 semiosis infiniband & ssds could be so fast that fuse would *become* a bottleneck
19:58 semiosis usually fuse/cpu are oom faster than ethernet & spinning disks
19:58 semiosis s/oom/orders of magnitude/
19:59 semiosis glusterbot: wake up
19:59 Supermathie Yeah don't need IB to saturate gluster
19:59 glusterbot semiosis: Error: I couldn't find a message matching that criteria in my history of 1000 messages.
19:59 Supermathie but I'm getting IB tests underway in the next little while
20:00 Supermathie It looks as though a *lot* of the time glusterfs is spending is writev() to the network, so IB should help a lot with that
20:06 a2_ Supermathie, gluster uses 100% nonblocking network IO, it shouldn't be spending time in writev() just because network is slow
20:07 jmh_ joined #gluster
20:07 Supermathie a2_: "slow" :) Yeah, thinking about sync write or commits
20:09 a2_ neither sync writes nor commits should have any impact on how much time gluster spends doing writev() to the network
20:10 Supermathie Haven't really done any intelligent profiling, so I'm just seeing an aggregation across all threads anywyas.
20:11 vigia joined #gluster
20:23 andrei_ joined #gluster
20:24 thomasle_ joined #gluster
20:24 Supermathie Oracle on kNFS, 3825 tps, 3340B redo/trans, 4.248ms avg trans latency. Not bad.
20:25 Supermathie I think :
20:28 lsouljacker joined #gluster
20:28 jurrien joined #gluster
20:28 krishna_ joined #gluster
20:28 mjrosenb_ joined #gluster
20:28 eryc joined #gluster
20:28 hagarth joined #gluster
20:28 ThatGraemeGuy joined #gluster
20:30 glusterbot New news from newglusterbugs: [Bug 961856] [FEAT] Add Glupy, a python bindings meta xlator, to GlusterFS project <http://goo.gl/yCNTu>
20:32 brian_ Joe: I'm still getting (Connection Refused) in my gluster log file while trying to mount.. (see here: http://paste2.org/MOnO2fsK).. I have shut off iptables on all my nodes.. any ideas on this one?
20:32 glusterbot Title: Paste2.org - Viewing Paste MOnO2fsK (at paste2.org)
20:33 bronaugh_ heh; so totally OT question...
20:33 bronaugh_ who's using backup software here, and what are you using?
20:33 phox free backup software, that is.
20:33 bronaugh_ (the next question is, why)
20:33 phox nobody likes nonfree crap
20:34 Supermathie bronaugh_: bacula == win. If nonfree, TSM.
20:34 phox or, ok, nonfree too
20:34 phox TSM is kinda crusty
20:34 phox we're using it now
20:34 phox heh
20:34 bronaugh_ Supermathie: ok, good to know. why bacula, and why tsm?
20:35 Supermathie bacula: library support, linux/win/osx/bsd support, possible ent support
20:36 Supermathie i'm 1H at the moment :p
20:36 bronaugh_ "ent" support?
20:36 Supermathie erprise
20:36 bronaugh_ k
20:37 bronaugh_ so what does TSM do (that's useful) that bacula does not?
20:40 Supermathie incrementals forever, backup set export, ...
20:40 phox bacula has limits on incrementals or something?
20:41 Supermathie not as such
20:41 lsouljacker I've got a bit of a problem.  I'm not sure its gluster but I have no idea where to start diagnosing.
20:42 lsouljacker Our developers are complaining of an intermittent exception on reading files from a glusterfs volume.  Glusterfs logs no errors I can find.
20:45 a2_ lsouljacker, what's the exact exception?
20:45 awheeler_ joined #gluster
20:45 phox Supermathie: are you aware of either handling obvious things like moves/renames?
20:46 matclayton joined #gluster
20:46 phox Supermathie: we were looking at lsyncd or whatever, which uses inotify to handle that, but otherwise it's kind of sucky
20:46 a2_ not sure how lsyncd handles changes which happened when it was not running
20:47 phox a2_: I don't believe it's intended to handle that
20:47 a2_ ok
20:47 phox a2_: of course anything running against a database could use inode numbers
20:47 lsouljacker a2_: exception message: Stream closed java.io.IOException: Stream closed at java.io.BufferedInputStream.getBufIfOpen(BufferedInputStream.java:162)
20:48 phox kinda short-circuits the normal deduplication resolution mechanics...
20:48 a2_ lsouljacker, do you have more details? a fuller stack trace?
20:48 a2_ lsouljacker, is this a java app running against a FUSE mount?
20:49 lsouljacker a2_: I do but I can't paste it due to security concerns. and yes its a glusterfs fuse mount.
20:49 lsouljacker a2_: we're on centos6.4 gluster 3.3.2
20:49 a2_ lsouljacker, i need the system call which failed and the errno returned
20:49 phox can Java give you the actual ERRNO so this can stop being a Java problem? :P
20:49 phox yeah, what a2_ said
20:50 phox Java needs to start understanding that real life doesn't give a crap about Java exception classes, just about _why_ Java produced that exception
20:52 lsouljacker a2_: any idea how I could get that information...
20:54 badone joined #gluster
20:57 phox lsouljacker: #java
20:57 phox *cough*
21:00 lsouljacker Oh I'm just the poor infrastructure guy.  I'll have to go back to the developers and see if they can figure out how to get that information.
21:00 phox I'd expect them to provide useful info like that.
21:01 phox "ok what did the kernel actually tell your Fisher Price language?"
21:02 lsouljacker phox: that'd be too easy.
21:03 lsouljacker okay thanks guys im going to go talk to the devs and see if I can get that information somehow... I'll probably be back in 3 weeks :v
21:06 phox heh
21:06 phox well, in the mean time they can live with their language not working if it's going to be that opaque :)
21:07 phox sucks when you get stuck with such a Sun-ny outlook on life :D
21:08 lsouljacker :v
21:09 lsouljacker Anyways thanks phox and a2_
21:09 phox gl
21:09 lsouljacker I'll need it :/
21:10 jiffe98 so right now we are using the quota system in order to get disk usage for our users, even though we have no explicit quota set on them, is there a good way to do this with gluster?
21:11 phox FWIW we're just using quotas on the underlying filesystem but we're also not using striping which would break that
21:11 phox of course don't take my answer as authoritative.  been doing systems for a long enough while, but I'm relatively new to gluster.
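For comparison, gluster's own quota feature in 3.3 is per-directory rather than per-user, so it only covers jiffe98's case if each user has their own directory; a sketch (volume and path are placeholders):

    gluster volume quota gv0 enable
    gluster volume quota gv0 limit-usage /users/alice 10GB
    gluster volume quota gv0 list                             # usage reported against each limit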
21:11 * JoeJulian likes phox. He knows how to spell.
21:12 phox JoeJulian: I think you commented on me pulling a "your" and "you're" together in the same sentence correctly at some point.
21:12 JoeJulian hehe
21:12 phox Or maybe that was another freenode channel?  Somesuch.
21:12 JoeJulian You didn't misspell striping either.
21:12 phox Might have been ##c.
21:12 phox hahaha.
21:12 phox I wish my filesystem had stripping.
21:12 JoeJulian hehe
21:12 phox although I don't like that to be so blocky
21:12 jiffe98 I don't think you want to see a filesystem strip
21:12 phox mirroring is cool though.
21:12 JoeJulian Oooh, baby... show me your platters...
21:13 * JoeJulian punishes my loose stripped drive.
21:14 a2_ or nowadays, "how big is your erasure block?"
21:14 JoeJulian I'm having way too much fun today for it to be Monday.
21:31 krishna_ joined #gluster
21:34 chirino_m joined #gluster
21:45 brian_ is it possible to change from rdma to tcp for my volume transport without having to recreate the whole volume again?
21:52 brian_ also, all of my brick names on the nodes are called brick1 (the same), will this cause problems or is it necessary to name each brick on each of the nodes to be differenct? (i.e. brick1 brick2 brick3 ..etc)?
21:53 phox JoeJulian: it's too bad raid0 and raid1 are the way around they are or I'd have some good material about "RAID$1" and "stripping"
21:54 phox :(
21:55 bronaugh_ hmmm
21:55 bronaugh_ glusterfs is being impressively slow to list files here.
21:55 JoeJulian brian_: Not sure about the transport change. Since a brick consists of {server}:{path} it doesn't matter if the path is the same as long as the servers are different.
21:56 JoeJulian ~meh | bronaugh_
21:56 glusterbot bronaugh_: I'm not happy about it either
21:56 brian_ ok thanks
21:56 phox hah
21:56 bronaugh_ just wondering about the "why" is all.
21:56 phox wonder if it's any better with RDMA
21:56 phox probably too many protocol round-trips which doesn't jive well with FUSE
21:57 JoeJulian bronaugh_: Check your client log.
21:57 phox JoeJulian: wherezat?  not glusterd/ I presume
21:58 JoeJulian /var/log/glusterfs/{mountpoint | tr '/' '-'}.log
21:58 phox k
21:59 bronaugh_ Jhttp://hpaste.org/87927
21:59 glusterbot Title: RDMA fun problems. :: hpaste — Haskell Pastebin (at hpaste.org)
21:59 bronaugh_ fun times.
21:59 bronaugh_ looks like a configuration problem.
21:59 phox nothing exciting in it
21:59 bronaugh_ can't connect to ""
22:00 bronaugh_ plus it's trying to use RDMA which is verboten.
22:00 bronaugh_ so that's the reason perf is sucking.
22:00 phox RDMA kinda exists again
22:00 JoeJulian Wife: "I bought a book called [whatever it was called]. And also a cheap pineapple corer/slicer." Me: "Wow, that's an amazing book." HAHAHA!
22:00 phox not in our deployment I don't think but I think it's available somewhere
22:01 phox that's kinda like "I got a Harley for my wife"
22:01 JoeJulian rdma should be fixed in the ,,(yum repo)
22:01 glusterbot kkeithley's fedorapeople.org yum repository has 32- and 64-bit glusterfs 3.3 packages for RHEL/Fedora/Centos distributions: http://goo.gl/EyoCw
22:01 phox but no debian =/
22:01 phox any speculation as to how long that wil take to trickle uphill?
22:01 phox it's certainly be nice
22:01 phox *'d
22:02 phox or I guess I could beat it with alienn.
22:02 JoeJulian semiosis: doesn't pull individual packages. Test 3.3.2 or 3.4.0 and see if they work for you.
22:02 JoeJulian s/://
22:02 glusterbot What JoeJulian meant to say was: semiosis doesn't pull individual packages. Test 3.3.2 or 3.4.0 and see if they work for you.
22:02 semiosis o_O
22:03 JoeJulian s/packages/patches/
22:03 glusterbot What JoeJulian meant to say was: semiosis: doesn't pull individual patches. Test 3.3.2 or 3.4.0 and see if they work for you.
22:03 JoeJulian too many conversations at once. I think I've hit my limit.
22:03 phox you need more execution stacks.
22:04 semiosis JoeJulian promptly packages patches pulled properly
22:04 JoeJulian hehe
22:04 premera_g joined #gluster
22:06 phox JoeJulian: so what would be your guess as to how fast those fixes might flow back into trunk or 3.3.x branch?
22:06 * phox would really like native RDMA again
22:07 a2_ phox, which patches?
22:07 phox a2_: RDMA
22:07 a2_ which patches are you referring to specifically?
22:07 phox there are RH* packages for a patched version
22:07 phox a2_: according to JoeJulian, kkeithley's patches
22:07 phox http://repos.fedorapeople.org/repos/kkeithle/glusterfs/
22:07 glusterbot <http://goo.gl/EyoCw> (at repos.fedorapeople.org)
22:08 JoeJulian which is according to kkeithley. a2, should be the ones that allow glusterd to report the correct port numbers instead of 65535.
22:11 phox JoeJulian: judging by the fact that those are 9 months old I'd guess movement is being slow on upstream pulling those in? =/
22:12 fidevo joined #gluster
22:12 JoeJulian Yeah, I kind-of expected 3.3.1 to be 3 months ago.
22:12 JoeJulian er, 3.3.2
22:13 JoeJulian Apparently focus has been on 3.4.
22:13 phox makes sense
22:13 phox is 3.3.2 kinda-out now?  or not so much?
22:13 JoeJulian not from my side of things.
22:13 bronaugh_ it's out.
22:13 phox yeah, from production
22:14 JoeJulian It's qa testing still, I think.
22:14 phox does 3.3.2 include any of the aforementioned, before I go digging?
22:14 JoeJulian I haven't had a chance to check.
22:14 phox k
22:14 bronaugh_ 3.3.2 QA release – (RPM – April, 2013)
22:14 bronaugh_ http://www.gluster.org/download/
22:14 glusterbot Title: Download | Gluster Community Website (at www.gluster.org)
22:16 phox yay RPMs
22:16 phox I'm sure Freud would have something to say about that
22:17 JoeJulian You can install rpms with alien
22:17 phox yeah
22:17 JoeJulian You can install centos or fedora pretty easily too. ;)
22:17 phox no.
22:18 phox then I have to put up with those distros :)
22:18 bronaugh_ ... pretty sure that's what we did last time.
22:18 phox I already have to put up with Debian
22:18 bronaugh_ because even Sid's gluster is crusty.
22:18 JoeJulian I'm just joking. Use whatever distro you like.
22:18 semiosis phox: what exactly are you looking for?
22:18 phox weird packages.  account?  container?
22:19 phox semiosis: gluster with working RDMA support
22:19 semiosis oh
22:19 JoeJulian I think semiosis built 3.2 and 3.4, didn't you?
22:19 bronaugh_ deb packages, or something which can be trivially alien'd
22:19 semiosis JoeJulian: ?
22:19 JoeJulian er, 3.3.2
22:19 phox hm, srpms for FC
22:19 semiosis JoeJulian: does that have the rdma patches phox needs?
22:20 phox semiosis: he didn't know
22:20 semiosis oh
22:20 JoeJulian They're potentially in 3.3.2
22:20 * phox goes looking for changelogs
22:20 JoeJulian I haven't looked though.
22:20 semiosis ha
22:20 Supermathie bronaugh_: building from source is actually quite painless: make clean; ./autogen.sh && ./configure --prefix=/usr/local/glusterfs && make -j32
22:20 semiosis ha @ potentially
22:21 Supermathie So we think that the current 3.3.2 tag has working rdma?
22:21 semiosis phox: what debian distro/version are you using?
22:21 phox Supermathie: I know.  And instaling?
22:21 bronaugh_ Supermathie: can't recall -- is there a 'debian' dir in the source tree?
22:21 phox semiosis: well, wheezy on the first box this is going on
22:21 phox other one is a bastard Squeeze w/ lots of Sid and Exp
22:21 Supermathie phox: make install :/
22:21 phox Supermathie: that's a mess.
22:21 phox :)
22:21 phox I like package management
22:21 semiosis phox: can you build your own debian packages?
22:22 Supermathie phox: with --prefix=/usr/local/glusterfs everything is contained at least - good for testing.
22:22 phox semiosis: yes, I can, but I don't have anything nice to say about the process.
22:22 semiosis bronaugh_: the debian/ dir isn't in the source tree itself but there are debian packages of glusterfs available for the motivated sysadmin
22:22 phox I'd probably rather just catalog what make install dumps where than deal with dpkg-deb crap
22:23 phox semiosis: ah, now that's a point
22:23 semiosis bronaugh_: you can find .debian.tar.gz files on packages.debian.org, packages.ubuntu.com, my ppa, ...
22:23 semiosis ,,(ppa)
22:23 phox 3.3.1 deb package should be very close to 3.3.2 I'd assume
22:23 glusterbot The official glusterfs 3.3 packages for Ubuntu are available here: http://goo.gl/7ZTNY
22:23 phox so rpobably easy to unpack and recycle
22:23 JoeJulian Oh, what's that ruby tool I saw to convert from anything to any packaging system...
22:23 bronaugh_ phox: did you find changelogs?
22:23 semiosis JoeJulian: whack's fpm probably
22:23 JoeJulian Ah, right
22:23 semiosis it's quite popular, or so i hear
22:23 JoeJulian I had heard about it before I knew who Jordan was.
22:23 Supermathie Yeah I'm using the .debs from semiosis's PPA
22:24 Supermathie on one install
22:24 phox bronaugh_: gotta hit git directly for that, which I haven't done yet
22:24 semiosis i drank the ubuntu kool aid, so unless FPM can help me push source packages into launchpad ppas easier, it's not too helpful for me
22:24 vpshastry joined #gluster
22:25 phox where are these supposed packages, and is there are 3.3.2 one? :)
22:26 semiosis phox: what supposed packages?
22:26 phox semiosis: yours
22:26 phox appears to just be 3.3.1, bummer :P
22:26 Supermathie phox: look up 10 lines
22:26 Supermathie Yeah, 3.3.2 isn't out yet
22:26 semiosis phox: would ubuntu builds help you?
22:26 semiosis i can throw up a new ppa for 3.3.2 real quick if that would help
22:27 bronaugh_ phox: the appropriate patch is in there: https://github.com/gluster/glusterfs/commits/release-3.3
22:27 glusterbot <http://goo.gl/E4tPV> (at github.com)
22:27 bronaugh_ phox: Feb 05
22:27 phox bronaugh_: ok, excellent
22:27 phox semiosis: I'd assume they's interface with the system about the same way, they're just fuse + some scripts, which is generic AFAIK
22:28 phox dunno about sticking a PPA onto Debian but...
22:28 semiosis phox: https://lh5.googleusercontent.com/-fSwoErTOhsY/UXqYITU135I/AAAAAAAAPJ8/MTBeb9PSWuE/w497-h373/hypothesis.png
22:28 glusterbot <http://goo.gl/hYE2h> (at lh5.googleusercontent.com)
22:28 phox heh.
22:29 phox *chortle*
22:29 semiosis glusterfs is more than just "fuse + some scripts"
22:29 phox semiosis: yeah, but I mean insofar as what could be even vaguely distro-specific
22:29 semiosis however, if the dependencies in the ubuntu packages align with the available versions in debian, it will work
22:29 phox yeah, which might not fly :l
22:30 phox perhaps I should just suck it up and run a dirty 'make install', or package it
22:31 Supermathie phox: what's that tool the fakeroots a make install and records the results as a package...
22:31 semiosis phox: i think ubuntu precise packages might work on debian wheezy, probably not on squeeze though
22:31 phox Supermathie: I thought fakeroot dpkg-deb --build or something did approx that
22:31 semiosis libssl0.9.8 vs libssl1.0.0 iirc
22:31 phox been a while and the Debian documentation there is -horrible-
22:31 semiosis Supermathie: pbuilder
22:32 semiosis actually no
22:32 vex joined #gluster
22:38 Supermathie checkinstall
22:40 Supermathie phox: checkinstall
22:40 phox Supermathie: yeah looking already before you said my name
22:40 phox cool, thanks
22:40 phox seems like a simple thing for someone to design and a HUGE timesaver.
22:40 phox wheeeee.
22:41 Supermathie yeah when I first found that: OOOOHHHHH
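Putting Supermathie's build line and checkinstall together, a sketch of the flow being described (the package name/version flags are illustrative):

    ./autogen.sh && ./configure --prefix=/usr/local/glusterfs && make -j32
    checkinstall --pkgname=glusterfs --pkgversion=3.3.2 make install   # records the install as a package instead of a bare "make install"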
22:42 * phox wonders how the separate RDMA package gets to join the party
22:42 phox also, annoying, there are ONLY RPMs and no source snapshots of the qc1 and qc2 releases
22:43 bronaugh_ just dump the debian crap into the source tree and dpkg-buildpackage it.
22:43 semiosis ~qa releases | phox
22:43 glusterbot phox: The QA releases are available at http://bits.gluster.com/pub/gluster/glusterfs/ -- RPMs in the version folders and source archives for all versions under src/
22:43 brian_ Jim: SUCESS!… finally got it to mount, though I had to use TCP… thanks again for your help…
22:44 brian_ cya'all
22:44 brian_ left #gluster
22:44 semiosis guess we forgot to tell brian_ about ,,(nfs)
22:44 glusterbot To mount via nfs, most distros require the options, tcp,vers=3 -- Also an rpc port mapper (like rpcbind in EL distributions) should be running on the server, and the kernel nfs server (nfsd) should be disabled
22:44 semiosis maybe?
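glusterbot's factoid as an actual mount line, with placeholder server and volume names; rpcbind must be running on the server and the kernel nfsd stopped, as the factoid says:

    mount -t nfs -o vers=3,tcp myawesomeserver:/gv0 /mnt/gv0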
22:45 phox so now I wonder, any reason to not use qa2?  I'd asume this is feature frozen so it SHOUDL be "better"...
22:45 bronaugh_ excellent... kernel nfs server can DIAF
22:45 phox Linux NFS in general can DIAF
22:45 semiosis phox: use the latest 3.3.2, looks like qa2
22:45 phox semiosis: yeah, k
22:45 phox thems was my thinkin's.
22:54 meeew joined #gluster
22:59 lh joined #gluster
22:59 jag3773 joined #gluster
23:00 meeew i'm running into the problem that as soon as i bind socket/rdma to an internal network, the management cli fails to connect to cluster and tells me to check if the cluster is operational, the log files looks exactly the same, netstat only changes from 0.0.0.0 to the specific address; hosts resolves to that ip address (DNS does not, but commenting out hosts and using the DNS'd ip address doesn't work either), anybody got an idea?
23:01 nickw joined #gluster
23:03 mynameisbruce__ joined #gluster
23:06 MattRM joined #gluster
23:12 robos joined #gluster
23:17 robos joined #gluster
23:19 bronaugh_ meeew: what ver?
23:22 meeew 3.3
23:24 meeew 3.3.1-ubuntu1~precise9 to be precise, via the recommended ppa
23:25 phox that doesn't have RDMA
23:25 phox or shouldn't
23:25 meeew might be, not really the issue though :)
23:31 * phox nods
23:31 phox bbl
23:33 bit4man joined #gluster
23:35 plarsen_ joined #gluster
23:41 duerF joined #gluster
23:49 andrewjs1edge joined #gluster
