
IRC log for #gluster, 2014-07-28


All times shown according to UTC.

Time Nick Message
01:02 cjhanks joined #gluster
01:20 plarsen joined #gluster
01:23 tg2 JoeJulian - what setting can I use client-side to limit gluster native client memory usage?
01:23 tg2 it's using like 10/12GB of my memory
01:24 tg2 using the echo > drop cache doesn't reduce it
01:33 tg2 nvm gluster was fine, was kernel misbehaving
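(A rough sketch of the commands under discussion, assuming a volume named "myvol": echoing into drop_caches only frees the kernel's page/dentry/inode cache, while memory held by the glusterfs client process itself mostly belongs to its caching translators, which can be shrunk or disabled per volume.)

    sync; echo 3 > /proc/sys/vm/drop_caches                # free kernel page/dentry/inode cache on the client
    gluster volume set myvol performance.cache-size 64MB   # shrink the io-cache translator's cache ("myvol" is a placeholder)
    gluster volume set myvol performance.io-cache off      # or disable the client-side io-cache entirely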
01:39 B21956 joined #gluster
01:56 haomaiwa_ joined #gluster
01:57 haomai___ joined #gluster
02:15 sputnik13 joined #gluster
02:19 lyang0 joined #gluster
02:22 karnan joined #gluster
02:28 bharata-rao joined #gluster
02:40 firemanxbr joined #gluster
02:47 haomaiwang joined #gluster
03:02 haomai___ joined #gluster
03:08 Dillon joined #gluster
03:08 Dillon left #gluster
03:25 gildub joined #gluster
03:26 sjm joined #gluster
03:29 haomaiwang joined #gluster
03:31 kdhananjay joined #gluster
03:38 haomai___ joined #gluster
03:40 haomaiwa_ joined #gluster
03:41 mkzero joined #gluster
03:44 saurabh joined #gluster
03:46 spandit joined #gluster
03:57 haomai___ joined #gluster
03:58 cjhanks joined #gluster
04:06 prasanth_ joined #gluster
04:06 haomaiwa_ joined #gluster
04:08 sjm left #gluster
04:15 XpineX joined #gluster
04:22 haomaiw__ joined #gluster
04:23 haomaiw__ joined #gluster
04:25 an joined #gluster
04:26 haomai___ joined #gluster
04:33 nishanth joined #gluster
04:33 anoopcs joined #gluster
04:35 meghanam joined #gluster
04:42 kdhananjay joined #gluster
04:45 jiffin joined #gluster
04:49 dusmant joined #gluster
04:50 anoopcs joined #gluster
04:50 sputnik13 joined #gluster
04:53 ramteid joined #gluster
04:54 anoopcs joined #gluster
04:55 Rafi_kc joined #gluster
04:55 psharma joined #gluster
04:57 anoopcs joined #gluster
05:05 anoopcs joined #gluster
05:07 anoopcs joined #gluster
05:11 anoopcs joined #gluster
05:11 anoopcs joined #gluster
05:30 firemanxbr joined #gluster
05:35 kdhananjay joined #gluster
05:36 hchiramm joined #gluster
05:38 haomaiwa_ joined #gluster
05:53 haomaiw__ joined #gluster
05:59 aravindavk joined #gluster
06:00 XpineX_ joined #gluster
06:00 dusmant joined #gluster
06:01 haomaiwa_ joined #gluster
06:03 cjhanks joined #gluster
06:08 Philambdo joined #gluster
06:13 haomaiwang joined #gluster
06:19 kdhananjay joined #gluster
06:20 gEEbusT joined #gluster
06:23 sputnik13 joined #gluster
06:23 haomaiw__ joined #gluster
06:24 shylesh__ joined #gluster
06:36 meghanam joined #gluster
06:36 nishanth joined #gluster
06:48 ricky-ti1 joined #gluster
06:56 ctria joined #gluster
06:58 psharma joined #gluster
07:03 sputnik13 joined #gluster
07:06 raghu joined #gluster
07:11 nbalachandran joined #gluster
07:13 aravindavk joined #gluster
07:14 TvL2386 joined #gluster
07:19 Humble joined #gluster
07:32 keytab joined #gluster
07:32 fsimonce joined #gluster
07:44 itisravi joined #gluster
07:50 nbalachandran joined #gluster
07:54 aravindavk joined #gluster
07:54 meghanam joined #gluster
07:57 liquidat joined #gluster
08:01 calum_ joined #gluster
08:05 ekuric joined #gluster
08:13 meghanam joined #gluster
08:17 haomaiwa_ joined #gluster
08:26 an joined #gluster
08:31 haomaiw__ joined #gluster
08:35 richvdh joined #gluster
08:37 pdrakeweb joined #gluster
08:38 jiqiren joined #gluster
08:39 mibby joined #gluster
08:43 shylesh__ joined #gluster
08:43 tryggvil joined #gluster
08:45 al joined #gluster
08:45 glusterbot New news from newglusterbugs: [Bug 1123768] mem_acct : Check return value of xlator_mem_acct_init() <https://bugzilla.redhat.com/show_bug.cgi?id=1123768>
08:46 haomaiwa_ joined #gluster
08:49 haomai___ joined #gluster
08:53 tryggvil joined #gluster
08:58 stickyboy I'm in the matrix...
08:58 stickyboy Just seen some super inception stuff in glustershd.log
09:00 vimal joined #gluster
09:00 haomaiwa_ joined #gluster
09:14 yosafbridge joined #gluster
09:17 Norky joined #gluster
09:17 haomaiw__ joined #gluster
09:29 ekuric joined #gluster
09:39 tryggvil joined #gluster
09:47 haomaiwa_ joined #gluster
09:52 LebedevRI joined #gluster
10:03 LebedevRI joined #gluster
10:15 glusterbot New news from newglusterbugs: [Bug 1005786] NFSv4 support <https://bugzilla.redhat.com/show_bug.cgi?id=1005786>
10:17 stickyboy Ok, just posted to the mailing list about my weird split-brain issue... I think it's split brain, but not sure.
10:18 keytab joined #gluster
10:21 haomaiwa_ joined #gluster
10:21 bgupta joined #gluster
10:23 stickyboy Say I have a directory I don't want.  And it's split brain.  Can I just kill the self-heal daemon, delete it on all replica bricks, then start glustershd again?
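(Before touching the bricks it is worth confirming what gluster itself considers split-brain; a minimal check, with "myvol" as a placeholder volume name.)

    gluster volume heal myvol info              # entries still pending heal
    gluster volume heal myvol info split-brain  # entries the self-heal daemon has flagged as split-brain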
10:36 haomaiwa_ joined #gluster
10:38 LebedevRI joined #gluster
10:45 mbukatov joined #gluster
10:45 glusterbot New news from newglusterbugs: [Bug 1123816] [barrier] Barrier error messages needs betterment <https://bugzilla.redhat.com/show_bug.cgi?id=1123816>
10:47 haomaiwa_ joined #gluster
10:52 LebedevRI joined #gluster
10:55 an joined #gluster
10:58 Guest45045 joined #gluster
11:00 Slashman joined #gluster
11:07 haomaiwang joined #gluster
11:08 LebedevRI joined #gluster
11:16 kdhananjay joined #gluster
11:20 social joined #gluster
11:25 edward1 joined #gluster
11:37 qdk joined #gluster
11:40 meghanam joined #gluster
11:43 siel joined #gluster
12:04 Guest45045 joined #gluster
12:06 diegows joined #gluster
12:15 prasanth_ joined #gluster
12:43 sjm joined #gluster
12:44 DV joined #gluster
12:48 siXy joined #gluster
12:57 shylesh__ joined #gluster
12:59 ctria joined #gluster
13:03 bala joined #gluster
13:05 Humble joined #gluster
13:07 tdasilva joined #gluster
13:11 siel joined #gluster
13:15 julim joined #gluster
13:27 _Bryan_ joined #gluster
13:33 ramteid joined #gluster
13:37 nueces joined #gluster
13:37 ndk joined #gluster
13:41 chirino joined #gluster
13:48 bennyturns joined #gluster
13:50 Humble joined #gluster
13:55 julim joined #gluster
13:56 tdasilva joined #gluster
13:57 plarsen joined #gluster
14:02 plarsen joined #gluster
14:04 B21956 joined #gluster
14:09 caiozanolla JoeJulian, i see I have the option to just let the fs self-heal; it will take lots of time. you told me network speed will increase as soon as the healing of files already present on the impaired node is complete and files absent on the impaired node are copied. ok. i'm replicating over the amazon network, so do I have the option of snapshotting the bricks and mounting them on the 2nd node to avoid network replication? is this a feasible solution?
14:10 anoopcs joined #gluster
14:11 Humble joined #gluster
14:13 caiozanolla JoeJulian, the idea is to snapshot all bricks on the 1st node and replace all bricks on the 2nd node, is this even remotely possible?
14:15 anoopcs left #gluster
14:17 georgem2 joined #gluster
14:19 georgem2 JoeJulian: as promised, these are the instructions I followed to mount a Gluster volume sitting behind NAT-ed IP addresses: http://paste.openstack.org/show/LuCfJM5RXe2AAFxBSMtj/
14:19 glusterbot Title: Paste #LuCfJM5RXe2AAFxBSMtj | LodgeIt! (at paste.openstack.org)
14:20 caiozanolla JoeJulian, depending on the amount of post operations on the files, attributes, etc., it would be so much faster. I can snapshot all 7 1TB bricks in one or 2 days; transferring volumes to the other zone is also very fast if done by AWS systems.
14:21 jobewan joined #gluster
14:25 brettnem joined #gluster
14:30 wushudoin joined #gluster
14:37 DV joined #gluster
14:45 aravindavk joined #gluster
15:04 anoopcs1 joined #gluster
15:13 xleo joined #gluster
15:13 xleo_ joined #gluster
15:13 xleo_ left #gluster
15:27 jbrooks joined #gluster
15:35 lmickh joined #gluster
15:38 an joined #gluster
15:41 Peter2 joined #gluster
15:48 giannello joined #gluster
15:48 rwheeler joined #gluster
15:51 tdasilva joined #gluster
16:02 daMaestro joined #gluster
16:22 dtrainor_ joined #gluster
16:26 MacWinner joined #gluster
16:27 LebedevRI joined #gluster
16:32 chirino joined #gluster
16:41 diegows joined #gluster
16:41 an joined #gluster
16:47 Peter2 good morning
16:47 Peter2 http://pastie.org/9427155
16:47 glusterbot Title: #9427155 - Pastie (at pastie.org)
16:47 Peter2 what does 0-sas02-marker: dict is null mean?
16:49 bennyturns Peter2, it's prolly because the getxattr failed
16:49 Peter2 O....
17:10 sputnik13 joined #gluster
17:11 chirino joined #gluster
17:20 JoeJulian dict is null because it wasn't defined. It wasn't defined as a result of GF_PROTOCOL_DICT_SERIALIZE failing in server-rpc-fops.c:787:server_getxattr_cbk  which causes the error reported in 796.
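(For anyone wanting to follow that trail in the source, the server-side fop callbacks live under xlators/protocol/server/ in the glusterfs tree; a quick way to find the spot, assuming a checkout matching the running version.)

    grep -n GF_PROTOCOL_DICT_SERIALIZE xlators/protocol/server/src/server-rpc-fops.c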
17:20 chirino joined #gluster
17:37 MacWinner joined #gluster
17:47 glusterbot New news from newglusterbugs: [Bug 1123950] Rename of a file from 2 clients racing and resulting in an error on both clients <https://bugzilla.redhat.com/show_bug.cgi?id=1123950>
17:52 kumar joined #gluster
17:55 ndk joined #gluster
18:01 georgem2 JoeJulian: as promised, these are the instructions I followed to mount a Gluster volume sitting behind NAT-ed IP addresses: http://paste.openstack.org/show/LuCfJM5RXe2AAFxBSMtj/
18:01 glusterbot Title: Paste #LuCfJM5RXe2AAFxBSMtj | LodgeIt! (at paste.openstack.org)
18:02 JoeJulian Thanks georgem2
18:02 JoeJulian I'll see about getting that re-posted at gluster.org
18:06 doo joined #gluster
18:07 ricky-ticky joined #gluster
18:08 systemonkey joined #gluster
18:25 shubhendu joined #gluster
18:26 LebedevRI joined #gluster
18:36 edward1 joined #gluster
18:39 diegows joined #gluster
19:07 lmickh joined #gluster
19:13 mortuar joined #gluster
19:29 clyons joined #gluster
19:34 daMaestro joined #gluster
19:34 daMaestro joined #gluster
19:36 ghenry joined #gluster
19:51 Peter3 joined #gluster
19:52 Peter3 i keep having this directory showing wrong quota
19:52 Peter3 how can i figure out what file has wrong quota?
20:00 firemanxbr joined #gluster
20:01 JoeJulian Hmm, good question.
20:02 JoeJulian I would start by looking at extended attributes.
20:02 Peter3 how?
20:02 JoeJulian I also wonder if there's any difference between a sparse file and a not-sparse one wrt quota.
20:02 Peter3 totally
20:03 Peter3 i am using NFS with quota on replication and notice nfs seems part of the healing
20:03 JoeJulian @extended attributes
20:03 glusterbot JoeJulian: (#1) To read the extended attributes on the server: getfattr -m .  -d -e hex {filename}, or (#2) For more information on how GlusterFS uses extended attributes, see this article: http://hekafs.org/index.php/2011/04/glusterfs-extended-attributes/
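(In practice that means running getfattr against the directory on each brick and comparing the marker/quota keys; a sketch with /bricks/brick1/somedir as a placeholder brick path. The exact quota accounting xattr names vary a little between releases.)

    getfattr -m . -d -e hex /bricks/brick1/somedir                           # dump all xattrs, including trusted.glusterfs.quota.*
    getfattr -n trusted.glusterfs.quota.size -e hex /bricks/brick1/somedir   # the aggregated size the quota marker has recorded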
20:03 Peter3 so when healing failed, i see error on nfs log
20:03 Peter3 and i wonder if heal failed affected quota as well
20:03 JoeJulian Yeah, I started looking into the error you posted and commenting, then I realized you'd left.
20:04 JoeJulian ... so I went back to work... :)
20:04 Peter3 oic sorry
20:04 JoeJulian No biggie
20:05 Peter3 what do u think ?
20:05 JoeJulian I probably should have been working anyway.
20:05 Peter3 i'm tempted to get rid of NFS
20:05 Peter3 build an NFS gateway and mount gfs from there
20:05 JoeJulian Can you try turning off io-cache and read-ahead?
20:05 Peter3 how?
20:16 daMaestro joined #gluster
20:18 bennyturns Peter3, gluster v set $volname performance.readahead off
20:19 bennyturns Peter3, I may be off, you can see a list of all the volume settings with gluster v set help
20:19 JoeJulian btw... that's kind-of an rtfm question.
20:19 Peter3 let me try
20:19 bennyturns Option: performance.read-ahead
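(Putting the corrected option name together with the earlier suggestion, the two settings look like this; "myvol" is a placeholder volume name.)

    gluster volume set myvol performance.read-ahead off
    gluster volume set myvol performance.io-cache off
    gluster volume set help    # list all settable options and their defaults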
20:23 JoeJulian Those were the only translators that are on the server side that set op_error to -1. +1 is EPERM, but maybe there's something there. Unless you're using a non-standard filesystem, I have to believe the permission error is coming from inside a translator.
20:24 Peter3 i'm using XFS
20:42 _dist joined #gluster
20:47 coredump joined #gluster
20:52 Lyfe left #gluster
21:01 sputnik13 joined #gluster
21:17 bennyturns joined #gluster
21:30 MacWinner joined #gluster
21:37 caiozanolla joined #gluster
21:47 stickyboy JoeJulian: I'm just about to clear some errant split brain directory from a brick (non-existent on the replica).
21:47 stickyboy JoeJulian: Should I kill the self-heal daemon while this is in progress?
21:48 stickyboy Actually, reading your 3.3 blog post it seems you stat'd on the FUSE mount afterwards, implying you did not kill glustershd.
21:49 stickyboy Great. :D
21:49 stickyboy http://joejulian.name/blog/fixing-split-brain-with-glusterfs-33/
21:49 glusterbot Title: Fixing split-brain with GlusterFS 3.3 (at joejulian.name)
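(Roughly, the procedure in that post: pick the copy you want to discard, remove it from that brick along with its .glusterfs entry, then stat the path through a client mount so self-heal recreates it from the good replica. All paths below are placeholders; for a directory the .glusterfs entry is a symlink named after the gfid rather than a hardlink.)

    getfattr -n trusted.gfid -e hex /bricks/brick1/path/to/bad-entry   # note the gfid of the bad copy
    rm -rf /bricks/brick1/path/to/bad-entry                            # remove the bad copy from that brick only
    rm -f /bricks/brick1/.glusterfs/<aa>/<bb>/<full-gfid>              # its .glusterfs link; path built from the gfid's first two byte pairs (placeholder)
    stat /mnt/myvol/path/to/bad-entry                                  # trigger self-heal from a FUSE client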
21:58 dtrainor_ joined #gluster
22:16 firemanxbr joined #gluster
22:29 caiozanolla joined #gluster
22:30 coredump joined #gluster
22:32 rotbeard joined #gluster
22:48 glusterbot New news from newglusterbugs: [Bug 1124066] incorrect run directory for samba vfs EPEL RPMS <https://bugzilla.redhat.com/show_bug.cgi?id=1124066>
23:24 xoritor joined #gluster
23:25 xoritor JoeJulian, thanks for your help the other day
23:25 dtrainor_ joined #gluster
23:26 xoritor anyone here using the bd-xlator with libvirt+qemu
23:26 cfeller joined #gluster
23:29 xoritor by the way if you remove the "file" from the glusterfs mount it removes the logical volume as well
23:30 xoritor the only way i have found to access the VM on the lvm backed (bd-xlator) device is via the fuse mount :-(
23:33 xoritor i have tried a few options
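(For comparison, with the plain file-backed gluster driver rather than the bd xlator, qemu can reach an image over libgfapi without going through the FUSE mount; a sketch assuming a volume "vmvol" on host "gluster1" and a purely illustrative image path.)

    qemu-img info gluster://gluster1/vmvol/images/vm1.qcow2
    qemu-system-x86_64 -m 1024 -drive file=gluster://gluster1/vmvol/images/vm1.qcow2,if=virtio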
23:36 dtrainor_ joined #gluster
23:41 doo joined #gluster
23:57 gildub joined #gluster
