IRC log for #gluster, 2014-08-06

All times shown according to UTC.

Time Nick Message
00:00 JoeJulian ~pasteinfo | azalime
00:00 glusterbot azalime: Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
00:01 T0aD joined #gluster
00:01 jbrooks joined #gluster
00:01 azalime
00:01 azalime Volume Name: websites
00:01 azalime Type: Replicate
00:01 azalime Volume ID: abc2d65c-d8e1-4a44-b18e-c56aa3b1715b
00:01 azalime Status: Started
00:01 azalime Number of Bricks: 1 x 2 = 2
00:01 azalime Transport-type: tcp
00:01 azalime Bricks:
00:01 JoeJulian @kick azalime
00:01 azalime was kicked by glusterbot: JoeJulian
00:02 azalime joined #gluster
00:03 azalime sorry made a mistake and pasted here instead of http://fpaste.org
00:03 glusterbot Title: New paste Fedora Project Pastebin (at fpaste.org)
00:03 JoeJulian It happens. No worries.
00:03 azalime http://ur1.ca/hx1tx
00:03 glusterbot Title: #123459 Fedora Project Pastebin (at ur1.ca)
00:04 azalime so websites does export through nfs but karma does not
00:06 JoeJulian The auth-allow lines are redundant. It defaults to allowing all anyway. Same is true for nfs.export-volumes. You can safely reset each of those options.
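A minimal sketch of the reset JoeJulian suggests, using the volume names from the conversation above and the standard CLI syntax (run from any server in the pool):
    gluster volume reset websites auth.allow
    gluster volume reset websites nfs.export-volumes
    gluster volume reset karma auth.allow
Running "gluster volume reset <volname>" with no option name returns every modified option on that volume to its default.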
00:06 JoeJulian Is it shared on *either* server?
00:07 lpabon joined #gluster
00:07 azalime what if i reset all the options on those volumes?
00:08 JoeJulian Could try, but I would be surprised if that solves it.
00:09 azalime http://ur1.ca/hx1yy
00:09 glusterbot Title: #123460 Fedora Project Pastebin (at ur1.ca)
00:10 JoeJulian Did you try mounting anyway?
00:10 azalime yes I get file not found
00:10 azalime showmount -e server ip shows only /websites
00:10 JoeJulian anything in /etc/exports?
00:11 azalime that file doesn't exist
00:11 JoeJulian check exports on 10.142... just to see if it's the same
00:12 JoeJulian what version of gluster are you running?
00:14 azalime Export list for 10.142.170.12:
00:14 azalime /websites *
00:15 azalime 3.5.1beta2
00:15 JoeJulian Do you have the volumes mounted, or can you bounce the nfs service?
00:16 azalime i have /websites mounted using glusterfs not nfs
00:17 JoeJulian Ok, cool. "pkill -f gluster/nfs" then "service glusterfs-server restart" (assuming ubuntu)
00:17 azalime how do i bounce nfs service?
00:20 sputnik13 joined #gluster
00:25 azalime i had to stop all volumes and restart to get that fixed, not sure what happened though
00:25 azalime thanks JoeJulian
00:26 JoeJulian weird.
01:34 bennyturns joined #gluster
01:40 Paul-C left #gluster
01:47 ilbot3 joined #gluster
01:47 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:54 sputnik13 joined #gluster
01:57 haomaiwa_ joined #gluster
01:58 lyang0 joined #gluster
02:20 haomaiwa_ joined #gluster
02:22 RameshN joined #gluster
02:25 bala joined #gluster
02:28 overclk joined #gluster
02:30 coredump joined #gluster
02:32 mshadle left #gluster
02:35 sputnik13 joined #gluster
02:40 gildub joined #gluster
02:57 coredump joined #gluster
03:07 dusmant joined #gluster
03:08 coredump joined #gluster
03:10 bharata-rao joined #gluster
03:36 kshlm joined #gluster
03:38 shubhendu_ joined #gluster
03:38 haomai___ joined #gluster
03:41 Peter1 joined #gluster
03:50 Humble joined #gluster
03:51 hchiramm_ joined #gluster
03:52 nbalachandran joined #gluster
03:54 nbalachandran joined #gluster
03:55 Peter1 when a replicated volume is being accessed heavily over nfs, all other nfs clients stall….
03:55 Peter1 on 3.5.1
03:55 Peter1 is that a bug?
03:55 lalatenduM joined #gluster
04:00 Peter1 is ubuntu 3.5.2 out yet?
04:04 Rydekull joined #gluster
04:06 itisravi joined #gluster
04:09 vimal joined #gluster
04:12 sputnik13 joined #gluster
04:15 ricky-ti1 joined #gluster
04:23 harish_ joined #gluster
04:27 kanagaraj joined #gluster
04:28 Humble joined #gluster
04:29 hchiramm_ joined #gluster
04:33 anoopcs joined #gluster
04:36 Rafi_kc joined #gluster
04:46 ndarshan joined #gluster
04:49 nbalachandran joined #gluster
04:53 ppai joined #gluster
04:57 jiffin joined #gluster
04:58 hagarth joined #gluster
04:59 overclk joined #gluster
05:00 coredumb joined #gluster
05:00 dusmant joined #gluster
05:07 rastar joined #gluster
05:07 nshaikh joined #gluster
05:07 spandit joined #gluster
05:11 prasanth_ joined #gluster
05:11 sputnik13 joined #gluster
05:13 sputnik13 joined #gluster
05:14 Peter joined #gluster
05:14 XpineX joined #gluster
05:15 firemanxbr joined #gluster
05:15 kdhananjay joined #gluster
05:17 sahina joined #gluster
05:18 prasanth_ joined #gluster
05:22 Guest25711 how do i turn off self-heal check at glusterfs client?
05:25 azar joined #gluster
05:27 rastar joined #gluster
05:32 ramteid joined #gluster
05:39 bala joined #gluster
05:41 karnan joined #gluster
05:50 dusmant joined #gluster
05:50 sahina joined #gluster
06:00 atalur joined #gluster
06:08 overclk joined #gluster
06:09 ppai joined #gluster
06:09 raghu joined #gluster
06:12 bmikhael joined #gluster
06:15 sac`away` joined #gluster
06:16 bala1 joined #gluster
06:16 navid__ joined #gluster
06:16 itisravi_ joined #gluster
06:16 spandit_ joined #gluster
06:16 karnan_ joined #gluster
06:16 kaushal_ joined #gluster
06:16 darshan joined #gluster
06:16 rtalur_ joined #gluster
06:16 vshankar joined #gluster
06:17 kdhananjay1 joined #gluster
06:17 shubhendu__ joined #gluster
06:17 rtalur__ joined #gluster
06:17 spandit__ joined #gluster
06:17 sac`awa`` joined #gluster
06:17 karnan__ joined #gluster
06:17 anoopcs joined #gluster
06:17 overclk joined #gluster
06:17 bala joined #gluster
06:17 itsravi joined #gluster
06:18 ndarshan joined #gluster
06:18 nshaikh joined #gluster
06:20 sputnik13 joined #gluster
06:21 atalur joined #gluster
06:23 dusmant joined #gluster
06:23 ppai joined #gluster
06:24 kumar joined #gluster
06:32 sahina joined #gluster
06:45 kshlm joined #gluster
06:45 bmikhael joined #gluster
06:53 sputnik13 joined #gluster
06:57 calum_ joined #gluster
07:02 sputnik13 joined #gluster
07:05 LebedevRI joined #gluster
07:05 bala joined #gluster
07:06 ekuric joined #gluster
07:08 sahina joined #gluster
07:08 JoeJulian Guest25711: "gluster volume set help" look for self-heal
07:08 Guest25711 This is Peter...
07:09 Guest25711 i recall u mentioned we can turn the self-heal check off?
07:09 Guest25711 can we do a delay heal?
07:10 Guest25711 it's like we have a client that has 100s of concurrent processes hitting a replicated volume and it just hangs the mount
07:10 Guest25711 it seems like the max number of concurrent writes the volume can do is around 16
07:10 ctria joined #gluster
07:11 Guest25711 like 16 process writing a 1G file
07:11 keytab joined #gluster
07:12 ekuric joined #gluster
07:13 dusmant joined #gluster
07:18 itisravi joined #gluster
07:22 rtalur__ joined #gluster
07:24 JoeJulian The default background-self-heal count is 16. So yes, turning off self-heal at the client would prevent that. The self-heal daemon would still heal the volume though. just remember, you're turning off even the check for a stale file. Data loss is likely.
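A sketch of how those knobs are set through the normal CLI (the volume name "myvol" is a placeholder; the exact option names can be confirmed with "gluster volume set help"):
    gluster volume set myvol cluster.background-self-heal-count 64
    # client-side self-heal can be switched off per type; the self-heal daemon still heals:
    gluster volume set myvol cluster.data-self-heal off
    gluster volume set myvol cluster.metadata-self-heal off
    gluster volume set myvol cluster.entry-self-heal off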
07:25 Guest25711 and i noticed a lot of open files on a replicated volume stay open even though the process is stopped
07:26 Guest25711 say i'm running a multithreaded iozone test and the iozone file is still marked as open even though the iozone process is already gone
07:26 Guest25711 so how often does the self-heal daemon heal the volume?
07:27 Guest25711 i assume u mean the glustershd on the server?
07:29 Guest25711 can i increase the background-self-heal ?
07:33 Guest25711 and how do i set that on the client??
07:35 JoeJulian gluster volume set help
07:35 Guest25711 client??
07:35 Guest25711 glusterfs-client ?
07:35 JoeJulian It's a volume setting.
07:35 Guest25711 oo….so have to do that on server?
07:36 Guest25711 what if i mount with a vol file from client?
07:36 JoeJulian oh geez...
07:36 JoeJulian Oh
07:36 Guest25711 ya
07:36 JoeJulian all bets are off then.
07:36 Guest25711 ?
07:37 nishanth joined #gluster
07:37 JoeJulian If you're writing your own volfiles, I have no idea what some of the changes will do.
07:37 Guest25711 hmm
07:37 JoeJulian We haven't really done that in about 4 years.
07:37 Guest25711 ya i have my own volfiles to use the cache on the client
07:37 Guest25711 O wow
07:38 JoeJulian What I would do, if I were trying to do what you're doing, is to create a scratch volume. Set the features you want to try, and see what it does to the vol file in /var/lib/glusterd/vols.
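A rough sketch of that scratch-volume workflow (the volume name and brick paths are placeholders):
    gluster volume create scratch replica 2 server1:/bricks/scratch server2:/bricks/scratch
    gluster volume start scratch
    gluster volume set scratch cluster.background-self-heal-count 64
    # then look at what the option did to the generated client volfile, typically:
    cat /var/lib/glusterd/vols/scratch/scratch-fuse.vol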
07:38 Guest25711 let me do that on my dev
07:41 partner joined #gluster
07:41 gothos joined #gluster
07:41 fsimonce joined #gluster
07:42 Guest25711 hmm
07:42 Guest25711 it updated the info
07:42 Guest25711 not the .vol
07:42 giannello joined #gluster
07:43 JoeJulian did you start the volume?
07:43 Guest25711 yes
07:43 JoeJulian huh, interesting
07:44 gothos Hello! I'm trying to use the quota command on a two node setup, but on one machine I get "Quota command failed". On both servers I got "failed to get the port number for remote subvolume. Please run 'gluster volume status' on server to see if brick process is running." in the log, but everything seems fine
07:44 gothos any idea what might cause it? I'm on a CentOS 6 with glusterfs 3.5.2-1
07:45 JoeJulian @ports
07:45 glusterbot JoeJulian: glusterd's management port is 24007/tcp and 24008/tcp if you use rdma. Bricks (glusterfsd) use 24009 & up for <3.4 and 49152 & up for 3.4. (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111 and 2049 since 3.4.
07:45 JoeJulian So I'm thinking iptables
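For reference, a hedged iptables sketch that opens the ports glusterbot lists for a 3.4+/3.5 server (widen the brick range to cover however many bricks the host runs; this is not a complete firewall policy):
    iptables -A INPUT -p tcp --dport 24007:24008 -j ACCEPT              # glusterd management (+rdma)
    iptables -A INPUT -p tcp --dport 49152:49160 -j ACCEPT              # brick ports, one per brick
    iptables -A INPUT -p tcp --dport 38465:38468 -j ACCEPT              # gluster NFS and NLM
    iptables -A INPUT -p tcp -m multiport --dports 111,2049 -j ACCEPT   # portmapper and NFS
    iptables -A INPUT -p udp --dport 111 -j ACCEPT                      # portmapper over UDP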
07:48 R0ok_ joined #gluster
07:49 dusmant joined #gluster
07:56 ppai joined #gluster
07:57 Pupeno joined #gluster
07:57 simulx2 joined #gluster
07:57 atalur joined #gluster
07:58 Guest25711 updated background-self-heal-count to 64….still hanging w/ 16 process..
07:58 Guest25711 39: volume sas03-replicate-1
07:59 Guest25711 40:     type cluster/replicate
07:59 Guest25711 41:     option background-self-heal-count 64
07:59 Guest25711 42:     subvolumes sas03-client-2 sas03-client-3
07:59 Guest25711 43: end-volume
07:59 itisravi_ joined #gluster
07:59 churnd joined #gluster
08:00 ekuric joined #gluster
08:00 gothos Is there maybe some way to see what glusterfs is doing at the moment? cause all the gluster processes are running at 100% on one of my two nodes and the other node is just idling
08:04 JoeJulian is one node the server and the other a printer?
08:04 JoeJulian that might explain it.
08:05 JoeJulian Why did this industry pick the word "node" to be their "smurf"?!?! Gah.
08:05 JoeJulian Anyway... I would check the logs. That would also seem logical if you're having a problem with connecting to one of your servers.
08:06 JoeJulian use netstat to check network connections
08:06 Guest25711 if we set cluster.self-heal-daemon off, does it mean it is a pure distribute volume?
08:06 JoeJulian Not at all.
08:07 Guest25711 when would the replication happen?
08:08 Guest25711 if Index directory crawl and automatic healing of files will not be performed
08:08 DJClean joined #gluster
08:33 ninkotech__ joined #gluster
08:39 ron-slc joined #gluster
08:40 glusterbot New news from newglusterbugs: [Bug 1075417] Spelling mistakes and typos in the glusterfs source <https://bugzilla.redhat.com/show_bug.cgi?id=1075417>
08:51 caiozanolla_ joined #gluster
08:52 karnan__ joined #gluster
08:52 lyang01 joined #gluster
08:52 edong23_ joined #gluster
08:53 dusmant joined #gluster
08:53 nishanth joined #gluster
08:53 itisravi joined #gluster
08:53 jiqiren joined #gluster
08:53 Intensity joined #gluster
08:53 anotheral joined #gluster
08:53 dblack joined #gluster
08:54 tom[] joined #gluster
08:54 Alex joined #gluster
08:54 nshaikh joined #gluster
08:54 ghghz joined #gluster
08:54 Diddi joined #gluster
08:54 basso joined #gluster
08:54 huleboer joined #gluster
08:55 prasanth|offline joined #gluster
08:55 wgao joined #gluster
08:56 atrius` joined #gluster
08:56 ekuric joined #gluster
08:58 ppai joined #gluster
09:00 nbalachandran joined #gluster
09:06 vimal joined #gluster
09:10 glusterbot New news from newglusterbugs: [Bug 1127140] memory leak <https://bugzilla.redhat.com/show_bug.cgi?id=1127140>
09:17 rastar joined #gluster
09:19 atalur joined #gluster
09:20 atalur joined #gluster
09:25 deepakcs joined #gluster
09:26 bharata-rao joined #gluster
09:27 dusmant joined #gluster
09:28 jiqiren joined #gluster
09:28 sahina joined #gluster
09:35 rjoseph joined #gluster
09:39 dusmant joined #gluster
09:40 glusterbot New news from newglusterbugs: [Bug 1127148] Regression test failure while running bug-918437-sh-mtime.t <https://bugzilla.redhat.com/show_bug.cgi?id=1127148>
09:45 rjoseph joined #gluster
09:56 caiozanolla joined #gluster
10:01 rastar joined #gluster
10:05 andreask joined #gluster
10:11 spandit__ joined #gluster
10:14 sputnik13 joined #gluster
10:16 nishanth joined #gluster
10:16 karnan__ joined #gluster
10:17 sahina joined #gluster
10:24 qdk joined #gluster
10:24 Slashman joined #gluster
10:37 eryc joined #gluster
10:39 MattAtL joined #gluster
10:44 ricky-ticky joined #gluster
10:46 rjoseph joined #gluster
10:52 jiqiren joined #gluster
10:58 ppai joined #gluster
11:01 mbukatov joined #gluster
11:05 suliba_ joined #gluster
11:09 diegows joined #gluster
11:11 atalur joined #gluster
11:14 siel joined #gluster
11:24 ninkotech__ joined #gluster
11:24 ninkotech_ joined #gluster
11:31 ira joined #gluster
11:31 ira rhs-smb: Please use the same call info as our normal daily standup.  I forgot to include it in the invite.
11:32 ira My apologies, wrong place.
11:40 glusterbot New news from newglusterbugs: [Bug 1123768] mem_acct : Check return value of xlator_mem_acct_init() <https://bugzilla.redhat.com/show_bug.cgi?id=1123768>
11:50 [ilin] joined #gluster
11:51 [ilin] i have gluster with two servers runnin 3.4.1 and I want to add another server running 3.5.1, however when i do "gluster volume add-brick db replica 3 g3:/bricks/db" it fails
11:51 [ilin] i tried with all servers running the same version and it works ok
11:52 [ilin] also 3.5 is compatible with 3.4 according to the docs, so what am i missing?
12:03 kkeithley I'm pretty sure we say 3.4 and 3.5 are compatible between server and clients. I'm sure we don't support mixed server versions, if only because we don't test that. And while we have deliberately kept the "data path" compatible, I don't believe the "control path" is compatible between versions.
12:04 kkeithley If 3.4.1 works on your other two servers, you should use it on the third server too.
12:04 [ilin] kkeithley: hm.. ok, i guess i can use 3.4 on the new server as well... but how does the rolling upgrade 3.4>3.5 work then?
12:05 kkeithley Or upgrade all your servers to 3.5.x
12:05 kkeithley with downtime
12:05 [ilin] kkeithley: yes that is the recommended way, but there is option b - rolling upgrades with no downtime
12:07 kkeithley maybe I'm wrong. I didn't think we had a rolling upgrade from 3.4 to 3.5
12:07 kkeithley for servers
12:08 [ilin] http://www.gluster.org/community/documentation/index.php/Upgrade_to_3.5
12:08 glusterbot Title: Upgrade to 3.5 - GlusterDocumentation (at www.gluster.org)
12:08 kkeithley hmmm, okay.
12:08 kkeithley I was wrong
12:09 [ilin] so i guess i cant just replace servers with newer version, i will have to do upgrade
12:10 anoopcs joined #gluster
12:12 kkeithley I think you've found a corner case. You're adding a replica and upgrading.  I'd start by adding your third server using 3.4.1, then do a rolling upgrade to 3.5
12:14 [ilin] kkeithley: yeah, i am spinning the VMs now. I dont want to do rolling upgrade with just two servers so I will test, adding a third one doing upgrade on all 3 and removing one if everything is OK.
12:15 [ilin] but it would have been awesome to just add a new version server, replicate files there and get rid of the old servers
12:17 hagarth joined #gluster
12:25 bene2 joined #gluster
12:27 B21956 joined #gluster
12:33 [ilin] when upgrading 3.4>3.5 i only need to run the pre/post scripts if I use quota, correct?
12:37 kkeithley I believe that's correct, according to the recipe
12:44 hchiramm [ilin], https://github.com/gluster/glusterfs/blob/master/doc/upgrade/quota-upgrade-steps.md
12:44 glusterbot Title: glusterfs/quota-upgrade-steps.md at master · gluster/glusterfs · GitHub (at github.com)
12:44 hchiramm refer that as well
12:46 [ilin] hchiramm: yes i read that... but i do not use quota so i can skip these steps
12:46 eryc joined #gluster
12:46 eryc joined #gluster
12:47 hchiramm [ilin], "when upgrading 3.4>3.5 i only need to run the pre/post scripts if I use quota, correct?" -> Isnt the query u had ?
12:50 [ilin] hchiramm: yes i should have been clearer
12:51 [ilin] i do not use quota
12:51 hchiramm ok .. nw !
12:51 [ilin] sorry for the mixup
12:51 hchiramm np
12:59 julim joined #gluster
13:02 kkeithley or maybe just whatever is currently on download.gluster.org. Wheezy I think
13:02 kkeithley let's see
13:03 kkeithley yeah, just wheezy for 3.5.1
13:03 kkeithley oops, wrong window
13:04 chirino joined #gluster
13:11 dusmant joined #gluster
13:18 bennyturns joined #gluster
13:18 diegows good morning
13:18 diegows I have an issue with xattrs with the security prefix
13:18 diegows touch testfile && setfattr -n security.NTACL -v foo testfile
13:19 diegows that command works in all the servers involved (glusterfs bricks and clients) locally
13:19 diegows but doesn't work in the mounted volume
13:19 diegows using an xattr name with other prefix works perfectly
13:26 ninkotech joined #gluster
13:27 diegows ??
13:34 tdasilva joined #gluster
13:36 skippy joined #gluster
13:51 ghghz joined #gluster
13:52 theron joined #gluster
13:54 ekuric joined #gluster
13:56 bit4man joined #gluster
13:56 itisravi joined #gluster
13:59 deepakcs joined #gluster
14:07 cristov joined #gluster
14:11 mojibake joined #gluster
14:15 wushudoin joined #gluster
14:20 mojibake joined #gluster
14:23 ndk joined #gluster
14:24 sahina joined #gluster
14:31 mortuar joined #gluster
14:32 theron joined #gluster
14:42 ghenry joined #gluster
14:45 itisravi joined #gluster
14:49 jbrooks joined #gluster
14:51 deepakcs joined #gluster
14:58 bene2 joined #gluster
14:58 rotbeard joined #gluster
15:02 Eco_ joined #gluster
15:03 richvdh joined #gluster
15:03 JustinClift *** Upstream Weekly GlusterFS Community Meeting is on NOW in #gluster-meeting on irc.freenode.net ***
15:03 JustinClift *** Upstream Weekly GlusterFS Community Meeting is on NOW in #gluster-meeting on irc.freenode.net ***
15:05 richvdh guys, this must be a common question, but I'm failing to find any answers to it: is it possible to persuade gluster to only listen on one interface (ie, to bind to a specific IP address)?
15:06 hagarth joined #gluster
15:11 harish_ joined #gluster
15:15 ninthBit joined #gluster
15:16 bala joined #gluster
15:17 rwheeler joined #gluster
15:19 ninthBit What is the best practice for the brick mount point directory owner:group and permissions?
15:42 skippy i'd like to know the pros and cons of adding new bricks, versus extending LVM on extant bricks.  Assuming the same underlying hardware in both scenarios, which is generally preferred?
15:46 lmickh joined #gluster
15:51 Chr1s1an t
15:54 overclk joined #gluster
15:56 mbukatov joined #gluster
16:15 dtrainor joined #gluster
16:15 ninthBit i have a distributed replica volume.  4 disks 2 nodes.  replica flow server1.brick1->server2.brick1 server2.brick2->server1.brick2.  now, on server1.brick1 i have a file that is size 0 but the same file in the same path exists on server2.brick2 but has the contents of a file.  server1.brick1 and server2.brick1 report the 0 sized file in the heal status and has been there for over 16 hours.    the file on server2.brick2->server1.brick2 i
16:16 ninkotech joined #gluster
16:16 dtrainor joined #gluster
16:17 ninthBit we are having stability issues with samba and gluster and are working to replace the nodes, but need to make sure the heal status is 0.  we think SQL 2014's backup stuff is wrecking things; with it turned off we are not killing the NAS
16:17 ninthBit once i have this done i will be working to replicate the issues in my test environment with sql 2014 and report back....
16:20 ndevos richvdh: no, it is not possible at the moment, gluster listens on all interfaces
16:21 richvdh ndevos: that is more-or-less the conclusion I had come to, actually, but thanks for confirming it.
16:21 ndevos richvdh: these ,,(ports) are used, in case you want to firewall them
16:21 glusterbot richvdh: glusterd's management port is 24007/tcp and 24008/tcp if you use rdma. Bricks (glusterfsd) use 24009 & up for <3.4 and 49152 & up for 3.4. (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111 and 2049 since 3.4.
16:21 richvdh it's a bit of a shame, as it would suit us perfectly (we have separate internal and external interfaces)
16:21 richvdh ndevos: thanks!
16:27 ndevos richvdh: you can request that feature if you file a bug for it, but I'm not sure what it'll take to get implemented
16:27 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
16:28 richvdh fair enough. To be honest, we need to do some firewall work anyway, so I think we'll work around it there. I was just hoping for a quick fix ;)
16:30 dtrainor joined #gluster
16:37 ninthBit ok, i have had to *fix* this file by first copying the good copy off the volume.  then i deleted the file. it cleaned up the file from both bricks and i put the file back.  this also cleaned up the heal status on it. i have another file in the same status that i am going to do the same thing for
16:37 ninthBit the volume heal did nothing to resolve the issue by the way....
16:38 ninkotech joined #gluster
16:38 ninthBit i have a file that has been in this status for over 3 days and it is only a few kb in size....
16:58 ninthBit JoeJulian: yesterday we were talking about the replace-brick and the different commands to use.  well, the documentation at https://github.com/gluster/glusterfs/blob/master/doc/admin-guide/en-US/markdown/admin_managing_volumes.md#migrating-volumes is using the start/wait/commit but the cached gluster-user post indicates this is deprecated?  http://webcache.googleusercontent.com/search?q=cache:ady2teHLbTEJ:www.gluster.org/pipermail/glust
16:58 glusterbot Title: glusterfs/admin_managing_volumes.md at master · gluster/glusterfs · GitHub (at github.com)
17:02 Guest25711 i see horrible read performance on replicated volume....
17:02 Guest25711 left #gluster
17:02 Peter4 joined #gluster
17:03 Peter4 I see horrible read performance on replicated volume….
17:13 deric_skibotn joined #gluster
17:18 sputnik13 joined #gluster
17:20 zerick joined #gluster
17:20 ninthBit does volume info list the bricks in alpha order or in the order they were configured?  i would like to verify the replica sequence. how do i do this?
17:22 Peter4 ninthBit: you can read the .vol files under /var/lib/glusterd/ to see how the translators are configured
17:22 Peter4 i am seeing super slow ls  wondering why
17:22 andreask joined #gluster
17:24 luckyinva joined #gluster
17:27 ninthBit ok, i see the contents of the vols/[volume]/info file. if this order of bricks is how gluster is using them i might have a mistake in the order of two bricks.  is it trivial to fix this with a distributed-replica volume?
17:27 ninthBit or should i shuffle this via using bricks?
17:28 Peter4 sure you can do that
17:41 jbrooks left #gluster
17:43 lmickh_ joined #gluster
17:48 luckyinva Im currently dealing with an "Unable to get lock for UUID" error.  Per my research a restart of gluster across all nodes should resolve this.  Currently I have attempted this fix 2 times and I am still faced with the same error: failed to acquire lock
17:49 luckyinva anyone here have any additional guidance / knowledge of this issue or resolution?
17:59 jbrooks joined #gluster
18:02 deric_skibotn left #gluster
18:02 ninthBit i think i understand the order of the bricks and how they make a replica-set.  what i have not figured out is how replica-sets work in the distributed-volume.  I understand files will be distributed between the replica-sets.  but how does this work in regards to the peers in the cluster.  does the replica-set brick order matter for managing work or data access to a specific peer?
18:04 ninthBit what i am wondering is would there be any difference between the following.  distributed replica setup.  replica 2 server1:/brick1 server2:/brick1 server1:/brick2 server2:/brick2  vs  replica 2 server1:/brick1 server2:/brick1 server2:/brick2 server1:/brick2  What i did was swap the order of the bricks for the second replica-set
18:06 and`_ joined #gluster
18:08 skippy ninthBit: I *think* the end result would be identical. you're putting the same bits on the same bricks.
18:09 skippy order of bricks within a replica set shouldn't matter: both replica pairs get the same replicated data.
18:09 skippy right?
18:09 skippy rather, both members of a replica pair
18:10 ninthBit i have been thinking that in a two peer distributed replica, each peer would be serving 1/2 of the distributed volume.
18:10 prasanth|offline joined #gluster
18:11 ninthBit then i was thinking that the brick order might hint at which peer would be primary for a replica-set, with the first disk in the replica-set being the "primary" and the second the "slave"
18:11 ninthBit i will try to find where i learned such information.....
18:11 hchiramm joined #gluster
18:11 ninthBit but, if gluster peers in the volume figure out on their own where files are pulled and collected then i can relax and continue. but right now i have my work stopped to double check this detail.
18:13 and` joined #gluster
18:13 skippy the clients and servers all use the DHT to figure out where to place files on bricks, based on the hash algorithm.
18:13 skippy I dont think any brick is "master" wihtin the replica set. they're identical.  Unless I'm very much mistaken.
18:16 ninthBit well, there is the time lag on replication and the file must be written to a peer that has one of the replica-set bricks attached.  i am trying to find out if i can make predictions on the peer load for a two peer distributed-replica(2) volume.  assuming the files split evenly among the replica-set volumes, would a single peer have to handle all the load, or how do the gluster peers spread the load to the other peers?  how do i know how to predict
18:17 ninthBit in the smallest factor a peer+brick will be acting as the master and will replicate the data to the other right?
18:17 ninthBit that is what i thought i learned: that replication is a late background operation, not sync
18:18 prasanth|offline joined #gluster
18:18 richvdh joined #gluster
18:18 skippy are you using the native protocol, or NFS?
18:19 ninthBit skippy: i'll answer you with how it is setup.  samba->glusterfs-client fuse mount->gluster volume.  so i guess native protocol ?
18:19 sac`away joined #gluster
18:19 hchiramm joined #gluster
18:19 ninthBit skippy: another server will be using the glusterfs-client directly to the client.  the samba is using the samba server pointing to the glusterfs-client mounts.  not using smb in gluster
18:20 skippy it's my (naive) understanding that the native protocol (aka FUSE) performs all the hashing client-side for writing to the volume.  It should be performing near-simultaneous writes to all of the selected bricks for the operation.
18:22 ninthBit skippy: that is news to me. would you happen to know where i might be able to double check that online?  i would like to look into that more if the client really is sending the same bytes to all peers in the replica-set
18:22 skippy i may have learned that from in here.  but let me see if I can confirm that in any meaningful way
18:22 geewiz joined #gluster
18:23 ninthBit yeah, that would greatly help in server planning. then it would highlight that a two peer distributed-replica(2) setup is not the best .. even more so if the client is sending to both servers.  i had thought the mirroring happened in the background between the gluster peers.
18:24 skippy "Replication is synchronous"
18:26 skippy http://rhsummit.files.wordpress.com/2014/04/black_w_1650_red_hat_storage_server_administration_deep_dive1.pdf
18:26 skippy slide 32
18:26 ninthBit thanks i am looking at it now
18:27 skippy Files written synchronously to replica peers
18:27 skippy Files read synchronously, but ultimately serviced by the first responder
18:28 skippy geo replication is async
18:29 _Bryan_ joined #gluster
18:29 skippy ninthBit: how do you determine that a two peer distributed-replica(2) setup is not the best ?
18:30 skippy it doesn't buy you much, but it doens't really harm anything, does it?
18:30 skippy also, if I read this correctly, adding a second pair of bricks to a replica volume automatically makes it distributed replicated?
18:30 skippy https://github.com/gluster/glusterfs/blob/master/doc/admin-guide/en-US/markdown/admin_setting_volumes.md
18:30 glusterbot Title: glusterfs/admin_setting_volumes.md at master · gluster/glusterfs · GitHub (at github.com)
18:31 ninthBit i mean it could be better.  not that it is wrong; as we have found, it works.  what i am thinking is how to utilize the network better; perhaps two peers is not the best setup since both peers have to handle the load of the whole distributed volume
18:31 ninthBit that was the thought. would that be correct thinking?
18:31 skippy https://github.com/gluster/glusterfs/blob/master/doc/admin-guide/en-US/markdown/admin_setting_volumes.md#creating-distributed-replicated-volumes
18:31 glusterbot Title: glusterfs/admin_setting_volumes.md at master · gluster/glusterfs · GitHub (at github.com)
18:32 ninthBit in the docs' 4 peer example, two servers form a replica-set and the distributed volume is between the two groups.
18:32 skippy i'm still learning this stuff, too, ninthBit !
18:34 ninthBit after seeing some of the issues with our production server, i would like to see what i can contribute to the testing area of glusterfs.  it would be a win/win .. will have to see how i can budget the time. right now just a dream
18:39 ninthBit skippy: i think i found something to what you are talking about. but i wonder why it says glusterfs client "configured" to replicate.  i guess it means it was "configured" from the information about the volume? http://gluster.org/community/documentation/index.php/Gluster_3.1:_Understanding_Replication
18:39 glusterbot Title: Gluster 3.1: Understanding Replication - GlusterDocumentation (at gluster.org)
18:40 mortuar joined #gluster
18:41 ninthBit skippy: the synchronous write could lead to why i have seen odd file owner results between bricks in the same replica-set.
18:46 JoeJulian Peter4: "I see horrible read performance..." how does it compare with your engineered performance prediction?
18:47 cvdyoung joined #gluster
18:48 Peter4 it seems like the concurrent access we talked about last night was read
18:48 JoeJulian ninthBit: That would be awesome if you could help. You would be amazed at how small the entire open-source community is. It's fun to become a part of it, even for doing something as non-technical as bs'ing with other admins on IRC.
18:48 Peter4 i made some tunings on the vol file for glusterfs client
18:48 Peter4 now can do up to 90 iozone process
18:49 Peter4 doing write, read, random read and write
18:49 Peter4 but when the concurrent process kicked off the df hangs
18:49 JoeJulian ninthBit: when you created your volume, "gluster volume create myvol /replica N/ blah blah" that's what made it replicated. When you put "replica 2". If you had omitted that, or used replica 1, that would have created a distribute-only volume.
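A sketch of the difference (hostnames and brick paths are placeholders):
    # distribute-only volume: no replica keyword
    gluster volume create myvol server1:/bricks/b1 server2:/bricks/b1
    # distributed-replicated: bricks are paired into replica sets in the order listed
    gluster volume create myvol replica 2 server1:/bricks/b1 server2:/bricks/b1 server1:/bricks/b2 server2:/bricks/b2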
18:50 geewiz JoeJulian: Hi! You recommended yesterday doing a statedump of my client leaking memory. I've done that successfully. How can I now determine where it's leaking?
18:50 JoeJulian Peter4: Now this is starting to get interesting. :) Are you blogging any of this per chance?
18:50 Peter4 not yet
18:50 Peter4 where should i blog it?
18:50 JoeJulian geewiz: add your dump to bug 1127140 and let's see if it's the same leak.
18:50 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1127140 unspecified, unspecified, ---, gluster-bugs, NEW , memory leak
18:51 kumar joined #gluster
18:51 JoeJulian Peter4: Wordpress seems popular.
18:51 Peter4 :)
18:51 Peter4 will do
18:51 JoeJulian I use mezzanine on my own server.
18:51 ninthBit is it correct to think of a replica-set as a subvolume to the distributed volume?
18:51 Peter4 ic
18:52 * JoeJulian needs to work on that over his upcoming vacation...
18:52 JoeJulian ninthBit: precisely. replicas are literally subvolumes to distribute.
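That layering shows up in the generated client volfile; a trimmed sketch, reusing the sas03 names from the excerpt pasted earlier (options omitted):
    volume sas03-replicate-0
        type cluster/replicate
        subvolumes sas03-client-0 sas03-client-1
    end-volume
    volume sas03-replicate-1
        type cluster/replicate
        subvolumes sas03-client-2 sas03-client-3
    end-volume
    volume sas03-dht
        type cluster/distribute
        subvolumes sas03-replicate-0 sas03-replicate-1
    end-volume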
18:55 misuzu joined #gluster
18:55 misuzu left #gluster
18:56 geewiz JoeJulian: I've added my dump to bug 1127140.
18:56 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1127140 unspecified, unspecified, ---, gluster-bugs, NEW , memory leak
18:57 daxatlas joined #gluster
18:57 JoeJulian saw that, thanks. Pranith is working on this one so we won't see updates until daylight in Bangalore.
19:07 XpineX joined #gluster
19:09 rwheeler joined #gluster
19:16 ctria joined #gluster
19:16 Pupeno joined #gluster
19:19 theron joined #gluster
19:25 supersix joined #gluster
19:27 semiosis Peter4: uploading 3.5.2 packages to the ubuntu-glusterfs-3.5 ppa now, should be published soon.  i added glfsheal but haven't tested it yet.  please let me know if you run into any problems
19:28 Peter4 Cool!!! thanks!!
19:28 semiosis yw
19:29 diegows semiosis, that upgrade could help with this http://supercolony.gluster.org/pipermail/gluster-users/2014-August/018325.html :)
19:29 glusterbot Title: [Gluster-users] Setting security.NTACL xattrs fails (at supercolony.gluster.org)
19:30 calum_ joined #gluster
19:42 ninthBit i have executed my replace-brick and i see files in heal failed status.  no files in split brain. not sure on the direction to fix heal fail
19:43 ninthBit ok, i found something that said this could be a passing status
19:43 ninthBit i will wait and see if it heals ..... waiting for paint to dry...
19:45 diegows upgrading anyway
19:46 diegows semiosis, ping :)
19:48 dbruhn joined #gluster
19:59 supersix hi all, i have a question regarding the glusterd.vol file in 3.5.1 CentOS 6.5
20:00 supersix I'm setting up a two node replicated cluster and would like to know what the difference is between /etc/glusterfs/glusterd.vol and /var/lib/glusterfs/vols/<volume>/*.vol
20:00 supersix I want to put my configs in puppet (not using glusterfs-puppet), but since the files seem to be dynamically updated by glusterd, I'm not sure which are the right files to configure and maintain.
20:00 supersix any link to docs or example glusterd.vol files would be awesome thanks!
20:01 JoeJulian /etc/glusterfs/glusterd.vol is the management daemon configuration and is user-configurable. The stuff in /var/lib/glusterd is state data created by the management daemon.
20:02 JoeJulian The only one of those I would recommend interfering with via puppet would be /var/lib/glusterd/glusterd.info to ensure the uuid remains the same after being recreated.
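For context, glusterd.info is a small key=value file; a sketch of what it typically contains (the UUID shown is just a placeholder for this server's peer UUID):
    # cat /var/lib/glusterd/glusterd.info
    UUID=0c6d7a2e-1111-2222-3333-444455556666
    operating-version=2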
20:03 ninthBit perhaps this is what i am seeing in my healfailed?  https://bugzilla.redhat.com/show_bug.cgi?id=864963
20:03 glusterbot Bug 864963: low, medium, ---, vsomyaju, POST , Heal-failed and Split-brain messages are not cleared after resolution of issue
20:04 JoeJulian diegows: Did you file a bug report for that?
20:04 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
20:06 theron joined #gluster
20:06 ninthBit if i am how do i verify the healfailed files have actually been resolved?  manually go into each brick and verify the file contents exist?
20:06 JoeJulian sure. If they're not in use, you can check hashes.
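A minimal way to do that by hand (the brick paths below are placeholders for the real brick roots); the two replica copies should produce identical checksums:
    ssh server1 md5sum /bricks/brick1/path/to/file
    ssh server2 md5sum /bricks/brick1/path/to/file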
20:10 diegows JoeJulian, no... looks like a bug?
20:11 JoeJulian Oh, I don't know. You were asking semiosis if the release fixed something, then referred to an email.
20:15 ndk joined #gluster
20:22 coredump joined #gluster
20:31 luckyinva joined #gluster
20:35 qdk joined #gluster
20:38 sputnik13 joined #gluster
20:39 sputnik13 joined #gluster
20:42 glusterbot New news from newglusterbugs: [Bug 1010068] enhancement: Add --wait switch to cause glusterd to stay in the foreground until child services are started <https://bugzilla.redhat.com/show_bug.cgi?id=1010068>
20:44 theron joined #gluster
20:48 semiosis diegows: pong
20:49 diegows semiosis, any idea about this ? http://supercolony.gluster.org/pipermail/gluster-users/2014-August/018325.html
20:49 glusterbot Title: [Gluster-users] Setting security.NTACL xattrs fails (at supercolony.gluster.org)
20:50 diegows any hope that 3.5.2 could fix it :)
20:50 semiosis diegows: what filesystem are you using for your bricks?
20:50 diegows xfs
20:51 semiosis sorry no idea about that
20:52 diegows looks like I should file a bug report :P
20:52 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
21:13 theron_ joined #gluster
21:14 skippy left #gluster
21:18 plarsen joined #gluster
21:29 bennyturns joined #gluster
21:58 [ilin] left #gluster
22:03 y4m4 JustinClift:  you there?
22:07 MattAtL left #gluster
22:13 glusterbot New news from newglusterbugs: [Bug 1127457] Setting security.* xattrs fails <https://bugzilla.redhat.com/show_bug.cgi?id=1127457>
22:16 luckyinva joined #gluster
22:28 ws2k3 hello, when i use the glusterfs client, is that only to access a server or does a glusterfs client have a local copy of the data ?
22:50 bala joined #gluster
23:03 semiosis ws2k3: the data lives on the servers (in bricks) but a client is required to access the data, for reading or writing.  all access goes through the client
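A minimal sketch of mounting with the native client (hostname, volume name and mountpoint are placeholders; the named server only hands the client the volfile, after which the client talks to all bricks directly):
    mount -t glusterfs server1:/myvol /mnt/myvol
    # roughly equivalent fstab entry:
    # server1:/myvol  /mnt/myvol  glusterfs  defaults,_netdev  0 0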
23:05 supersix joined #gluster
23:05 supersix hi, anyone know where i can find docs on the cli
23:05 supersix specifically, how to get/show the value of an option
23:05 semiosis ,,(options)
23:05 glusterbot See config options and their defaults with 'gluster volume set help' and see also this page about undocumented options: http://goo.gl/mIAe4E
23:06 semiosis supersix: ^
23:06 semiosis you can see the current value of an option, if it has been modified, with 'gluster volume info'
23:06 semiosis @forget options
23:06 glusterbot semiosis: The operation succeeded.
23:06 supersix thanks!
23:06 semiosis @learn options as See config options and their defaults with 'gluster volume set help'; you can see the current value of an option, if it has been modified, with 'gluster volume info'; see also this page about undocumented options: http://goo.gl/mIAe4E
23:07 glusterbot semiosis: The operation succeeded.
23:07 semiosis @options
23:07 glusterbot semiosis: See config options and their defaults with 'gluster volume set help'; you can see the current value of an option, if it has been modified, with 'gluster volume info'; see also this page about undocumented options: http://goo.gl/mIAe4E
23:07 semiosis supersix: yw
23:07 supersix when you change options will gluster update the /var/lib/glusterd/* files accordingly?
23:07 semiosis yes, and also reconfigure clients dynamically
23:09 supersix excellent, just tested in my dev and it worked out as explained
23:12 semiosis great
23:12 supersix last question if you have time, how can i change all log files locations to a new path
23:12 supersix centos 6.5 gluster 3.5.1
23:12 semiosis no idea.  maybe make a symlink from /var/log/gluster?
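A hedged sketch of that symlink workaround on CentOS (the target path is a placeholder; the default log directory is /var/log/glusterfs, and gluster should be stopped while the directory is moved):
    service glusterd stop
    mv /var/log/glusterfs /data/logs/glusterfs
    ln -s /data/logs/glusterfs /var/log/glusterfs
    service glusterd start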
23:12 supersix i changed primary log files via /etc/sysconfig/$program but not sure how to change the rest
23:12 supersix ok thanks
23:13 supersix later, thanks again
23:17 Peter4 help! just upgraded to 3.5.2 and an gluster node can not start up!!
23:21 Peter4 E [store.c:408:gf_store_handle_retrieve] 0-: Path corresponding to /var/lib/glusterd/vols/sas01/info, returned error: (No such file or directory)
23:21 coredump joined #gluster
23:24 T0aD joined #gluster
23:31 ccha2 joined #gluster
23:32 Peter4 fixed….some /var/lib/glusterd/vols/ gone after upgrade and restart!!!!
23:32 Peter4 bugs?
23:41 gildub joined #gluster
23:42 luckyinva joined #gluster
