
IRC log for #gluster, 2014-08-22


All times shown according to UTC.

Time Nick Message
00:20 ira joined #gluster
00:40 plarsen joined #gluster
00:45 LHinson joined #gluster
00:49 ir8 joined #gluster
01:01 topshare joined #gluster
01:07 topshare_ joined #gluster
01:14 vimal joined #gluster
01:17 agen7seven joined #gluster
01:28 topshare joined #gluster
01:38 haomaiwa_ joined #gluster
01:54 sas_ joined #gluster
01:58 harish joined #gluster
02:21 RameshN joined #gluster
02:24 jjahns hey if i want to take advantage of libgfapi with nova and libvirt, do i only take advantage of it if i setup cinder or will nova tell the environment to use gluster
02:24 pradeepto_ joined #gluster
02:24 pradeepto_ joined #gluster
02:25 bala joined #gluster
02:34 VerboEse joined #gluster
02:37 topshare joined #gluster
02:54 ir8 joined #gluster
03:21 Ark joined #gluster
03:52 glusterbot New news from newglusterbugs: [Bug 1132766] ubuntu ppa: 3.5 missing hooks and files for new geo-replication <https://bugzilla.redhat.com/show_bug.cgi?id=1132766>
03:54 shubhendu_ joined #gluster
03:54 itisravi joined #gluster
04:13 RameshN joined #gluster
04:14 ndarshan joined #gluster
04:17 hagarth joined #gluster
04:22 glusterbot New news from newglusterbugs: [Bug 991084] No way to start a failed brick when replaced the location with empty folder <https://bugzilla.redhat.com/show_bug.cgi?id=991084>
04:28 ndarshan polycom123
04:28 ndarshan oops !! sorry
04:30 DV__ joined #gluster
04:30 Rafi_kc joined #gluster
04:31 sputnik13 joined #gluster
04:32 anoopcs joined #gluster
04:37 prasanth_ joined #gluster
04:38 ppai joined #gluster
04:40 nishanth joined #gluster
04:45 nbalachandran joined #gluster
04:46 jiffin joined #gluster
04:46 shubhendu_ joined #gluster
04:49 kdhananjay joined #gluster
04:49 ramteid joined #gluster
04:50 meghanam joined #gluster
04:50 meghanam_ joined #gluster
04:54 atalur joined #gluster
05:02 haomai___ joined #gluster
05:07 spandit joined #gluster
05:16 sas_ joined #gluster
05:18 saurabh joined #gluster
05:19 Paul-C joined #gluster
05:19 Paul-C left #gluster
05:22 Ark joined #gluster
05:22 glusterbot New news from newglusterbugs: [Bug 1132496] Tests execution results in tests failure and system hang <https://bugzilla.redhat.com/show_bug.cgi?id=1132496> || [Bug 1132796] client3_3_readdir - crash on NULL local <https://bugzilla.redhat.com/show_bug.cgi?id=1132796>
05:24 nbalachandran_ joined #gluster
05:25 aravindavk joined #gluster
05:34 lalatenduM joined #gluster
05:35 lalatenduM JoeJulian, ping, regarding your feed back on bug triage, IMO it is an excellent point. wanted to talk to u abt it
05:36 aravindavk joined #gluster
05:37 JoeJulian sure
05:38 JoeJulian I'm here for a short while, while I reconfigure some networks. Then it's back to vacation. :D
05:39 JoeJulian lalatenduM: ^
05:39 lalatenduM JoeJulian, np, I have a small question
05:39 JoeJulian 42
05:40 JoeJulian (it's a small answer. let's see if it matches the question)
05:40 lalatenduM JoeJulian, the mail is just addressed to me, i think it was intentional
05:40 lalatenduM do you mind send sending it to all :)
05:41 lalatenduM bcz  I think this concern needs to be addressed
05:41 JoeJulian It was. You called for volunteers and I didn't want to be long-winded. I can, but it'll need to be a much longer response.
05:41 lalatenduM the same cause may be stopping other community members to do the triage
05:42 JoeJulian No, I think the majority are afraid they just don't know enough to be of any help.
05:42 lalatenduM JoeJulian, yeah thats fine, this is something need to be fixed
05:42 lalatenduM JoeJulian, I mean your concern
05:42 JoeJulian Anyway, sure. I'll be the guy. I do that from time-to-time. :D
05:43 lalatenduM JoeJulian, :) thanks
05:43 JoeJulian Eventually I won't be welcome in Bangalore, Westford, or the SF Bay area.
05:43 lalatenduM JoeJulian, thanks a lot for feedback , I kind of guess the concern , but you confirmed it
05:43 JoeJulian (just kidding)
05:44 lalatenduM JoeJulian, haha , seriously , I dont think so :)
05:44 lalatenduM yeah
05:45 lalatenduM JoeJulian, I will do my bits , will push to change the status-quo :)
05:45 JoeJulian cool
05:49 nshaikh joined #gluster
05:52 rastar joined #gluster
05:56 karnan joined #gluster
06:04 deepakcs joined #gluster
06:12 JoeJulian JustinClift: Hah! I blame lala for that one. :D
06:13 Paul-C joined #gluster
06:14 siXy joined #gluster
06:14 decimoe left #gluster
06:18 kumar joined #gluster
06:43 plarsen joined #gluster
06:49 sputnik13 joined #gluster
06:55 Paul-C left #gluster
06:56 ricky-ti1 joined #gluster
06:59 haomaiwang joined #gluster
07:00 ctria joined #gluster
07:04 JustinClift JoeJulian: Yeah.  I'ved several old gluster-devel addresses yesterday/today. ;)
07:04 JustinClift s/several/rejected several/
07:04 glusterbot What JustinClift meant to say was: JoeJulian: Yeah.  I'ved rejected several old gluster-devel addresses yesterday/today. ;)
07:05 JustinClift Hmmm
07:05 JustinClift abc
07:05 JustinClift s/b/d/g
07:05 glusterbot JustinClift: Error: u's/b/d/g abc' is not a valid regular expression.
07:05 JustinClift No /g
07:05 JustinClift ;)
07:05 JoeJulian getting closer to tracking down a huge ass memory leak in 3.4.5...
07:05 JustinClift Cool. :)
07:05 JoeJulian 19g in 10 minutes...
07:06 sputnik13 joined #gluster
07:06 JustinClift Ouch
07:06 * JustinClift feels a 3.4.6 coming on
07:06 JoeJulian me too
07:06 JoeJulian and I'm on vacation... :D
07:06 JustinClift Heh
07:08 sputnik13 joined #gluster
07:11 sputnik13 joined #gluster
07:16 sputnik13 joined #gluster
07:19 siXy joined #gluster
07:21 aravindavk joined #gluster
07:25 hagarth joined #gluster
07:33 keytab joined #gluster
07:37 prasanth_ joined #gluster
07:42 agen7seven joined #gluster
07:45 andreask joined #gluster
07:50 Pupeno joined #gluster
07:53 glusterbot New news from newglusterbugs: [Bug 1128820] Unable to ls -l NFS mount from OSX 10.9 client on pool created with stripe <https://bugzilla.redhat.com/show_bug.cgi?id=1128820>
07:54 nbalachandran joined #gluster
07:55 anoopcs joined #gluster
08:02 liquidat joined #gluster
08:17 siXy joined #gluster
08:21 dastar joined #gluster
08:24 nbalachandran joined #gluster
08:40 bala joined #gluster
08:47 roeme^scs joined #gluster
08:56 Thilam joined #gluster
09:02 dastar joined #gluster
09:03 Slashman joined #gluster
09:04 hagarth joined #gluster
09:15 nickmoeck joined #gluster
09:16 lalatenduM joined #gluster
09:19 andreask1 joined #gluster
09:20 dastar joined #gluster
09:42 andreask joined #gluster
09:57 Pupeno_ joined #gluster
10:19 diegows joined #gluster
10:21 surabhi joined #gluster
10:22 shilpa_ joined #gluster
10:28 sputnik13 joined #gluster
10:28 ira joined #gluster
10:28 ppai joined #gluster
10:32 surabhi joined #gluster
10:35 prasanth_ joined #gluster
10:42 shubhendu_ joined #gluster
10:43 agen7seven joined #gluster
10:45 shilpa_ joined #gluster
10:56 bala joined #gluster
10:58 sputnik13 joined #gluster
11:00 mojibake joined #gluster
11:10 todakure joined #gluster
11:15 Philambdo joined #gluster
11:17 andreask joined #gluster
11:30 deepakcs joined #gluster
11:33 ekuric joined #gluster
11:33 gmcwhistler joined #gluster
11:35 bene2 joined #gluster
11:37 kryl joined #gluster
11:37 kryl hi
11:37 glusterbot kryl: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
11:38 kryl I want to know if it's better to mount and use a big directory in one share of gluster ? or if it's better to split it into different little ones ? or is it the same ?
11:39 meghanam joined #gluster
11:39 meghanam_ joined #gluster
11:39 kryl does it make sense ?
11:39 chirino joined #gluster
11:43 andreask joined #gluster
11:44 hypnotortoise joined #gluster
11:49 hypnotortoise hi all. I have a replication issue with glusterfs v.3.5.2 Setup: 2 nodes in same subnet, each containing 1 brick. volume created with create homes2 replica 2 transport tcp host1:[brick1] host2:[brick2]. volume is working ok, until I create a big file on homes2 (via mount glusterfs) and restart one of both nodes
11:50 hypnotortoise when node2 comes back online, heal is started (seeing in glustershd.log) but doesn't finish
11:50 hypnotortoise [2014-08-22 11:31:46.043060] I [afr-self-heal-common.c:2868:afr_log_self_heal_completion_status] 0-homes2-replicate-0: foreground data self heal is successfully completed, data self heal from homes2-client-0 to sinks homes2-client-1, with 536870912 bytes on homes2-client-0, 194934784 bytes on homes2-client-1, data - Pending matrix: [ [ 0 39152 ] [ 1 1 ] ] on <gfid:c9b20950-53a9-43f1-a30c-5981302ec4e9>
11:51 hypnotortoise that's the last message I see regarding healing
11:52 hypnotortoise strangely enough, heal cli information seems to indicate that healing was successful
11:53 hypnotortoise [root@gfs02:...b/puppet/tmp/gluster] # gluster volume heal homes2 info healed Gathering list of healed entries on volume homes2 has been successful  Brick gfs01.sclocal:/srv/gluster/bricks/brick1/homes2 Number of entries: 1 at                    path on brick ----------------------------------- 2014-08-22 11:31:46 /foo/bigfile  Brick gfs02.sclocal:/srv/gluster/bricks/brick2/homes2 Number of entries: 1 at                    pat
11:53 glusterbot hypnotortoise: ---------------------------------'s karma is now -1
11:57 itisravi hypnotortoise: Does `gluster volume heal <volname> info` shows zero entries?, If yes, then there are no pending heals.
11:58 hypnotortoise yes it does
11:59 hypnotortoise but bigfile was not healed on node2
11:59 hypnotortoise it still contains same filesize, when I took the node down
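A minimal sketch of the two-node replica setup and the heal checks discussed in this thread; the volume name and brick paths follow hypnotortoise's description, the client mount point is illustrative:

    # create and start a 2-way replicated volume (run on either node)
    gluster volume create homes2 replica 2 transport tcp \
        gfs01.sclocal:/srv/gluster/bricks/brick1/homes2 \
        gfs02.sclocal:/srv/gluster/bricks/brick2/homes2
    gluster volume start homes2

    # mount it on a client with the native FUSE client
    mount -t glusterfs gfs01.sclocal:/homes2 /mnt/homes2

    # after a node returns, list pending heals; zero entries means nothing left to heal
    gluster volume heal homes2 info

    # trigger a full self-heal manually if files still look out of sync
    gluster volume heal homes2 full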
12:05 ndevos kryl: it depends on how you use the directories and its contents, directory listings are expensive, fewer files in a directory make the listing more usable
12:06 ndevos kryl: however, if you know the location of the file (saved in a db or something), you dont need directory listings in the normal case, so if there are more files in a directory, it does not matter too much
12:07 kryl it appears that there is like 200GB of 6k files... theses files are continuously moving they are not static.
12:07 ndevos kryl: one of the most efficient patters seems to be creating new directories based on time, when the current directory is full, create the next one
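A rough sketch of the time-bucketed layout ndevos describes, so that no single directory grows without bound; the mount point and per-hour granularity are illustrative assumptions:

    #!/bin/bash
    # drop each incoming file into a directory named after the current UTC hour,
    # e.g. /mnt/gluster/incoming/2014-08-22T12, keeping any single listing cheap
    MOUNT=/mnt/gluster/incoming
    BUCKET="$MOUNT/$(date -u +%Y-%m-%dT%H)"
    mkdir -p "$BUCKET"      # no-op if the bucket already exists
    cp "$1" "$BUCKET/"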
12:07 kryl and there is like 8 nodes who are connected to the server shares
12:08 ndevos hmm, and those files are used on other systems that do the processing?
12:08 kryl yes
12:09 LebedevRI joined #gluster
12:09 ndevos well, a rename on glusterfs is inefficient... the files are distributed by hashing the filename, the outcome defines which brick should get the file
12:10 Pupeno joined #gluster
12:10 ndevos a rename keeps the file on the same brick (to reduce bandwidth between servers), but when hashing the new file, the hash/target brick migh be incorrect
12:11 ndevos that cause the client to do read a link-file (similar to symlink), and then it knows which brick contains the actual file
12:11 kryl well, to use one large brick or many little ones is not really different ?
12:12 ndevos in your case, larger and fewer bricks will probably more efficent - less likely that the rename introduces the link-file indirection
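The link-file indirection ndevos mentions can be seen directly on the bricks; as far as I know the DHT pointer files are zero-byte, carry only the sticky bit (mode 1000), and have a trusted.glusterfs.dht.linkto xattr naming the subvolume that holds the real data. A sketch with an illustrative brick path:

    # list DHT link-files on a brick: zero-byte entries with only the sticky bit set
    find /data/brick1 -type f -perm 1000 -size 0

    # show which subvolume a given link-file points at
    getfattr -n trusted.glusterfs.dht.linkto -e text /data/brick1/path/to/file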
12:15 kryl ok
12:15 kryl what about the filesystem to use on the server
12:16 kryl /dev/xvdg1 on /data type xfs (rw,noatime,attr2,delaylog,nobarrie​r,logbufs=8,logbsize=256k,noquota)
12:17 kryl not sure if it will impact really
12:17 itisravi_ joined #gluster
12:21 Thilam|work joined #gluster
12:37 LHinson joined #gluster
12:40 LHinson1 joined #gluster
12:44 B21956 joined #gluster
12:51 simulx joined #gluster
12:57 bennyturns joined #gluster
13:02 theron joined #gluster
13:02 Ark joined #gluster
13:03 nbalachandran joined #gluster
13:09 stickyboy ,,ports
13:09 stickyboy ,ports
13:10 stickyboy ,iptables
13:10 stickyboy grrr
13:12 hypnotortoise hmm I think I found the error. is atime on the bricks required for replication of partially created files on a node?
13:12 kmai007 joined #gluster
13:15 kmai007 has anyone had experience with rebalance, how long it took, and if there were any roadblocks
13:16 kmai007 i have a 6-node setup now and ran the rebalance for distr-rep. yesterday
13:17 kmai007 this morning some servers say completed across the peers, and some say in progress
13:18 kmai007 http://fpaste.org/127699/71347214/
13:18 glusterbot Title: #127699 Fedora Project Pastebin (at fpaste.org)
13:18 edward1 joined #gluster
13:19 kmai007 my gluster version is 3.4.2-1
13:22 partner_ kmai007: it so much depends on the amount of data. i am believing myself i cannot ever complete a rebalance with the amount i have and even if i would it would take probably months..
13:23 partner_ its not exactly superfast and i guess there is nothing to help speeding it up either, probably by design (not to interfere with the real data usage)
13:23 partner_ i will try it out once i get my version updated to one that is no longer leaking memory for the rebalance
13:24 partner_ currently i can only run it for 2-3 days
13:24 kmai007 thanks partner_
13:25 partner_ don't have any "scientific" data on how long it might take but for what i've seen on our side - it will take a lot of time with lots of data
13:25 kmai007 i have about 800G
13:27 partner_ it'll probably finish soon for you then
13:28 partner_ i have roughly 15-25 TB on each server and i have dozen of them..
13:28 roeme^scs hypnotortoise: http://lists.nongnu.org/archive/html/gluster-devel/2008-03/msg00149.html
13:28 glusterbot Title: Re: [Gluster-devel] noatime mount option on the client? (at lists.nongnu.org)
13:28 kmai007 if the rebalance, would it possible that 4 of the 6 be completed, and the other 2 in progress?
13:28 ndevos @ports
13:28 glusterbot ndevos: glusterd's management port is 24007/tcp (also 24008/tcp if you use rdma). Bricks (glusterfsd) use 49152 & up since 3.4.0 (24009 & up previously). (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111 and 2049 since 3.4.
13:28 ndevos ~ports | stickyboy
13:28 glusterbot stickyboy: glusterd's management port is 24007/tcp (also 24008/tcp if you use rdma). Bricks (glusterfsd) use 49152 & up since 3.4.0 (24009 & up previously). (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111 and 2049 since 3.4.
13:29 ndevos stickyboy: or, you can try ,,(ports)
13:29 glusterbot stickyboy: glusterd's management port is 24007/tcp (also 24008/tcp if you use rdma). Bricks (glusterfsd) use 49152 & up since 3.4.0 (24009 & up previously). (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111 and 2049 since 3.4.
13:29 ndevos maybe there are more :)
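An iptables sketch matching the port list glusterbot gives above, for a 3.4+ server; widen the brick range to cover however many bricks the host actually carries:

    # glusterd management (24008 only needed for rdma)
    iptables -A INPUT -p tcp --dport 24007:24008 -j ACCEPT
    # brick processes: 49152 and up, one port per brick since 3.4.0
    iptables -A INPUT -p tcp --dport 49152:49160 -j ACCEPT
    # built-in NFS server plus NLM
    iptables -A INPUT -p tcp --dport 38465:38468 -j ACCEPT
    # rpcbind/portmap and NFS
    iptables -A INPUT -p tcp --dport 111 -j ACCEPT
    iptables -A INPUT -p udp --dport 111 -j ACCEPT
    iptables -A INPUT -p tcp --dport 2049 -j ACCEPT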
13:31 partner_ kmai007: i guess thats possible if those two would happen to have more unbalanced files than the rest. i quickly ran a rebalance on a 4-brick system and it rebalanced one host longer than others
13:32 kmai007 thanks for sharing partner_
13:32 kmai007 i figured if it was trying to EVEN distribute
13:32 kmai007 then all the nodes should still be in progress
13:33 partner_ i don't know exactly how it works but if we assume each host checks if the files its supposed NOT to have there and then move them to proper bricks and once done its done. ie. other will take care of the "incoming" files
13:33 partner_ based on the hash
13:34 partner_ at least i get my bricks very unbalanced due to fact they are full and lots of files needs to be written to "wrong" locations
13:35 kmai007 understood
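For reference, the rebalance commands behind kmai007's paste, with an illustrative volume name; status is reported per node, which is why some peers can show completed while others are still in progress:

    # start moving files onto the bricks their hashes now map to
    gluster volume rebalance myvol start

    # per-node progress: files scanned, files moved, failures, status
    gluster volume rebalance myvol status

    # stop it if it gets in the way of production traffic
    gluster volume rebalance myvol stop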
13:37 julim joined #gluster
13:48 rotbeard joined #gluster
13:56 LHinson joined #gluster
13:57 dastar joined #gluster
14:16 wushudoin joined #gluster
14:26 bennyturns joined #gluster
14:28 topshare joined #gluster
14:32 cmtime joined #gluster
14:40 liquidat joined #gluster
14:41 sputnik13 joined #gluster
14:57 topshare joined #gluster
15:05 lmickh joined #gluster
15:14 kmai007 can i only run 1 rebalance per volume? within a gluster pooL?
15:15 jobewan joined #gluster
15:24 harish joined #gluster
15:54 jobewan joined #gluster
15:57 glusterbot New news from newglusterbugs: [Bug 1133073] High memory usage by glusterfs processes <https://bugzilla.redhat.com/show_bug.cgi?id=1133073>
16:14 nbalachandran joined #gluster
16:24 _Bryan_ joined #gluster
16:50 sputnik13 joined #gluster
16:52 screamingbanshee joined #gluster
17:06 siXy joined #gluster
17:29 LHinson joined #gluster
17:31 MacWinner joined #gluster
17:32 LHinson1 joined #gluster
17:35 _dist joined #gluster
17:37 jhc76 joined #gluster
17:37 bennyturns joined #gluster
17:38 bennyturns joined #gluster
17:39 hagarth joined #gluster
17:45 daMaestro joined #gluster
17:48 ramteid joined #gluster
18:05 talntid left #gluster
18:12 Pupeno joined #gluster
18:33 kumar joined #gluster
18:34 Humble joined #gluster
18:46 diegows joined #gluster
19:08 Pupeno_ joined #gluster
19:09 PeterA joined #gluster
19:10 PeterA any one has tried nfs-ganesha?
19:10 PeterA or any userspace nfs server to proxy out glusterfs ?
19:14 Jamoflaw does anyone know the recommended disk size to ram ratio? I'm guessing this is also dependent on the type of files being served. What kind of basic server spec do you guys use in production?
19:32 _dist joined #gluster
19:34 chirino joined #gluster
19:51 pdrakewe_ joined #gluster
19:57 JoeJulian kmai007: correct. Only 1 rebalance at a time.
20:04 kmai007 JoeJulian: hey man i have a success story to share
20:04 kmai007 i was able to complete my production migration on tuesday and all is well!!!!
20:17 zerick joined #gluster
20:48 zerick joined #gluster
22:06 todakure left #gluster
22:16 ir8 joined #gluster
23:27 jbrooks left #gluster
23:29 topshare joined #gluster
23:38 marmalodak joined #gluster
23:55 mjrosenb ugh... I cannot use the gluster cli because python is acting up.
23:55 mjrosenb and I don't know python well enough to fix the issue myself :-(
23:57 mjrosenb can I just disable that at runtime?
23:57 mjrosenb JoeJulian: you seem to know everything :-)
