
IRC log for #gluster, 2014-08-12


All times shown according to UTC.

Time Nick Message
00:07 zerick joined #gluster
00:23 bene joined #gluster
00:45 sjm left #gluster
00:46 bala joined #gluster
01:11 plarsen joined #gluster
01:29 bala joined #gluster
01:31 fignew joined #gluster
01:31 fignew joined #gluster
01:47 ilbot3 joined #gluster
01:47 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:50 harish_ joined #gluster
01:50 haomaiwa_ joined #gluster
01:51 haomaiwa_ joined #gluster
01:58 haomaiw__ joined #gluster
02:00 overclk joined #gluster
02:12 _Bryan_ joined #gluster
02:21 hagarth1 joined #gluster
02:34 sputnik13 joined #gluster
02:38 shubhendu__ joined #gluster
02:39 haomaiwa_ joined #gluster
02:39 bharata-rao joined #gluster
02:44 coredump joined #gluster
02:59 haomaiw__ joined #gluster
03:24 haomaiwa_ joined #gluster
03:24 haomaiw__ joined #gluster
03:31 shubhendu__ joined #gluster
03:46 nbalachandran joined #gluster
03:47 Humble joined #gluster
03:47 hchiramm_ joined #gluster
03:53 bmikhael joined #gluster
03:53 bmikhael are there any tutorials on how to write translators?
03:54 JoeJulian check Jeff Darcy's former project site: hekafs.org
03:55 JoeJulian There's also a sample rot13 translator in the source tree.
03:56 JoeJulian @lucky translator 101
03:56 glusterbot JoeJulian: http://www.101translations.com/
03:56 JoeJulian @meh
03:56 glusterbot JoeJulian: I'm not happy about it either
03:57 JoeJulian And http://gluster.org/community/documentation/index.php/Arch/A_Newbie's_Guide_to_Gluster_Internals
03:57 glusterbot Title: Arch/A Newbie's Guide to Gluster Internals - GlusterDocumentation (at gluster.org)
03:59 itisravi joined #gluster
04:03 dusmant joined #gluster
04:05 bmikhael joined #gluster
04:15 jiku joined #gluster
04:16 jiku hi there..
04:17 jiku we were using gluster 3.3 in a 2 node setup..
04:17 jiku both act as gluster server and client together..
04:17 jiku last month we upgraded to gluster 3.5 from the official centos repo..
04:17 jiku since then, we have been facing performance issues with gluster..
04:17 jiku the server goes to high load at times.. at times, files are not in sync and are inaccessible..
04:18 JoeJulian My first guess would be firewall
04:18 JoeJulian @ports
04:18 glusterbot JoeJulian: glusterd's management port is 24007/tcp and 24008/tcp if you use rdma. Bricks (glusterfsd) use 24009 & up for <3.4 and 49152 & up for 3.4. (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111 and 2049 since 3.4.
04:19 jiku JoeJulian, there is firewall running on both servers.. it accepts all traffic between the both nodes..
04:19 jiku do you think i should disable it and monitor ?
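
For reference, the @ports factoid above translates roughly into iptables rules like these on a 3.4+/3.5 server. This is only a sketch: the upper bound of the brick-port range is an arbitrary example, since one port is allocated per brick.

    iptables -A INPUT -p tcp --dport 24007:24008 -j ACCEPT             # glusterd management (24008 only with rdma)
    iptables -A INPUT -p tcp --dport 49152:49200 -j ACCEPT             # brick ports for 3.4+, one per brick
    iptables -A INPUT -p tcp --dport 38465:38468 -j ACCEPT             # gluster NFS and NLM
    iptables -A INPUT -p tcp -m multiport --dports 111,2049 -j ACCEPT  # rpcbind/portmap and NFS
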
04:20 nishanth joined #gluster
04:21 spandit joined #gluster
04:22 nshaikh joined #gluster
04:22 JoeJulian Would seem like a logical test.
04:22 shubhendu__ joined #gluster
04:23 bmikhael joined #gluster
04:26 bmikhael i've followed the tutorial at hekafs.org but i was not able to get it to work
04:26 bmikhael as i've found that the makefile in the current version of gluster is totally different from the one at hefs.org
04:27 jiku JoeJulian, i have disabled firewalls now..
04:27 bmikhael hekafs.org *
04:27 jiku JoeJulian, monitoring the performance now..
04:29 jiku JoeJulian, have a look at the logs when trying to mount the gluster..
04:29 jiku http://pastebin.com/aLkHXRvr
04:29 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
04:31 jiku http://fpaste.org/124825/14078178/
04:31 glusterbot Title: #124825 Fedora Project Pastebin (at fpaste.org)
04:31 jiku the UNLINK statements are coming even now..
04:32 JoeJulian stop deleting files that don't exist?
04:33 JoeJulian wow, yuck. That application actually downloads the captcha to disk for each iteration?
04:33 JoeJulian That's not going to scale well.
04:36 ricky-ti1 joined #gluster
04:37 Rafi_kc joined #gluster
04:37 jiku JoeJulian, :) not for each iteration actually..
04:37 anoopcs joined #gluster
04:37 jiku whenever the page with the captcha is called.. and it looks like there is a cronjob which is deleting captcha images..
04:37 jiku no clue what the developer is doing there.
04:37 anoopcs joined #gluster
04:38 jiku JoeJulian, but these issues never occurred in the older version.. wondering why
04:49 JoeJulian More logging added maybe?
04:49 JoeJulian one sure-fire way to tell...
04:50 kdhananjay joined #gluster
04:52 ramteid joined #gluster
04:53 aravindavk joined #gluster
04:53 ndarshan joined #gluster
04:54 JoeJulian Nope, that was logged back in 3.3 as well.
04:54 JoeJulian Perhaps it's not healed? Maybe do a heal...all
04:54 JoeJulian jiku: ^
04:55 jiku hmm.
04:56 atalur joined #gluster
04:57 atinmu joined #gluster
04:57 bala joined #gluster
04:58 gildub joined #gluster
05:00 jiku JoeJulian, followed this http://gluster.org/community/documentation/index.php/Gluster_3.2:_Triggering_Self-Heal_on_Replicate
05:00 jiku and in both webservers, can see few file not found (image files)
05:01 jiku http://fpaste.org/124827/14078196/
05:01 glusterbot Title: #124827 Fedora Project Pastebin (at fpaste.org)
05:02 jiku http://fpaste.org/124828/14078197/
05:02 glusterbot Title: #124828 Fedora Project Pastebin (at fpaste.org)
05:02 JoeJulian probably deleted by the time your stat got to it.
05:02 JoeJulian I was referring, by the way, to the cli heal command. "gluster volume heal $vol full"
05:03 jiku ok
05:03 JoeJulian shouldn't functionally be all that different though.
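
For reference, the CLI heal commands JoeJulian is referring to, with $vol standing in for the volume name:

    gluster volume heal $vol full    # crawl the volume and queue anything out of sync for self-heal
    gluster volume heal $vol info    # list entries still pending heal
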
05:05 ppai joined #gluster
05:07 hagarth joined #gluster
05:08 kkeithley joined #gluster
05:09 bfoster joined #gluster
05:10 prasanth_ joined #gluster
05:14 sas joined #gluster
05:14 saurabh joined #gluster
05:23 karnan joined #gluster
05:24 deepakcs joined #gluster
05:28 lalatenduM joined #gluster
05:33 rastar joined #gluster
05:37 raghu joined #gluster
05:38 lyang0 joined #gluster
05:39 DV_ joined #gluster
05:42 bmikhael joined #gluster
05:46 msvbhat joined #gluster
05:49 jiku JoeJulian, performed a volume heal on both the server and then volume stop/start
05:49 DV_ joined #gluster
05:50 jiku mounted volume to the client
05:50 jiku when traffic arrives, the load is 1.5 - 2.0..
05:51 jiku another website, 12G in size, running on 3.3 gluster runs with no load at all..
05:51 jiku 3.5 gluster is taking care of just 150 MB of data
05:54 overclk joined #gluster
05:56 ndarshan joined #gluster
05:57 karnan joined #gluster
05:58 bala joined #gluster
06:01 dusmant joined #gluster
06:02 JustinClift joined #gluster
06:03 kshlm joined #gluster
06:04 msvbhat joined #gluster
06:05 dblack joined #gluster
06:05 kkeithley joined #gluster
06:05 bennyturns joined #gluster
06:06 portante joined #gluster
06:08 rturk|afk joined #gluster
06:10 mbukatov joined #gluster
06:14 overclk joined #gluster
06:17 aravindavk joined #gluster
06:18 jiku joined #gluster
06:19 Zahra joined #gluster
06:36 nshaikh joined #gluster
06:43 ricky-ti1 joined #gluster
06:49 aravindavk joined #gluster
06:49 overclk joined #gluster
06:54 ctria joined #gluster
07:02 ekuric joined #gluster
07:07 rastar joined #gluster
07:08 shubhendu__ joined #gluster
07:11 keytab joined #gluster
07:14 glusterbot New news from newglusterbugs: [Bug 1123294] [FEAT] : provide an option to set glusterd log levels other than command line flag <https://bugzilla.redhat.com/show_bug.cgi?id=1123294>
07:16 itisravi joined #gluster
07:22 nbalachandran joined #gluster
07:28 fsimonce joined #gluster
07:28 jiku joined #gluster
07:34 deepakcs joined #gluster
07:38 ndarshan joined #gluster
07:43 karnan joined #gluster
07:43 bala joined #gluster
07:44 sputnik13 joined #gluster
07:44 glusterbot New news from newglusterbugs: [Bug 1126734] Writing data to a dispersed volume mounted by NFS fails <https://bugzilla.redhat.com/show_bug.cgi?id=1126734>
07:47 andreask joined #gluster
07:57 dusmant joined #gluster
08:01 nbalachandran joined #gluster
08:09 sahina joined #gluster
08:16 nshaikh joined #gluster
08:32 Pupeno joined #gluster
08:33 shubhendu__ joined #gluster
08:34 Pupeno I'm about to add a new brick to a cluster that has 5M files taking 300GB. I've never done it with so much data. Anything I should know? How do I monitor the initial data copy?
08:37 nishanth joined #gluster
08:39 hflai joined #gluster
08:39 harish_ joined #gluster
08:53 shubhendu__ joined #gluster
08:58 vimal joined #gluster
08:59 andreask joined #gluster
09:18 LebedevRI joined #gluster
09:21 qdk joined #gluster
09:31 Chr1s1an Anyone got any recommendations with regard to setting aside space to plan ahead for the LVM snapshot feature that might be supported in the future?
09:32 Slashman joined #gluster
09:33 Chr1s1an I'm thinking about setting aside 5-10% of each brick for this but unsure if we need that much. Each brick is on its own LVM volume and is 37TB currently, so 10% is 3.7TB and that's a lot of space for snapshots.
09:37 mdavidson joined #gluster
09:42 Pupeno When I add a replica, when does it start to copy files to the new server?
09:44 spandit joined #gluster
09:49 lezo_ joined #gluster
09:50 JoeJulian Pupeno: what command did you use?
09:51 Pupeno JoeJulian: gluster volume add-brick my-volume replica 2 the-other-server:/path
09:51 JoeJulian And it was a single-brick volume before?
09:51 Pupeno I see the copy has now started... any way to monitor it and know when they are in sync?
09:51 Pupeno It was a single brick.
09:51 JoeJulian There's not really any good way. It should show up under "gluster volume heal $vol info" what's left to be healed.
09:52 Pupeno So... when that command outputs nothing, the servers are in sync?
09:53 JoeJulian That's the story.
09:53 JoeJulian You can't necessarily tell with df or du, since the process of differential healing will create sparse files.
09:54 Pupeno Cool. Thanks.
09:54 samsaffron joined #gluster
09:54 Pupeno How bad is it to turn gluster off/on or reboot a freshly added brick?
09:54 JoeJulian I generally just do spot checks. sha1sum's of heads and tails of too-big-to-check files, etc.
09:55 JoeJulian Should be fine. The other way around could make problems, but rebooting the new one shouldn't be.
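
A rough sketch of the monitoring and spot checks described above; the mount path and file name are made up. Empty "heal info" output plus matching checksums is a reasonable sign the new replica has caught up.

    gluster volume heal my-volume info                     # no entries listed means nothing left to heal
    head -c 10M /mnt/my-volume/some-big-file | sha1sum     # compare against the same command run against the other replica
    tail -c 10M /mnt/my-volume/some-big-file | sha1sum
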
09:56 kshlm joined #gluster
09:57 Pupeno Cool, thanks.
09:57 lezo_ joined #gluster
09:59 fyxim_ joined #gluster
10:02 edward1 joined #gluster
10:22 ppai joined #gluster
10:35 jiffin joined #gluster
10:44 kkeithley1 joined #gluster
10:51 shubhendu joined #gluster
10:54 uebera|| joined #gluster
10:59 spandit joined #gluster
11:18 andreask joined #gluster
11:19 jcsp joined #gluster
11:26 ira joined #gluster
11:36 sputnik13 joined #gluster
11:38 atrius` joined #gluster
11:38 gildub joined #gluster
11:38 harish_ joined #gluster
11:55 chirino joined #gluster
11:55 mojibake joined #gluster
11:57 diegows joined #gluster
12:05 tdasilva joined #gluster
12:15 dusmant joined #gluster
12:22 B21956 joined #gluster
12:43 ppai joined #gluster
12:44 getup- joined #gluster
12:57 hagarth joined #gluster
13:04 julim joined #gluster
13:10 simulx joined #gluster
13:22 dusmant joined #gluster
13:24 julim joined #gluster
13:24 getup- joined #gluster
13:28 msmith_ joined #gluster
13:29 kkeithley1 joined #gluster
13:33 andreask joined #gluster
13:34 bene2 joined #gluster
13:38 chirino joined #gluster
13:46 plarsen joined #gluster
13:49 zerick joined #gluster
13:55 sahina joined #gluster
14:01 julim joined #gluster
14:01 skippy JoeJulian: any thoughts regarding this? http://supercolony.gluster.org/pipermail/gluster-users/2014-August/018400.html  adding bricks causes client problems
14:01 glusterbot Title: [Gluster-users] adding bricks to replica volume causes client failure (at supercolony.gluster.org)
14:06 an joined #gluster
14:16 maniac joined #gluster
14:17 maniac hi guys, i'm facing a problem with gluster: after a reboot it doesn't mount the volume
14:17 maniac but when i execute mount -a manually
14:17 wushudoin joined #gluster
14:17 maniac it's ok
14:17 maniac perhaps you know a solution?
14:22 skippy maniac: what's your /etc/fstab entry look like?
14:22 maniac 192.168.201.186:/media  /mnt/media              glusterfs       defaults,_netdev        0 0
14:23 maniac i've also added LINKDELAY=20 to sysconfig/network-scripts/<interface-name>
14:24 rotbeard joined #gluster
14:26 theron joined #gluster
14:27 maniac glusterfs version: glusterfs 3.5.2 built on Jul 31 2014 18:47:52
14:27 theron joined #gluster
14:28 skippy no ideas, maniac.  seems like that should work.
14:31 maniac defaults,_netdev options should be enough ?
14:31 skippy i should think so.
14:31 skippy out of wild curiosity, do things change if you put _netdev first?
14:35 maniac skippy, i think i've found it … the netfs service was stopped
14:35 maniac and the _netdev option does not work in that case
14:35 skippy ah!
14:35 skippy yes, good catch
14:35 maniac i'm checking this right now
14:37 maniac yeah that fixed it
14:37 maniac thanks
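
For anyone hitting the same problem on a RHEL/CentOS 6 style init, the fix maniac found amounts to enabling the netfs service, which is what actually mounts _netdev entries at boot (a sketch; systemd-based distributions handle _netdev differently):

    chkconfig netfs on     # mount _netdev filesystems automatically once the network is up
    service netfs start    # or simply run: mount -a
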
14:39 DanF left #gluster
14:43 maniac left #gluster
14:45 skippy adding bricks seems to cause my client to flake out. :(
14:47 an joined #gluster
14:48 skippy [2014-08-12 14:43:09.753480] I [dht-layout.c:727:dht_layout_dir_mismatch] 0-t1-dht: / - disk layout missing
14:48 skippy [2014-08-12 14:43:09.753491] I [dht-common.c:635:dht_revalidate_cbk] 0-t1-dht: mismatching layouts for /
14:54 ira joined #gluster
14:57 frosty joined #gluster
14:57 frosty Does anyone know where one can rsync the gluster yum repository?
14:58 semiosis @latest
14:58 glusterbot semiosis: The latest version is available at http://download.gluster.org/pub/gluster/glusterfs/LATEST/ . There is a .repo file for yum or see @ppa for ubuntu.
14:58 semiosis frosty: ^
15:00 Humble joined #gluster
15:00 hchiramm_ joined #gluster
15:01 frosty Doesn't quite help with rsync though.
15:01 nbalachandran joined #gluster
15:01 * frosty tried to run: rsync -avSHP --bwlimit=1024 --delete-after --delay-updates --fuzzy download.gluster.org::pub/gluster/glusterfs/LATEST/CentOS/epel-6/
15:02 frosty Didn't work at all, timed out.
15:02 frosty semiosis: so, not quite what I was after, sadly. :(
15:09 daMaestro joined #gluster
15:09 skippy if i add bricks to a volume, a client with that volume mounted blows up. If I unmount, add bricks, then re-mount, things work.  Though permissions on the mount did not persist: had to `chgrp g+w` the volume again.
15:11 semiosis frosty: what are you trying to do?
15:13 skippy any ideas why adding bricks to a mounted volume causes a client to puke?
15:14 semiosis skippy: istr a bug about that. what version are you using?
15:14 skippy glusterfs 3.5.2 built on Jul 31 2014 18:41:18
15:14 semiosis hmm, that should be good
15:14 semiosis maybe JoeJulian knows more, i think he ran into that recently
15:15 semiosis he should be around soon
15:15 skippy i posted to mailing list, and there's this Gist: https://gist.github.com/skpy/1fb1297815d0b02df326
15:15 JoeJulian Looks to be something different. I was about to mention the same bug.
15:15 glusterbot Title: gluster add bricks.md (at gist.github.com)
15:15 frosty semiosis: Pull down the entire CentOS repository once, then install it on a bunch of machines over and over again (Essentially, trying to save internet bandwidth for both sides)
15:15 JoeJulian Can you try the same test with 3.4.5?
15:15 skippy i can. that'll take me a little bit to prep.
15:15 JoeJulian I'm off to a Dr. appt. be back in a little over an hour.
15:15 semiosis good luck with that
15:16 JoeJulian It's PT. Keeps me young.
15:17 semiosis frosty: i'd use wget for that, which this page seems to describe nicely: http://fosswire.com/post/2008/04/create-a-mirror-of-a-website-with-wget/
15:17 glusterbot Title: Create a mirror of a website with Wget | FOSSwire (at fosswire.com)
15:17 JoeJulian Because apparently sitting in a chair for a living isn't good for your body. Who would have guessed?
15:17 semiosis frosty: my guess would be rsync was trying to connect to an rsync server, which probably doesnt exist on that host
15:18 semiosis JoeJulian: get a standing desk
15:18 JoeJulian Got one that I can do either way. It's very nice.
15:19 frosty semiosis: Most likely doesn't, yep. If http is the only option, I may just script something up to mimic rsync over http.
15:19 semiosis or use wget
15:19 semiosis or set up a caching proxy if you want to get fancy
15:19 semiosis https://www.centos.org/docs/5/html/yum/sn-yum-proxy-server.html
15:19 glusterbot Title: Managing Software with yum (at www.centos.org)
15:20 frosty xP (wget would pull down all the indexes as index.html, which aren't needed in an apache/nginx file server setup)
15:21 frosty Anyways, I'm off to go do that after more serious work :)
15:21 semiosis there must be a way to have wget filter the urls
15:22 semiosis i think the proxy server is the "right" way to do this though
15:30 theron_ joined #gluster
15:33 ndevos @later tell frosty I <3 lftp for mirroring from http servers
15:33 glusterbot ndevos: The operation succeeded.
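
A sketch of what mirroring the repo over plain http could look like with either tool; the local destination paths are made up, and the --cut-dirs depth would need adjusting to taste:

    # wget: mirror the tree, skipping the generated directory indexes frosty mentioned
    wget --mirror --no-parent --no-host-directories --cut-dirs=4 --reject "index.html*" \
         http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/epel-6/

    # lftp (ndevos' suggestion) can mirror over http as well as ftp
    lftp -e 'mirror --only-newer pub/gluster/glusterfs/LATEST/CentOS/epel-6/ /srv/mirror/epel-6; quit' \
         http://download.gluster.org/
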
15:34 theron joined #gluster
15:35 theron joined #gluster
15:37 sputnik13 joined #gluster
15:39 frosty joined #gluster
15:42 rwheeler joined #gluster
15:47 mariusp joined #gluster
15:51 msmith_ joined #gluster
15:52 mariusp Hi all, can someone point me in the right direction to find documentation or other information (white papers, proofs of concept, etc.) about using glusterfs for virtual environments (small and medium: ranging from 4 physical servers up to 16 nodes)?
15:54 glusterbot New news from resolvedglusterbugs: [Bug 1043373] brick server kernel panic on ext4 <https://bugzilla.redhat.com/show_bug.cgi?id=1043373>
15:55 bmikhael joined #gluster
15:56 XpineX joined #gluster
16:01 tdasilva joined #gluster
16:02 mariusp joined #gluster
16:09 dtrainor joined #gluster
16:11 _pol joined #gluster
16:16 Peter3 joined #gluster
16:20 ninthBit joined #gluster
16:20 theron_ joined #gluster
16:21 ninthBit what would this error in my etc-glusterfs-glusterd.vol.log mean?   E [socket.c:2788:socket_connect] 0-management: connection attempt failed (Connection refused)
16:22 ninthBit how would i go about finding out the socket port numbers this might be expecting?  it possibly could be firewall issues .. but the two peers are connecting and working but maybe there is something wrong
16:23 msmith_ joined #gluster
16:24 glusterbot New news from resolvedglusterbugs: [Bug 1069191] geo-rep: gsyncd worker process crash <https://bugzilla.redhat.com/show_bug.cgi?id=1069191>
16:24 ninthBit in my google search i came across this https://bugzilla.redhat.com/show_bug.cgi?id=977497 .. i have turned off NFS as we are not going to use it
16:24 glusterbot Bug 977497: unspecified, high, 3.4.0, kparthas, POST , gluster spamming with E [socket.c:2788:socket_connect] 0-management: connection attempt failed (Connection refused) when nfs daemon is off
16:27 firemanxbr joined #gluster
16:28 semiosis ninthBit: connection refused means the server returned a TCP RST.  this is usually for one of these reasons: 1) no daemon is listening on that port, 2) iptables is blocking the port with REJECT, or 3) you have the wrong server address or there's an IP conflict for that server
16:28 ninthBit I have another question.  Are there any negative impacts to running gluster commands continuously every ten minutes, like "volume rebalance status", "peer status", "volume heal info", "volume heal info split-brain", "volume heal info heal-failed", "volume status clients"?  their output is dumped for monitoring.
16:29 semiosis shouldn't be.  if that causes a problem it's a bug, please report
16:30 semiosis your second question earlier, how to find out port numbers... glusterd management port is 24007.  you can also use netstat or tcpdump to find out what ports are used
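
Two quick ways to see which ports are actually in use, for reference:

    netstat -tlnp | grep gluster    # listening ports for glusterd, the glusterfsd bricks and gluster's NFS server
    gluster volume status           # also reports the port each brick and NFS server process is using
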
16:30 ninthBit semiosis: i am thinking the connection error in the log is related to the bug entry i found and it is flooding my logs.  i have nfs turned off on the volume.  any way i can turn off what is trying to connect to the NFS daemon?
16:31 _Bryan_ joined #gluster
16:32 semiosis upgrade to 3.5.2?
16:32 semiosis bug 977497 suggests it's fixed in 3.5
16:32 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=977497 unspecified, high, 3.4.0, kparthas, POST , gluster spamming with E [socket.c:2788:socket_connect] 0-management: connection attempt failed (Connection refused) when nfs daemon is off
16:33 neofob joined #gluster
16:34 ninthBit semiosis: re command question - i see at times the commands can't run with the error "another transaction is in progress. please try again after sometime."  not a problem but just have concerns as i do have some issues with my healstatus and tons of files showing up.  and this is related to the question i had yesterday about finding the same file (path and name) on both replica-set volumes in a distributed volume.  one file on a set is
16:34 ninthBit i may have to take the hit and test out 3.5 (B)
16:35 semiosis previous message truncated at "one file on a set is c"
16:35 theron joined #gluster
16:35 ninthBit ah, i'll work on that last message again. i word things far too long...
16:36 ninthBit ok, in a distributed replicated volume (replica 2), the exact same path/file shows up on both replica sets. one set has a t-bit 0-byte file, the other has the data.
16:36 ninthBit the mounted volume returns the correct file and data, but the heal status reports these files for days and they never clear up.
16:37 ninthBit i am trying out the fix-layout and rebalance right now to see if that will help.
16:37 semiosis the 0 byte is a ,,(linkfile)
16:37 glusterbot I do not know about 'linkfile', but I do know about these similar topics: 'landfill'
16:37 Peter3 i am at 3.5.2 now and still getting these setattr error
16:37 Peter3 http://pastie.org/9467472
16:37 semiosis meh
16:37 glusterbot Title: #9467472 - Pastie (at pastie.org)
16:38 Peter3 how am i still getting these ?E [marker.c:2482:marker_setattr_cbk
16:38 ninthBit i am seeing setattr errors too!!! i just found it and was going to post it
16:38 Peter3 is that a user error or gluster?
16:38 ninthBit Peter3: strange how the events in life meet at the same time
16:38 Peter3 YES!
16:38 semiosis @learn linkfile as A zero-length file with mode T--------- on a brick is a "link file." It has xattrs pointing to another brick/path where the file data resides. These are usually created by renames or volume layout changes.
16:38 ninthBit i am on 3.4.5 and these are files written from an FTP service
16:38 glusterbot semiosis: The operation succeeded.
16:39 semiosis ninthBit: ,,(linkfile)
16:39 glusterbot ninthBit: A zero-length file with mode T--------- on a brick is a link file. It has xattrs pointing to another brick/path where the file data resides. These are usually created by renames or volume layout changes.
16:39 Peter3 i have been getting these msg since 3.5.0 and not sure if that's gluster or user
16:39 ninthBit semiosis: i will look into that. do layout changes happen automatically? i have never run a layout change manually
16:39 Peter3 sometimes it also has failed to mkdir and others
16:39 semiosis ninthBit: add-brick, remove-brick, these change the layout.  like i said tho, linkfiles can be created by renames too
16:40 ninthBit Peter3: would you happen to be using Samba on top of the gluster-client (fuse) mount?
16:40 Peter3 http://pastie.org/9467480
16:40 glusterbot Title: #9467480 - Pastie (at pastie.org)
16:40 Peter3 nope
16:40 Peter3 just NFS
16:40 Peter3 for now
16:40 Peter3 these are mkdir errors
16:40 Peter3 happens everyday
16:40 Peter3 all time
16:41 Peter3 those are from the  brick log
16:41 cfeller joined #gluster
16:41 ninthBit semiosis: in regards to the t-bit files would these be resolved with the volume rebalance or fix layout?
16:42 semiosis i think a rebalance would get rid of them.  not positive though.
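
For reference, the rebalance commands being discussed; fix-layout only recomputes directory layouts, while a full rebalance also migrates file data:

    gluster volume rebalance $vol fix-layout start
    gluster volume rebalance $vol start
    gluster volume rebalance $vol status
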
16:42 ninthBit Peter3: i also see them in my logs but in very high volume with the ftp uploaded files
16:42 Peter3 what filesystem u using for ur brick?
16:42 Peter3 i have xfs
16:42 ninthBit would these setattr be something of high concern?  i have not looked into it enough to know.
16:42 Peter3 ubuntu
16:42 ninthBit peter3: xfs on ubuntu
16:42 Peter3 so we are the same
16:43 ninthBit peter3: ubuntu 12.04 with all updates applied
16:43 ninthBit server
16:43 Peter3 o yes me too!!
16:43 Peter3 12.04
16:43 Peter3 semiosis: do you have these errors on ur brick log?
16:44 ninthBit Peter3: do you also see "marker_setattr .... Operation not permitted occurred during setattr of <null> ?
16:44 Peter3 YES!
16:44 ninthBit Peter3: i pull my gluster from semiosis' repo, is this where you get yours?
16:44 Peter3 totally :)
16:44 semiosis Peter3: no errors in my brick logs.
16:45 theron joined #gluster
16:45 Peter3 semiosis: you have nfs or pure gfs?
16:45 Peter3 what filesystem you use for ur bricks?
16:45 semiosis xfs bricks, native fuse clients
16:46 semiosis nfs is running, i just dont use it
16:46 Peter3 ic
16:48 mariusp joined #gluster
17:08 gmcwhistler joined #gluster
17:08 bala joined #gluster
17:17 daMaestro joined #gluster
17:22 nishanth joined #gluster
17:31 daMaestro joined #gluster
17:31 mojibake1 joined #gluster
17:35 lalatenduM joined #gluster
17:37 ninthBit i am getting nervous about the heal status now.  layout fix and rebalance did not resolve any of the files in the heal status.
17:44 ninthBit semiosis: you said something about file renames as a possible entry into the heal status that i am seeing. as it turns out many of the files i see are result of rename operations
17:44 ninthBit i am going to have to setup a better test with the same client in my dev gluster setup.
17:44 Philambdo joined #gluster
17:48 ira joined #gluster
17:50 bala joined #gluster
17:53 ninthBit what is the method to view the extended attributes on files on a brick?  i would like to view the t-bit files extended attributes.
17:53 ramteid joined #gluster
17:54 ninthBit ah, i think i found it
17:54 ninthBit getfattr ? maybe
17:57 semiosis sweet.  i love being right.  almost as much as being wrong
17:58 kkeithley1 oh? do tell?
17:58 semiosis that was re: [13:44] <ninthBit> semiosis: you said something about file renames as a possible entry into the heal status that i am seeing. as it turns out many of the files i see are result of rename operations
17:59 semiosis ninthBit: ,,(extended attributes)
17:59 glusterbot ninthBit: (#1) To read the extended attributes on the server: getfattr -m .  -d -e hex {filename}, or (#2) For more information on how GlusterFS uses extended attributes, see this article: http://hekafs.org/index.php/2011/04/glusterfs-extended-attributes/
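
Run against a path on a brick, that looks something like the sketch below; the brick path is made up, and the attribute names shown are only the ones relevant here:

    getfattr -m . -d -e hex /bricks/b1/path/to/file
    # a linkfile carries, among others, a trusted.glusterfs.dht.linkto attribute
    # naming the subvolume where the real data lives (hex-encoded here because of -e hex)
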
18:10 BossR joined #gluster
18:18 bmikhael how does gluster perform read operation
18:22 adria joined #gluster
18:22 semiosis bmikhael: could you clarify?
18:23 ninthBit semiosis: i see with the getfattr the t-bit files have the attribute for trusted.glusterfs.dht.linkto which affirms again what you said about the t-bit files with gluster.  now, next project is to see if the search on gfid scripts work :)
18:23 semiosis the ,,(gfid resolver) should work
18:23 glusterbot https://gist.github.com/4392640
18:23 semiosis or perhaps you mean something else
18:24 bmikhael @semiosis i want to understand how the gluster client reads data from the gluster server
18:24 semiosis bmikhael: at mount time the client makes connections to all bricks.  then when a file is read, it goes over the network to the bricks to read the file data.
18:25 skippy semiosis, JoeJulian: you suggested I try an older version to diagnose my "adding bricks freaks out clients" problem.  Which version do you recommend?
18:25 semiosis the latest 3.4, which is now 3.4.5
18:25 skippy is 3.5.2 too new?
18:27 semiosis the bug we're thinking of *should* have been fixed in 3.5.2 & 3.4.5, so if you can reproduce your problem in both, then it may be a different issue.  if you can not reproduce it in 3.4.5 then maybe the bug wasnt fixed in 3.5.2 like it should be
18:27 bmikhael @semiosis i knew that gluster assigns a 32-bit hash range to every brick, and every file on the brick gets one of these hashes, and this hash is stored in the xattrs of every file. My question is how gluster sends the read request, which function or translator in the gluster code does that, and how the server responds
18:28 semiosis bmikhael: the code is divided into 'translators' which handle different parts of the transaction. the network calls use RPC (or so I've heard) and there are client & server translators to handle the network IO
18:29 semiosis bmikhael: there's other xlators to do things like read from disk (posix) locate a file on one brick out of many (dht) etc
18:30 bmikhael @semiosis i'm targeting dht translator, but i want to modify the read process for dht
18:31 bmikhael @semiosis i want it to read from a caching daemon running on every server before it tries to search for the file on every brick
18:32 semiosis bmikhael: why?
18:33 bmikhael @semiosis i have read in a white paper done by university of Ohio, that caching drastically improve gluster read performance
18:35 semiosis are you talking about this? http://mvapich.cse.ohio-state.edu/static/media/publications/abstract/noronha-icpp08.pdf
18:35 bmikhael @semiosis yes
18:38 semiosis that paper is 6 years old
18:39 semiosis glusterfs has changed a lot since then
18:40 ekuric joined #gluster
18:41 bmikhael @semiosis i know it was done in 2008, but i like the concept of caching, as it is the main reason behind any SAN FS's performance. a SAN FS like Lustre uses a centralized metadata server, but gluster does not; this is good for gluster from a scalability point of view, but it should come with a performance penalty
18:42 bmikhael @semiosis so i think if we made a caching server just to hold key/value pairs (the key is the file name, the value is the 32-bit hash of the file), gluster read performance should improve drastically
18:43 wushudoin left #gluster
18:43 semiosis ok, let me try to clear up some confusion...
18:44 bmikhael @semiosis as i think that for gluster to do a read operation it broadcasts the read request to all the servers in the peer list and waits till every server replies with the hash
18:44 semiosis first, the hash is of the filename, not the file data. the client can compute the hash without having to go over the network.  in fact, it does this so that it knows which bricks to ask over the network
18:44 mojibake joined #gluster
18:44 semiosis now, if the brick where a file should be does not have the file, then the client will poll the other bricks, and it will save the result in a ,,(linkfile) where the file should be
18:44 glusterbot A zero-length file with mode T--------- on a brick is a link file. It has xattrs pointing to another brick/path where the file data resides. These are usually created by renames or volume layout changes.
18:45 semiosis this is only a performance concern if your application does a lot of lookups for files that do not exist, since that will necessarily poll the servers
18:46 semiosis jdarcy did some work on caching negative lookup results a couple years ago, you can find his work here: https://github.com/jdarcy/negative-lookup
18:46 glusterbot Title: jdarcy/negative-lookup · GitHub (at github.com)
18:47 semiosis something like a poorly configured php server would do lots of negative lookups when it searched for a php file in each directory on the include path until it found the file.  correctly configured servers have the most common path first, to minimize the neg lookups
18:47 semiosis that's the only common use case I know of where it might be an issue
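
As an illustration of that php case, the idea is simply to order include_path so the directory that usually wins comes first; the paths below are made up:

    ; php.ini
    ; each miss on an earlier path is a negative lookup that has to poll every brick
    include_path = "/var/www/app/lib:.:/usr/share/php"
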
18:47 bmikhael @semiosis i've seen this, but he said that it is not ready for production, and it seems that he is not interested in developing it
18:48 bmikhael @semiosis is there documentation or a tutorial on how gluster works internally, like how it reads, writes, looks up data, etc...
18:48 semiosis yes, also written by jdarcy, and there's links at the bottom of that negative-lookup page i just linked you
18:50 bmikhael @semiosis i've read this before, but i could not do anything as i do not understand how gluster works internally
18:50 ninthBit semiosis: i have found your script to search for a file based upon its gfid. the trick is i have a gfid from the getfattr which is in hex. it appears to not work with the hex gfid. any tricks. i am trying to self solve this at the same time...
18:50 semiosis ninthBit: iirc you just need to add dashes
18:51 semiosis the resolver works by following hard links (by searching for the inode number)
18:51 ninthBit ok, i'll see what happens... oh, it just deleted my whole brick.... j/k
18:52 semiosis hah
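
Roughly what the gfid resolver does under the hood; the brick path and the gfid value below are made-up examples:

    GFID=1234abcd-5678-90ab-cdef-1234567890ab             # the hex xattr value with dashes added
    BRICK=/bricks/b1
    INUM=$(stat -c %i $BRICK/.glusterfs/12/34/$GFID)      # the gfid entry is a hard link to the real file
    find $BRICK -inum $INUM -not -path '*/.glusterfs/*'   # print the file's real path(s) on the brick
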
18:53 mojibake joined #gluster
18:54 semiosis bmikhael: afaik that stuff from jdarcy is the best intro to developing glusterfs internals.
18:55 semiosis and the negative-lookup stuff does almost exactly what you're looking for
18:55 gmcwhistler joined #gluster
18:59 ninthBit semiosis: yes that worked and i found a file with the gfid yay.
19:00 semiosis \o/
19:09 DV joined #gluster
19:10 theron joined #gluster
19:14 etaylor_ joined #gluster
19:15 doekia joined #gluster
19:16 etaylor_ Hello all.  I want to set up 3 gluster servers to store the DocumentRoot for Apache.  How do I set up my gluster clients to read from the 3 gluster servers?  As you can tell, I'm basically trying to implement an active/active storage solution.
19:20 _dist joined #gluster
19:21 _dist JoeJulian: I've been away for a bit, any chance add/remove brick stuff has been patched in a new version? :)
19:22 skippy semiosis, JoeJulian: just rebuilt my replica 2 volume with Gluster 3.4.5.  added another pair of bricks.  client puked again. :(
19:24 skippy https://gist.github.com/skpy/80b09832f2f6717d0fe8
19:24 glusterbot Title: gist:80b09832f2f6717d0fe8 (at gist.github.com)
19:35 semiosis _dist: we were just talking about that.  3.4.5 & 3.5.2 should have the patch
19:35 semiosis skippy: sure all servers & clients have the same version of glusterfs?
19:37 _dist semiosis: for a few reasons (gluster included) I'm going to drop off from proxmox and go to a more "self build" of debian/kvm
19:37 semiosis etaylor_: mount your clients & set the DocumentRoot to the client mount point?  this answer seems too easy
19:37 _dist I will use virsh, so hopefully libvirt finally has good storage domain for glusterfs. However, I don't want to bother making the switch if add/remove bricks isn't 'safe' while online
19:37 skippy semiosis: https://gist.github.com/skpy/80b09832f2f6717d0fe8#comment-1280350
19:38 glusterbot Title: gist:80b09832f2f6717d0fe8 (at gist.github.com)
19:38 skippy same version, yes.  Servers are RHEL7.  Test client is RHEL6.5
19:38 semiosis skippy: and everyone got rebooted since installing?
19:38 skippy no.
19:38 semiosis no chance old version is still running
19:38 semiosis s/no /any /
19:38 semiosis glusterbot: meh
19:38 glusterbot What semiosis meant to say was: Peter3: any errors in my brick logs.
19:38 glusterbot semiosis: I'm not happy about it either
19:39 skippy semiosis: shut down daemons.  yum removed. yum clean all.  updated repo.  installed.  peered. created volume.
19:39 etaylor_ @semiosis, What happens if the host I mount from crashes?  I don't want to have to mount individual servers.
19:39 skippy oh, i also deleted /var/lib/gluster/* and /etc/gluster/*
19:40 etaylor_ Is there some sort of load balancer I can use?
19:42 semiosis etaylor_: ,,(mount server)
19:42 glusterbot etaylor_: The server specified is only used to retrieve the client volume definition. Once connected, the client connects to all the servers in the volume. See also @rrdns
19:42 semiosis the native fuse client is magically HA
19:42 etaylor_ Thanks guys.  I'm just now learning Gluster.
19:43 semiosis yw, feel free to keep asking questions, but the best way to learn is to try it
19:43 semiosis and if you run into problems, we're here to help
19:44 etaylor_ What happens if the server crashes or is taken offline?  How is the client able to connect to the other Gluster nodes?
19:44 semiosis the client is always connected to all bricks
19:44 semiosis if one goes down it keeps going with the rest
19:44 etaylor_ Awesome.  I have some testing to do.
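
In practice that means the fstab entry only names one server to fetch the volume definition from, optionally with fallbacks. A sketch; the option is spelled backup-volfile-servers in recent 3.5 releases and backupvolfile-server in older ones, so check your version's mount.glusterfs:

    # /etc/fstab
    server1:/docroot  /var/www/html  glusterfs  defaults,_netdev,backup-volfile-servers=server2:server3  0 0
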
19:46 skippy /var/log/glusterfs/glustershd.log from one of the servers: https://gist.github.com/skpy/80b09832f2f6717d0fe8#comment-1280363
19:46 glusterbot Title: gist:80b09832f2f6717d0fe8 (at gist.github.com)
19:51 ekuric joined #gluster
19:59 ninthBit joined #gluster
20:00 ninthBit i attempted a reboot on each peer to see if this helped clear out the heal status.  the heal status still has the same files listed.  i'm running out of ideas on what is going on. will look into the logs and see if i can get any data out of those.
20:01 ninthBit brb have to reboot workstation.
20:01 mhoungbo joined #gluster
20:03 _pol_ joined #gluster
20:03 ninthBit joined #gluster
20:08 gildub joined #gluster
20:12 skippy regarding my "add bricks" challenge:  might it be the case that the problem(s) stem from adding bricks that reside on the only two servers in the replica set?
20:12 skippy if I were adding bricks on a third server, would this be expected to work differently?
20:13 skippy this line is what suggests this to me:
20:13 skippy [2014-08-12 20:11:32.442419] I [afr-lk-common.c:1075:afr_lock_blocking] 1-t0-replicate-1: unable to lock on even one child
20:14 skippy i have 2 servers, each hosting one brick in the replica volume. I'm trying to add 2 more replicated bricks from these servers to the volume.
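
For the record, the operation skippy is attempting, sketched with made-up brick paths, is expanding a 1x2 replicated volume into a 2x2 distributed-replicated volume:

    gluster volume add-brick t0 server1:/bricks/t0-b2 server2:/bricks/t0-b2
    gluster volume rebalance t0 fix-layout start    # recompute layouts so the new bricks receive new files
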
20:15 ninthBit ok, i'll have to resume this heal status later and perhaps become a member of the gluster-user group and duplicate efforts there.
20:16 mariusp joined #gluster
20:42 georgeh|workstat joined #gluster
20:44 rotbeard joined #gluster
20:46 semiosis skippy: what you're trying to do should work ok.  there's either some problem with your setup, or a bug.
20:47 skippy semiosis: I'd love to identify what i'm doing wrong.
20:49 ira_ joined #gluster
20:50 B21956 joined #gluster
20:52 skippy oh sure, recreating all my work from scratch in order to open a bug report, and EVERYTHING WORKS FINE!
20:52 JoeJulian I hate when that happens.
20:58 XpineX joined #gluster
20:58 ira_ joined #gluster
21:00 _dist JoeJulian: I was in here earlier and got the feeling that 3.5.2 now has the add/remove brick on a live volume problem fixed, would you say it's safe now? I'm going to abandon proxmox to move to straight debian/gluster to get off 3.4.2-1 (other reasons too though)
21:00 skippy _dist: I'm fighting a problem now with adding bricks to a mounted volume with 3.5.2 :(
21:00 _pol joined #gluster
21:01 semiosis skippy: sounds like progress :)
21:05 _dist skippy: I had that when removing one in 3.4.2-1, as a result I had to strip a lot of xattrs (JoeJulian gave me a hand) but I still have dozens of files with warnings and I can't get to the bottom of it https://bugzilla.redhat.com/show_bug.cgi?id=1125418 <--
21:05 glusterbot _dist: <'s karma is now -3
21:05 glusterbot Bug 1125418: high, unspecified, ---, gluster-bugs, NEW , Remove of replicate brick causes client errors
21:06 _dist I'll stop using those arrows :)
21:09 Philambdo joined #gluster
21:09 sputnik13 joined #gluster
21:14 JoeJulian _dist: Nope, not yet I wouldn't. I've got two more bugs before I consider it safe.
21:16 _dist JoeJulian: So to safely add a brick I'd need to stop all client access, maybe even the volume?
21:16 mariusp joined #gluster
21:16 ThatGraemeGuy joined #gluster
21:18 XpineX joined #gluster
21:18 skippy problem recurred when I switched back to 3.5.2: https://bugzilla.redhat.com/show_bug.cgi?id=1129486
21:18 glusterbot Bug 1129486: high, unspecified, ---, gluster-bugs, NEW , adding bricks to mounted volume causes client failures
21:19 glusterbot New news from newglusterbugs: [Bug 1120815] df reports incorrect space available and used. <https://bugzilla.redhat.com/show_bug.cgi?id=1120815> || [Bug 1129486] adding bricks to mounted volume causes client failures <https://bugzilla.redhat.com/show_bug.cgi?id=1129486>
21:20 skippy wow, I really botched that bugzilla entry.  failed to fill out the text form properly.
21:20 skippy sorry about that, everyone.  I suck.
21:20 JoeJulian It happens.
21:38 Pupeno joined #gluster
21:38 Pupeno joined #gluster
22:22 _pol_ joined #gluster
22:53 wushudoin| joined #gluster
22:56 julim joined #gluster
23:20 mariusp joined #gluster
23:24 siel joined #gluster
