
IRC log for #gluster, 2015-07-29


All times shown according to UTC.

Time Nick Message
00:01 shyam joined #gluster
00:14 nangthang joined #gluster
00:24 TheCthulhu joined #gluster
00:58 calisto joined #gluster
00:59 bennyturns joined #gluster
01:04 Pupeno joined #gluster
01:05 victori joined #gluster
01:21 carem joined #gluster
01:22 nzero joined #gluster
01:28 Lee1092 joined #gluster
01:35 Pupeno_ joined #gluster
01:37 ron-slc joined #gluster
01:37 nangthang joined #gluster
01:47 ilbot3 joined #gluster
01:47 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:52 cc1 joined #gluster
02:07 shyam joined #gluster
02:09 gem joined #gluster
02:12 morph- joined #gluster
02:12 morph- hello
02:12 glusterbot morph-: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
02:13 morph- does anyone have any tips for performance tuning gluster for small files?
02:14 JoeJulian @php
02:14 glusterbot JoeJulian: (#1) php calls the stat() system call for every include. This triggers a self-heal check which makes most php software slow as they include hundreds of small files. See http://joejulian.name/blog/optimizing-web-performance-with-glusterfs/ for details., or (#2) It could also be worth mounting fuse with glusterfs --attribute-timeout=HIGH --entry-timeout=HIGH --negative-timeout=HIGH --fopen-keep-cache
02:14 kevein joined #gluster
02:15 DV__ joined #gluster
02:15 JoeJulian morph-: how do you define small, and what are your expectations?
02:15 morph- we have about 2 million files on two replicated volumes
02:15 morph- all ~100KB
02:16 morph- the servers are directly connected via a switch
02:16 morph- running glusterfs-3.6.2-2.el6.x86_64
02:17 cc1 left #gluster
02:18 morph- performance is about 4x slower than native nfs
02:18 morph- rsync processes are only managing to write on average 8MB/s
02:18 morph- we're doing a restore operation of an older file system currently
02:19 JoeJulian One rpc round trip on an 11 frame file (assuming jumbo frames) is a 9% overhead. That's not what you're describing.
02:20 JoeJulian Ah, rsync... now we're adding what... 4 round trips. More if the file already exists.
02:20 morph- :/
02:22 haomaiwang joined #gluster
02:22 morph- http://pastebin.ca/3080137
02:22 morph- these are our options
02:22 morph- set for the volume
02:22 morph- should I increase the outstanding-rpc-limit ?
02:23 sc0001_ joined #gluster
02:24 harish joined #gluster
02:28 JoeJulian cpio | pv |netcat has been my more recent preference.
02:31 aaronott joined #gluster
02:34 JoeJulian The rpc limit's not likely to bottleneck. You can avoid rsync, you can build a batch with rsync then cut it up and use it on multiple clients, you can enable flush-behind. You can set those timeout numbers mentioned by glusterbot to high numbers and enable fopen-keep-cache.
02:34 morph- thanks...
02:34 morph- i will give it a go.
02:34 JoeJulian Those are dangerous to use day-to-day, but they'll probably speed up your initial load.
02:35 JoeJulian If you do continue to use rsync, make sure you use --inplace, otherwise your volume will need rebalanced as soon as you're done.
02:36 morph- it's a replicated volume
02:36 morph- does that still require rebalance ?
02:36 morph- i thought it handles that automatically
02:36 JoeJulian Oh, right. I forgot you only have the two bricks.
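
For reference, a minimal sketch of the rsync invocation being discussed a few lines up; the source and mount paths are illustrative assumptions, not paths from the channel:

    # --inplace updates files where they live instead of writing to a temporary
    # name and renaming, which on distributed volumes avoids the rebalance
    # JoeJulian mentions (with a pure replica it matters less)
    rsync -a --inplace /mnt/old-filesystem/ /mnt/myvol/
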
02:36 haomaiwa_ joined #gluster
02:44 morph- JoeJulian: what would be some reasonable values for the fuse mount?
02:44 morph- "HIGH"...
02:48 calisto joined #gluster
02:50 bharata-rao joined #gluster
02:51 edong23 joined #gluster
02:57 sc0001 joined #gluster
03:04 uebera|| joined #gluster
03:06 TheSeven joined #gluster
03:27 badone joined #gluster
03:33 nishanth joined #gluster
03:34 kdhananjay joined #gluster
03:38 kanagaraj joined #gluster
03:42 atinm joined #gluster
03:45 m0zes joined #gluster
03:46 RameshN joined #gluster
03:49 vmallika joined #gluster
03:52 nbalacha joined #gluster
03:52 shubhendu joined #gluster
04:03 badone joined #gluster
04:05 TheSeven joined #gluster
04:09 ppai joined #gluster
04:09 hagarth joined #gluster
04:11 RameshN joined #gluster
04:17 kotreshhr joined #gluster
04:20 nbalacha joined #gluster
04:20 gem joined #gluster
04:20 calisto joined #gluster
04:23 calavera joined #gluster
04:26 yazhini joined #gluster
04:29 jwd joined #gluster
04:32 jwaibel joined #gluster
04:33 ndarshan joined #gluster
04:33 overclk joined #gluster
04:38 hchiramm joined #gluster
04:38 glusterbot News from newglusterbugs: [Bug 1242708] fuse/fuse_thread_proc : The fuse_graph_sync function cannot be handled in time after we fix-layout. <https://bugzilla.redhat.com/show_bug.cgi?id=1242708>
04:43 pppp joined #gluster
04:43 hagarth joined #gluster
04:47 ramteid joined #gluster
04:52 rafi joined #gluster
04:54 sakshi joined #gluster
04:57 vikumar joined #gluster
05:06 meghanam joined #gluster
05:16 ashiq joined #gluster
05:17 Manikandan joined #gluster
05:17 hgowtham joined #gluster
05:18 SOLDIERz joined #gluster
05:28 vmallika joined #gluster
05:31 deepakcs joined #gluster
05:34 hchiramm joined #gluster
05:39 glusterbot News from newglusterbugs: [Bug 1247850] Glusterfsd crashes because of thread-unsafe code in gf_authenticate <https://bugzilla.redhat.com/show_bug.cgi?id=1247850>
05:40 nbalacha joined #gluster
05:42 Bhaskarakiran joined #gluster
05:43 sc0001 joined #gluster
05:45 atalur joined #gluster
05:45 rafi joined #gluster
05:47 arcolife joined #gluster
05:48 shubhendu joined #gluster
05:49 ndarshan joined #gluster
05:51 DV__ joined #gluster
05:54 jwd joined #gluster
05:55 jwaibel joined #gluster
05:57 kovshenin joined #gluster
05:59 kovshenin joined #gluster
05:59 kdhananjay joined #gluster
06:01 kshlm joined #gluster
06:02 beeradb joined #gluster
06:04 aravindavk joined #gluster
06:06 kayn__ joined #gluster
06:10 Saravana_ joined #gluster
06:12 Philambdo joined #gluster
06:18 gem joined #gluster
06:22 jtux joined #gluster
06:23 victori joined #gluster
06:28 Manikandan joined #gluster
06:29 uebera|| joined #gluster
06:33 jwd joined #gluster
06:33 raghu joined #gluster
06:37 kevein joined #gluster
06:38 victori joined #gluster
06:39 dusmant joined #gluster
06:42 nicky joined #gluster
06:42 ramky joined #gluster
06:46 sc0001 joined #gluster
06:51 skoduri joined #gluster
06:53 nangthang joined #gluster
06:54 jiffin joined #gluster
06:57 ndarshan joined #gluster
07:02 DV__ joined #gluster
07:05 shubhendu joined #gluster
07:22 Slashman joined #gluster
07:27 anil joined #gluster
07:31 sakshi joined #gluster
07:54 sakshi joined #gluster
07:56 sahina joined #gluster
08:00 ctria joined #gluster
08:07 jcastillo joined #gluster
08:09 ajames-41678 joined #gluster
08:17 kovsheni_ joined #gluster
08:17 smohan joined #gluster
08:23 kovshenin joined #gluster
08:28 ajames41678 joined #gluster
08:47 sakshi joined #gluster
08:47 deniszh joined #gluster
08:59 kdhananjay joined #gluster
09:00 dusmant joined #gluster
09:01 LebedevRI joined #gluster
09:03 kovshenin joined #gluster
09:09 glusterbot News from newglusterbugs: [Bug 1247917] ./tests/basic/volume-snapshot.t spurious fail causing glusterd crash. <https://bugzilla.redhat.com/show_bug.cgi?id=1247917>
09:15 ramky joined #gluster
09:22 kayn__ Hi guys. I'm testing gluster 3.7.2 on CentOS 6.6 with an arbiter (2 x 3) and it seems that there is a bug in removing bricks. "remove-brick" finished with 0 rebalanced-files and with failures. I tried identical scenario WITHOUT arbiter and removing bricks was successful. All files were rebalanced.
09:23 JoeJulian kayn__: interesting. Can you please file a bug report with your repro steps?
09:23 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
09:24 kayn__ JoeJulian: will do ;-)
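
A rough sketch of the kind of repro kayn__ describes, assuming six hosts n1..n6 and brick paths under /bricks (all hypothetical; the exact commands belong in the bug report):

    # 2 x (2 + 1) arbiter volume
    gluster volume create testvol replica 3 arbiter 1 \
        n1:/bricks/b1 n2:/bricks/b1 n3:/bricks/b1 \
        n4:/bricks/b2 n5:/bricks/b2 n6:/bricks/b2
    gluster volume start testvol
    # ... populate the volume, then try to shrink it by one replica set:
    gluster volume remove-brick testvol n4:/bricks/b2 n5:/bricks/b2 n6:/bricks/b2 start
    gluster volume remove-brick testvol n4:/bricks/b2 n5:/bricks/b2 n6:/bricks/b2 status
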
09:25 s19n joined #gluster
09:28 [Enrico] joined #gluster
09:35 kdhananjay1 joined #gluster
09:41 nsoffer joined #gluster
09:42 Raven|2 joined #gluster
09:42 Raven|2 hi gluster community :)
09:42 maveric_amitc_ joined #gluster
09:44 Raven|2 i have had gluster in production since 2013, but i'm on 3.4.3 (without problems), and i would like to go to 3.4.6, the latest production version. I can only find docs for upgrading to 3.5... can you help me?
09:48 phoenixstew49 joined #gluster
09:50 jcastill1 joined #gluster
09:53 sage_ joined #gluster
09:55 jcastillo joined #gluster
09:56 dusmant joined #gluster
09:57 s19n Raven|2: I thought the latest 3.4.x was 3.4.7
09:57 ramky joined #gluster
09:58 s19n don't think there are particular upgrade instructions to move among 3.4.x versions
09:58 JoeJulian http://download.gluster.org/pub/gluster/glusterfs/3.4/LATEST/
09:58 s19n anyone, feel free to correct me if I'm wrong
09:59 JoeJulian correct, there are no special instructions. If replicated servers, make sure everything is healed, upgrade a server, wait for self-heal to complete, upgrade another, etc.
10:00 JoeJulian If you can afford downtime, just stop the volume(s) and upgrade everything at once.
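
A minimal sketch of that rolling approach, assuming a replicated volume named myvol and yum-based servers (both assumptions):

    # on each server, one at a time:
    gluster volume heal myvol info   # confirm nothing is pending before touching this node
    yum update 'glusterfs*'          # or your distro's equivalent
    # restart the gluster daemons/bricks on this node, then:
    gluster volume heal myvol info   # wait for the entry count to return to zero before the next node
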
10:00 s19n I am still using 3.4.x too.
10:00 JoeJulian Me too
10:00 Telsin joined #gluster
10:00 frostyfrog joined #gluster
10:00 frostyfrog joined #gluster
10:00 malevolent joined #gluster
10:00 ashka joined #gluster
10:01 rp_ joined #gluster
10:01 wushudoin joined #gluster
10:01 ndarshan joined #gluster
10:01 beeradb joined #gluster
10:01 mbukatov joined #gluster
10:01 s19n joined #gluster
10:01 KennethDejonghe joined #gluster
10:01 timotheus1 joined #gluster
10:02 s19n have you ever seen yet-to-be-replicated 0-length files from a FUSE mount?
10:02 s19n I can see them quite frequently, though after stat-ing them they show the correct size
10:02 JoeJulian yes, but it's been so long I don't remember which version.
10:03 fsimonce joined #gluster
10:03 mikedep333 joined #gluster
10:03 akay1 joined #gluster
10:03 jon__ joined #gluster
10:05 Manikandan_ joined #gluster
10:06 and` joined #gluster
10:06 Raven|2 joined #gluster
10:06 jonb1 joined #gluster
10:07 overclk joined #gluster
10:07 Lee1092 joined #gluster
10:08 suliba joined #gluster
10:08 JPaul joined #gluster
10:16 Sjors joined #gluster
10:19 s19n interesting, I think I somehow am still affected. I'm afraid this is related to some "java.io.IOException: read past EOF" I am seeing
10:20 phoenixstew49 left #gluster
10:23 Raven|2 I see that the official Jessie package is 3.5.2; should I take an interest in this version?
10:25 JoeJulian 3.5.5 is the latest 3.5 version.
10:25 JoeJulian I wouldn't use anything with more bugs.
10:26 Raven|2 okay thank you for your support :)
10:26 Raven|2 best regards
10:26 Raven|2 ++
10:40 ira_ joined #gluster
10:46 jmeeuwen__ joined #gluster
10:47 jmeeuwen__ good morning
10:47 jmeeuwen__ i have a problem with a stale file handle; i'm curious how to troubleshoot / resolve it
10:49 jmeeuwen__ it is a distributed+replicated volume over 3x2 bricks
10:49 JoeJulian fuse or nfs?
10:49 jmeeuwen__ fuse
10:49 jmeeuwen__ two bricks report the file as "---------T.", two others report the file as "-rw-r--r--."
10:50 JoeJulian RHEL 6.4?
10:50 jmeeuwen__ RHEL 7
10:50 JoeJulian Well, not that bug then...
10:50 jmeeuwen__ with glusterfs-epel
10:51 jmeeuwen__ so 3.7.2-3.el7.x86_64
10:51 jmeeuwen__ should i try to update and cycle the systems over to 3.7.3-1.el7 perhaps?
10:52 jmeeuwen__ and/or update the running kernel to 3.10.0-229.7.2.el7?
10:52 jmeeuwen__ this is some puppet infrastructure so it is facilitatory and not crucial for day-to-day operations
10:53 jmeeuwen__ i can almost knock myself out fiddling with this ;-)
10:54 JoeJulian I haven't looked at the patch list for 3.7.3 but I don't remember seeing any ESTALE being mentioned recently.
10:54 JoeJulian Worth a shot though. The fuse bug was old, it shouldn't be in el7.
10:54 jmeeuwen__ ok, off i go update some of this
10:56 JoeJulian Hmm, that might fix it. There is a recent bug fix related to ESTALE.
10:56 hgowtham joined #gluster
11:00 samikshan joined #gluster
11:01 arcolife joined #gluster
11:01 tdasilva joined #gluster
11:02 DV joined #gluster
11:04 jmeeuwen__ ok, everything cycled, but stale files nonetheless
11:05 jmeeuwen__ so the fix isn't in 3.7.2 > 3.7.3, as i suppose we expected
11:06 JoeJulian You can read the files on the bricks without error?
11:07 jmeeuwen__ for two of the bricks, yes; for the other two, where it is listed as a zero-byte file with "---------T.", it doesn't have the expected contents
11:08 arcolife joined #gluster
11:08 jmeeuwen__ if it helps: http://fpaste.org/249314/43816811/
11:09 JoeJulian Yeah, the mode 1000 empty files are dht link pointers.
11:09 jmeeuwen__ on the client: ls: cannot access .git/index: Stale file handle
11:10 jmeeuwen__ ghe
11:10 JoeJulian see getfattr -m . -d $file
11:11 jmeeuwen__ so, from the one system called 'gfs07', first the "broken" and then the "good": http://fpaste.org/249315/68284143/
11:12 arcolife joined #gluster
11:13 arcolife joined #gluster
11:14 arcolife joined #gluster
11:15 JoeJulian not broken. The hash of the filename maps to puppet-replicate-1 but since it was renamed from some other temporary filename, it actually exists on puppet-replicate-0 as the hash for the temporary filename mapped there. Gluster's smart enough not to waste bandwidth moving files between bricks when a name change occurs, it just places a dht.linkto extended attribute on a special file entry so it knows where it's supposed to find it: trusted.glusterfs.dht.linkto="puppet-replicate-0"
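
To illustrate the shape of such a dht linkto entry on the brick that only holds the pointer; the brick path and the target subvolume name here are hypothetical, echoing the fpaste above:

    # zero-byte, mode 1000 ("---------T.") placeholder on the hashed brick
    ls -l /bricks/puppet/.git/index
    getfattr -m . -d /bricks/puppet/.git/index
    # expect something like: trusted.glusterfs.dht.linkto="puppet-replicate-0"
    # i.e. "the real file lives on the puppet-replicate-0 subvolume"
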
11:16 glusterbot News from resolvedglusterbugs: [Bug 1226792] Statfs is hung because of frame loss in quota <https://bugzilla.redhat.com/show_bug.cgi?id=1226792>
11:16 firemanxbr joined #gluster
11:16 arcolife joined #gluster
11:16 JoeJulian You're going to have to check the client log to see where that error is coming from.
11:17 JoeJulian It's almost 4:30am and I haven't slept yet.... I've got to go sleep.
11:17 arcolife joined #gluster
11:18 harish_ joined #gluster
11:18 * JoeJulian drops the mic.
11:19 jmeeuwen__ have a good one, thanks!
11:20 arcolife joined #gluster
11:27 shubhendu joined #gluster
11:28 shyam joined #gluster
11:31 kshlm Weekly community meeting starts in 30 minutes on #gluster-meeting. Agenda: https://public.pad.fsfe.org/​p/gluster-community-meetings
11:48 arcolife joined #gluster
11:48 dgandhi joined #gluster
11:53 overclk joined #gluster
12:00 deniszh1 joined #gluster
12:01 kshlm Weekly community meeting starting now in #gluster-meeting
12:01 ppai joined #gluster
12:07 renm joined #gluster
12:10 DV__ joined #gluster
12:12 renm Hi all. Could I please get some advice on the recommended setup for a three node cluster. I am running an ovirt cluster and it seems to be running into high cpu on glusterfsd that greatly limits disk io. I have seen mention that having multiple bricks per host can help with this but I am not sure of how to setup replicate with bricks on the same system.
12:13 s19n renm, so you currently have one brick per node in replica 3?
12:14 renm s19n: Yes that is right.
12:15 hagarth renm: do you happen to know if self-heals are happening?
12:16 jtux joined #gluster
12:16 renm hagarth: self-heals seem to work fine - files will resync if there is anything missing
12:16 s19n renm: with two bricks per node you would end up having a similar setup, picture: https://joejulian.name/blog/how-to-expand-glusterfs-replicated-clusters-by-one-server/
12:17 s19n gah, sorry, this is replica 2, my bad.
12:17 hagarth renm: I have seen self-heals spike up cpu usage. was wondering if you were running into that.
12:18 hagarth renm: so more cpu cycles are being consumed in steady state (i.e. without any self-healing)?
12:20 renm hagarth: You are right as I have seen it spike up from self-heals as well but this seems to be from just a bit of disk io. Just doing a yum update on a vm is noticeably slower and that is without other disk load.
12:20 kanagaraj joined #gluster
12:21 kaushal_ joined #gluster
12:21 hagarth renm: the glusterfsd logs are worth a look into to see if there is anything unusual.
12:24 renm hagarth: I cannot see anything in the gluster logs that seems like errors. I think this is more that I am missing something on how it should be setup as performance is so much lower than local disk access.
12:24 hagarth renm: have you selected "optimize for virt" in ovirt after creating a gluster volume?
12:26 jwd joined #gluster
12:26 renm hagarth: I have and it does help until disk load increases. Then glusterfsd spikes and the disk speed drops.
12:28 jrm16020 joined #gluster
12:28 hagarth renm: might be worth a check to see what threads in glusterfsds are doing when you notice this .. strace -fp <pid> / gluster volume profile could help
12:29 renm hagarth: Good point - I will check with strace.
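
A short sketch of those two checks; the volume name and the pid lookup are illustrative assumptions:

    # per-brick latency and fop counts while the slowdown is happening
    gluster volume profile myvol start
    # ... reproduce the load, then:
    gluster volume profile myvol info
    # see what the brick process threads are actually doing (pick the right pid)
    strace -f -p "$(pgrep -o glusterfsd)"
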
12:31 spalai joined #gluster
12:32 DV__ joined #gluster
12:34 spalai joined #gluster
12:39 spalai left #gluster
12:47 hchiramm joined #gluster
12:49 jwd joined #gluster
12:53 ppai joined #gluster
12:57 kotreshhr left #gluster
13:07 marbu joined #gluster
13:09 jcastill1 joined #gluster
13:15 gletessier joined #gluster
13:25 cleong joined #gluster
13:26 jcastillo joined #gluster
13:28 chirino joined #gluster
13:30 shyam joined #gluster
13:31 aaronott joined #gluster
13:36 mbukatov joined #gluster
13:41 overclk joined #gluster
13:44 bennyturns joined #gluster
13:45 theron joined #gluster
13:54 klaxa|work joined #gluster
13:57 chirino joined #gluster
13:58 harold joined #gluster
14:01 coredump joined #gluster
14:06 ws2k3 joined #gluster
14:06 vincent_vdk joined #gluster
14:08 nishanth joined #gluster
14:08 masterzen joined #gluster
14:12 jotun joined #gluster
14:12 yoda1410 joined #gluster
14:17 al joined #gluster
14:20 jonb1 joined #gluster
14:21 DV joined #gluster
14:22 dgandhi joined #gluster
14:23 cleong left #gluster
14:27 mpietersen joined #gluster
14:28 theron_ joined #gluster
14:35 coredump joined #gluster
14:35 togdon joined #gluster
14:48 nbalacha joined #gluster
14:56 sc0001 joined #gluster
14:57 kayn__ joined #gluster
14:58 the-me could someone please shoot the person who is writing everytime "received" in his knees?
14:58 the-me ...
14:58 the-me ehh, now I am also confused after dozens of times fixing it... I meant: writing "recieved" instead of "received"
14:59 finknottle joined #gluster
15:00 finknottle Hi. I have a 3x2 distributed replicated setup. One of the nodes is misbehaving. I might have to reinstall the OS (RHS 3). I believe the data brick is intact. If I reinstall the OS without touching the brick, what would be needed to get everything back to normal?
15:01 haomaiwa_ joined #gluster
15:01 hagarth joined #gluster
15:02 overclk joined #gluster
15:03 haomaiwa_ joined #gluster
15:05 squizzi_ joined #gluster
15:06 theron joined #gluster
15:07 pseudonymous joined #gluster
15:07 Lee1092 joined #gluster
15:08 vimal joined #gluster
15:09 plarsen joined #gluster
15:10 kkeithley the-me: writing "received" in his knees?   huh?
15:10 jcastill1 joined #gluster
15:12 wehde joined #gluster
15:12 rwheeler joined #gluster
15:12 pseudonymous https://gist.github.com/anonymous/99f275920c44b67b9985 -- From this I gather my newly minted volume is a "replicated" volume. Am I correct in understanding that I need the number of bricks to be a multiple of the 'replica' value to get a 'distributed replicated volume', and.. in rough terms.. what is a distributed replicated volume? A volume whose contents are replicated 'replica' times across multiple nodes, where no one node has to be able to hold it all?
15:13 kshlm joined #gluster
15:14 l0uis pseudonymous: in a replicated volume bricks must be a multiple of replica, yes
15:15 wehde Can someone give me an opinion on my gluster setup? I'm running a vm on top of a gluster volume and that vm is windows. I want to let the vm handle windows shares (because of active directory integration). Should i store all our data in the vm qcow2 files or should i setup samba shares directly to gluster?
15:15 theron joined #gluster
15:16 l0uis pseudonymous: distributed just means you have more nodes than replica count, basically. so the volume's files are dispersed across bricks but not on every brick. when # of bricks > replica count you have a distributed replicated volume
15:17 glusterbot News from resolvedglusterbugs: [Bug 1231040] gf_log_callingfn's output make me dizzy <https://bugzilla.redhat.com/show_bug.cgi?id=1231040>
15:17 glusterbot News from resolvedglusterbugs: [Bug 1193893] [FEAT] Tool to find incremental changes from GlusterFS Volume <https://bugzilla.redhat.com/show_bug.cgi?id=1193893>
15:19 pseudonymous l0uis: great ! :) Thought I had understood it, but it is better to be sure.
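
A minimal example of the distinction, assuming four hosts and a replica count of 2 (hostnames and brick paths are made up):

    # 4 bricks / replica 2  ->  2 distribute subvolumes, each a 2-way replica
    gluster volume create myvol replica 2 \
        host1:/bricks/b1 host2:/bricks/b1 \
        host3:/bricks/b1 host4:/bricks/b1
    # each file lands on exactly one of the two replica pairs,
    # so no single pair has to hold the whole volume
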
15:19 skoduri joined #gluster
15:22 _Bryan_ joined #gluster
15:24 rwheeler joined #gluster
15:25 wehde is anyone using ctdb and samba?
15:27 jcastillo joined #gluster
15:44 jon__ should I do anything special if I want a volume that is dedicated to virtual machine images?
15:46 jon__ and should I put that replicated volume on top of raid?
15:51 cholcombe joined #gluster
15:56 ashiq joined #gluster
15:57 victori joined #gluster
15:58 ashiq joined #gluster
16:02 haomaiwa_ joined #gluster
16:06 Leildin wehde, we tried ctdb and samba, didn't resolve our lock issue
16:06 Leildin what's your problem ?
16:08 sankarshan_ joined #gluster
16:08 fyxim joined #gluster
16:11 glusterbot News from newglusterbugs: [Bug 1248123] writes to glusterfs folder are not synced to other nodes unless they are explicitly read from gluster mount <https://bugzilla.redhat.com/show_bug.cgi?id=1248123>
16:16 victori joined #gluster
16:24 calavera joined #gluster
16:24 samsaffron___ joined #gluster
16:31 billputer joined #gluster
16:31 smohan joined #gluster
16:35 ELCALOR joined #gluster
16:42 Peppard joined #gluster
16:49 mikemol_ joined #gluster
16:50 Rapture joined #gluster
17:01 victori joined #gluster
17:02 haomaiwa_ joined #gluster
17:05 sc0001 joined #gluster
17:06 overclk joined #gluster
17:24 jiffin joined #gluster
17:26 nsoffer joined #gluster
17:27 ashiq joined #gluster
17:28 togdon joined #gluster
17:37 timotheus1_ joined #gluster
17:37 calisto joined #gluster
17:37 jobewan joined #gluster
17:41 jcastillo joined #gluster
17:44 jiffin joined #gluster
17:47 glusterbot News from resolvedglusterbugs: [Bug 1144672] file locks are not released in frequently disconnects after apply BUG #1129787 patch <https://bugzilla.redhat.com/show_bug.cgi?id=1144672>
18:02 the-me kkeithley: would be a good idea.. :)
18:02 haomaiwa_ joined #gluster
18:03 kkeithley the-me:  you lost me
18:03 the-me kkeithley: ? :o
18:04 kkeithley what would be a good idea?
18:06 the-me to write on his knees "I will not write recieved again"
18:06 kkeithley ah
18:07 timotheus1__ joined #gluster
18:07 kkeithley who should do that, you?
18:07 the-me btw to fix the unix/fifo issue in stable: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=794003
18:07 calavera joined #gluster
18:07 the-me if he is here in my house, yes please xD
18:08 the-me 3.7.3-1 is now also in debian sid (also with my whole open set of submitted bug reports)
18:08 kkeithley sweet
18:09 JoeJulian That seems like a pretty easy script to write. If a patch is submitted for review with the misspelling, simply -1 it with the explanation.
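
As a rough illustration of the kind of check JoeJulian means (not an actual Gerrit hook; the spelling list is obviously incomplete):

    # flag added lines in the latest commit that contain the usual misspelling
    git diff HEAD~1 | grep -nE '^\+.*[Rr]ecieved' \
        && echo "-1: spelling: 'recieved' -> 'received'"
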
18:09 the-me https://guest:guest@svn.linux-dev.org/svn/pkg/glusterfs/trunk/debian/patches/ those apply against 3.7.3
18:11 JoeJulian NET::ERR_CERT_AUTHORITY_INVALID ... can that admin get a knee shot as well?
18:11 glusterbot News from newglusterbugs: [Bug 1234877] Samba crashes with 3.7.2 and VFS module <https://bugzilla.redhat.com/show_bug.cgi?id=1234877>
18:13 the-me JoeJulian: private server ;)
18:13 JoeJulian the-me: not willing to go through the development process to submit those? ,,(hack)
18:13 glusterbot the-me: The Development Work Flow is at http://www.gluster.org/community/documentation/index.php/Simplified_dev_workflow
18:14 the-me too much work
18:14 ramky joined #gluster
18:14 the-me I am maintaining a bunch of packages from different upstreams
18:15 JoeJulian Yet hunting down and shooting a poor little developer makes your agenda... ;)
18:16 the-me something more important: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=794003#10
18:16 kkeithley english is tuff stough. I cut people who may not be native speakers a bit of slack
18:17 the-me was there an important reason why the declaration had been changed from int to static int within the same commit?
18:18 the-me or could I safely revert both changes for just fixing the initial problem?
18:19 kkeithley probably no more important than they only have file scope anyway. Just truth-and-beauty.
18:19 kkeithley why does it matter? Just someone being super nitpicky?
18:20 JoeJulian bug 1244118
18:20 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1244118 medium, medium, ---, ndevos, MODIFIED , unix domain sockets on Gluster/NFS are created as fifo/pipe
18:20 mikemol_ So, I have a libgfapi (bareos-sd) client trying to open a file on a gluster volume. the libgfapi process is running as the bareos user, and "ls -l" in a fuse mount of the gluster volume shows everything in there is owned by bareos:bareos, with permissions of 0666.
18:20 the-me stable release update: just LOCs which solve the problem
18:20 shyam joined #gluster
18:21 mikemol_ And yet, I'm seeing this in my logs: 9-Jul 14:17 backup-director-sd JobId 918: Warning: mount.c:207 Open device "GlusterStorage4" (gluster://[redacted]/bareos/bareos) Volume "Email-Incremental-0155" failed: ERR=dev.c:616 Could not open: gluster://[redacted]/bareos/bareos/Email-Incremental-0155, ERR=Permission denied
18:21 * mikemol_ doesn't know if that truncated or not. It ends with "ERR=Permission denied".
18:22 mikemol_ This is gluster 3.7.2, on Cent7, using the gluster repos.
18:23 mikemol_ What can I do to identify if there's something about the gluster volume I can kick into shape, or if this is a bug in the consumer of libgfapi?
18:24 mikemol_ (This all worked fine with gluster 3.7.something-before-2, before I tried to expand from one brick to two, hit a crash, updated to 3.7.2 expanded and rebalanced, and found myself here.)
18:25 kkeithley mikemol_: don't know off the top of my head, but if you're open to another update, 3.7.3 RPMs were put up earlier today. You might try them and see if the problem goes away
18:25 the-me JoeJulian: so http://nopaste.linux-dev.org/?664488 should be fine :)
18:26 the-me eh sorry I mean kkeithley .
18:26 mikemol_ kkeithley: I just ran yum update less than two hours ago. I'm guessing 3.7.3 hasn't hit those repos?
18:27 kkeithley Oh. I guess you're using the CentOS storage SIG repos.
18:27 kkeithley the-me: yes, should be
18:28 mikemol_ kkeithley: http://download.gluster.org/pub/gluster/glusterfs/3.7/LATEST/EPEL.repo/epel-$releasever/$basearch/
18:29 kkeithley mikemol_: that was updated more than two hours ago, but yum has some funny (to me anyway) cache heuristics. A `yum clean all` and then update should make it see the new 3.7.3 RPMs
18:30 klaas joined #gluster
18:30 mikemol_ kkeithley: I'll try that. And if that still doesn't work, I'll see if one of my intermediate caching forward proxies is getting in the way.
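
For anyone following along, the suggested sequence is simply (the package glob is an assumption):

    yum clean all
    yum update 'glusterfs*'
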
18:32 mikemol_ Urgh. I hate errors like these: "http://download.gluster.org/pub/gluster/glusterfs/3.7/LATEST/EPEL.repo/epel-7/noarch/repodata/e3834f3e903bd5c33539c2746dee027ab93ff38c529b95900c3e36315f7c4681-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
18:36 coredump joined #gluster
18:37 kkeithley I just recreated the repo. I guess our guy who puts the repos up has a bug in his script
18:39 kkeithley yumdownloader from there just worked for me.
18:41 kkeithley s/recreated the repo/rebuilt the repo metadata files/
18:41 glusterbot What kkeithley meant to say was: An error has occurred and has been logged. Please contact this bot's administrator for more information.
18:41 kkeithley glusterbot ftw
18:42 shaunm_ joined #gluster
18:42 kovshenin joined #gluster
18:46 arcolife joined #gluster
18:47 mikemol_ That URI still doesn't work for me. Including skipping all proxies. And yum check-update still fails. Wish I knew enough about yum to know where it came from.
18:47 mikemol_ trying yumdownloader.
18:48 the-me update for stable has been approved :=)
18:49 mikemol_ yumdownloader does not work for me; it went and grabbed 3.6.0 from base, rather than anything from @glusterfs-epel.
18:50 mikemol_ kkeithley: download.gluster.org resolves to 50.57.69.89 here, which has a PTR record pointing back to download.gluster.org. Same for you?
18:50 mikemol_ Wondering if we may be getting our content served to us by different nodes somewhere.
18:54 kayn__ joined #gluster
18:59 ipmango_ joined #gluster
19:01 theron_ joined #gluster
19:01 haomaiwang joined #gluster
19:02 kayn joined #gluster
19:03 JoeJulian mikemol_: you have a stale repomd.xml
19:04 JoeJulian dnf clear all
19:04 JoeJulian (or yum, whichever you're using)
19:04 mikemol_ JoeJulian: Yeah, just found that one of my proxies has the stale one cached.
19:06 ipmango joined #gluster
19:10 jwd joined #gluster
19:11 aravindavk joined #gluster
19:12 TheCthulhu joined #gluster
19:12 mikemol_ Grabbed update fine after routing through a different internal proxy. Something's really ugly with that one; passing Cache-Control headers wasn't enough.. Will have to dig into it later.
19:15 kayn joined #gluster
19:19 ipmango joined #gluster
19:20 ashiq- joined #gluster
19:21 kayn_ joined #gluster
19:39 Rapture joined #gluster
19:40 theron joined #gluster
19:46 calavera joined #gluster
19:49 smohan joined #gluster
19:50 JoeJulian Aha!
19:52 JoeJulian I finally tracked down the reason that s/foo/bar/ no longer works for glusterbot. Supybot added threading and there's a bug in EL6's forking code where it tries to flush a fd that's already closed.
19:52 JoeJulian I guess it's time to build a new vm.
19:59 theron_ joined #gluster
20:02 haomaiwang joined #gluster
20:13 jbrooks joined #gluster
20:15 doekia joined #gluster
20:18 coredump joined #gluster
20:21 kaushal_ joined #gluster
20:24 mikemol_ kkeithley: Well, the update to gluster 3.7.3 does not solve the problem. :-|
20:25 shyam left #gluster
20:28 DV joined #gluster
20:30 calavera joined #gluster
20:39 theron joined #gluster
20:46 purpleidea JoeJulian: bz url?
20:46 glusterbot joined #gluster
20:46 purpleidea s/bz/bugzilla/
20:46 glusterbot purpleidea: Error: I couldn't find a message matching that criteria in my history of 114 messages.
20:46 JoeJulian purpleidea: I haven't looked for one yet. Mailing list finds so far.
20:47 purpleidea JoeJulian: well if you have something i can confirm and point at, lmk maybe i can poke someone... in parallel, if you don't find one, create one!
20:47 JoeJulian This is broken.
20:47 JoeJulian s/broken/fixed/
20:47 glusterbot What JoeJulian meant to say was: An error has occurred and has been logged. Please contact this bot's administrator for more information.
20:47 JoeJulian fudge
20:47 purpleidea oh shit
20:48 purpleidea s/foo/bar/
20:48 glusterbot What purpleidea meant to say was: An error has occurred and has been logged. Please contact this bot's administrator for more information.
20:48 csim ah ah
20:48 csim s/ah ah/An error has occurred and has been logged. Please contact this bot's administrator  for more information.
20:48 csim s/ah ah/An error has occurred and has been logged. Please contact this bot's administrator  for more information./
20:48 glusterbot What csim meant to say was: An error has occurred and has been logged. Please contact this bot's administrator for more information.
20:48 csim see, it work
20:50 kayn_ joined #gluster
20:50 chirino joined #gluster
20:50 glusterbot joined #gluster
20:53 JoeJulian still not fixed as of 2.7.9 apparently
20:55 glusterbot` joined #gluster
20:55 glusterbot joined #gluster
20:56 glusterbot joined #gluster
20:56 glusterbot_ joined #gluster
20:57 glusterbot joined #gluster
20:58 glusterbot joined #gluster
20:58 shaunm_ joined #gluster
20:58 glusterb8t joined #gluster
20:59 glusterbot joined #gluster
21:00 JoeJulian Damned blahblah spam.
21:00 JoeJulian s/blahblah/ChanServ/
21:00 glusterbot What JoeJulian meant to say was: Damned ChanServ spam.
21:00 JoeJulian suck it, python.
21:02 64MADG6SP joined #gluster
21:04 JoeJulian Ok, now someone who cares needs to improve this list of regex to fix karma tagging. #54: (\S+)\+\+,#55: (\S+)\-\-,#56: (\S+): \+\+,#57: (\S+): \-\-
21:04 JoeJulian @mp show --id 54
21:04 glusterbot JoeJulian: The action for regexp trigger "(\S+)\+\+" is "$1++"
21:10 sage_ joined #gluster
21:20 badone joined #gluster
21:24 Leildin joined #gluster
21:28 ipmango_ joined #gluster
21:33 theron joined #gluster
21:36 JoeJulian @paste
21:36 glusterbot JoeJulian: For a simple way to paste output, install netcat (if it's not already) and pipe your output like: | nc termbin.com 9999
21:37 the-me pnopaste-cli :)
21:38 JoeJulian I love that it doesn't require anything special.
21:39 shyam joined #gluster
21:39 shyam left #gluster
21:41 JoeJulian purpleidea: http://bugs.python.org/issue13444
21:41 the-me cat foo | nopaste-it
21:41 glusterbot Title: Issue 13444: closed stdout causes error on stderr when the interpreter unconditionally flushes on shutdown - Python tracker (at bugs.python.org)
21:42 nsoffer joined #gluster
21:42 the-me but you can use stuff like cat bla | nopaste-it -l C++, or Diff etc or set expires. :)
21:42 glusterbot the-me: C's karma is now 7
21:42 JoeJulian command not found
21:42 the-me apt-get install pnopaste-cli
21:42 JoeJulian apt-get: command not found
21:42 JoeJulian Don't forget, I support all distros.
21:42 the-me wrong OS xD
21:43 JoeJulian And all of them have nc.
21:43 the-me that's right
21:43 JoeJulian even osx
21:43 JoeJulian Windows doesn't, but who cares about windows..
21:44 plarsen joined #gluster
21:44 the-me did you ever try to open a window in a submarine?
21:44 JoeJulian No, but my buddy used to work on them. He opened windows all the time.
21:49 the-me ok then: wget 'http://sourceforge.net/p/pnopaste/code/HEAD/tree/trunk/bin/nopaste-it.pl?format=raw' -O /usr/local/bin/nopaste-it && chmod +x /usr/local/bin/nopaste-it
21:49 glusterbot Title: Perl Nopaste / Code / [r202] /trunk/bin/nopaste-it.pl (at sourceforge.net)
21:49 the-me :p
21:50 JoeJulian lol
21:51 JoeJulian I think I'll stick with termbin
21:51 JoeJulian Oh! It's perl! I'll definitely stick with termbin.
21:51 nzero joined #gluster
21:51 the-me I hate glusterfs for its python foo xD
21:52 JoeJulian Should we talk about ruby?
21:52 the-me there are more actively used and mutually incompatible versions of python in use than there are SAW movies
21:52 the-me should we better talk about HIV?
21:53 the-me ruby...
21:53 JoeJulian hehe
21:53 the-me what about a rewrite of glusterfs in e.g...
21:53 the-me hmmm
21:53 the-me node.js?
21:53 JoeJulian So we had this problem and installed a ruby package to solve it...
21:53 JoeJulian ... now we have two problems.
21:54 nzero easy, use jruby...
21:54 JoeJulian three...
21:54 the-me dafuq
21:56 JoeJulian I love whack and what he's done for logging, but damn.. jruby sure makes logstash a bitch to contribute to.
21:56 calavera joined #gluster
21:59 kovshenin joined #gluster
22:02 64MADG7ET joined #gluster
22:08 sc0001 hi
22:08 glusterbot sc0001: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
22:08 sc0001 it seems our cluster has started a self heal and the count of entries keeps increasing
22:09 JoeJulian Is that a bad thing?
22:09 sc0001 we have a 3 TB cluster and we had to remove one node out of 6, which caused this
22:09 sc0001 the count is above 10000 per brick
22:10 nsoffer joined #gluster
22:11 JoeJulian It sounds like a possibility, especially if the number of files is increasing, that one or more of your clients cannot connect to that server. Check firewalls and other connectivity issues.
22:12 sc0001 the servers are running at 80% of their cpu
22:13 sc0001 due to this the response from gluster is decreasing drastically
22:13 sc0001 causing upload requests to time out
22:16 JoeJulian Wait... 10k files on all 6 bricks?
22:16 JoeJulian Is this replica 6?
22:17 sc0001 Number of entries: 8680
22:17 sc0001 Number of entries: 0
22:17 sc0001 Number of entries: 9148
22:17 sc0001 Number of entries: 9539
22:17 sc0001 Number of entries: 0
22:17 sc0001 Number of entries: 10659
22:17 sc0001 Number of entries: 9745
22:17 sc0001 Number of entries: 0
22:17 sc0001 Number of entries: 9844
22:17 sc0001 Number of entries: 0
22:18 sc0001 Number of entries: 0
22:18 sc0001 Number of entries: 0
22:18 sc0001 Number of entries: 0
22:18 sc0001 Number of entries: 0
22:18 sc0001 Number of entries: 0
22:18 sc0001 Number of entries: 0
22:18 sc0001 Number of entries: 0
22:18 sc0001 Number of entries: 0
22:18 sc0001 it is Distributed-Replicate with Number of Bricks: 6 x 3 = 18
22:19 sc0001 replica 3
22:19 JoeJulian don't flood
22:19 JoeJulian @paste
22:19 glusterbot JoeJulian: For a simple way to paste output, install netcat (if it's not already) and pipe your output like: | nc termbin.com 9999
22:19 sc0001 srry
22:19 sc0001 thx
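
The flooded output above looks like per-brick heal counts; a hedged way to gather and share the same data without flooding, assuming a volume named myvol:

    gluster volume heal myvol info | nc termbin.com 9999
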
22:20 squizzi_ joined #gluster
22:23 kayn_ joined #gluster
22:29 sc0001 this has been the case for last 3 hours,
22:40 uebera|| joined #gluster
22:47 sc0001_ joined #gluster
22:54 JoeJulian sc0001_: Why are there 6 servers with heals pending instead of just two? There should only be two: the two replicas that were up when the one was down. You've had a network issue.
22:55 calavera joined #gluster
23:00 theron_ joined #gluster
23:02 haomaiwa_ joined #gluster
23:02 wushudoin| joined #gluster
23:07 shyam joined #gluster
23:07 wushudoin| joined #gluster
23:13 wushudoin| joined #gluster
23:13 plarsen joined #gluster
23:13 _Bryan_ joined #gluster
23:14 ira joined #gluster
23:19 aaronott joined #gluster
23:22 sc0001_ yes, previously we had a network issue and a split brain occurred and existed for a day; we found it and fixed the issue. Then the heal ran for a day, and today one of the servers went bad and stopped
23:22 sc0001_ the split brain was a week back
23:31 nzero joined #gluster
23:32 Mr_Psmith joined #gluster
23:51 dijuremo joined #gluster
