
IRC log for #gluster, 2015-03-02


All times shown according to UTC.

Time Nick Message
00:00 pelox joined #gluster
00:10 Leildin joined #gluster
00:34 T3 joined #gluster
00:42 cornus_ammonis joined #gluster
00:52 SOLDIERz joined #gluster
00:55 gildub joined #gluster
01:00 Thexa4 I figured out I can change the replica count. Hard to find in the documentation though
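For reference, the replica count Thexa4 mentions is changed as a side effect of add-brick / remove-brick rather than by a dedicated command. A minimal sketch, where "myvol", the server name and the brick path are placeholders and the exact remove-brick form varies a little between releases:

    # grow a 2-way replica volume to 3-way by adding one brick per replica set
    gluster volume add-brick myvol replica 3 server3:/data/brick1/myvol
    # shrink back to 2-way by dropping a brick from each set
    gluster volume remove-brick myvol replica 2 server3:/data/brick1/myvol force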
01:04 Thexa4 left #gluster
01:08 sputnik13 joined #gluster
01:17 bala joined #gluster
01:18 elico left #gluster
01:23 sputnik13 joined #gluster
01:25 Intensity joined #gluster
01:32 sputnik13 joined #gluster
01:41 harish joined #gluster
01:51 nitro3v joined #gluster
01:53 SOLDIERz joined #gluster
01:59 nitro3v joined #gluster
02:11 theron joined #gluster
02:13 nangthang joined #gluster
02:36 wkf joined #gluster
02:39 hagarth joined #gluster
02:47 ilbot3 joined #gluster
02:47 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:53 kshlm joined #gluster
02:54 SOLDIERz joined #gluster
03:28 hagarth joined #gluster
03:45 nbalacha joined #gluster
03:54 itisravi joined #gluster
03:54 SOLDIERz joined #gluster
03:56 bharata-rao joined #gluster
04:03 shubhendu joined #gluster
04:08 nitro3v joined #gluster
04:14 meghanam joined #gluster
04:23 RameshN joined #gluster
04:24 ndarshan joined #gluster
04:31 badone_ joined #gluster
04:31 spandit joined #gluster
04:31 RameshN joined #gluster
04:32 wkf joined #gluster
04:36 anoopcs joined #gluster
04:40 atinmu joined #gluster
04:47 yoavz joined #gluster
04:48 jiffin joined #gluster
04:51 soumya joined #gluster
04:54 jiffin1 joined #gluster
04:56 jiffin joined #gluster
04:59 jiffin1 joined #gluster
05:05 ppai joined #gluster
05:15 Apeksha joined #gluster
05:17 Manikandan joined #gluster
05:17 Manikandan_ joined #gluster
05:20 schandra joined #gluster
05:23 prasanth_ joined #gluster
05:27 Bhaskarakiran joined #gluster
05:29 vimal joined #gluster
05:31 kdhananjay joined #gluster
05:32 rjoseph joined #gluster
05:34 bala joined #gluster
05:38 smohan joined #gluster
05:39 hagarth joined #gluster
05:46 raghu joined #gluster
05:48 overclk joined #gluster
05:51 ramteid joined #gluster
05:58 sputnik13 joined #gluster
06:00 R0ok_ joined #gluster
06:00 dusmant joined #gluster
06:02 sputnik13 joined #gluster
06:05 sputnik1_ joined #gluster
06:12 deepakcs joined #gluster
06:36 Bhaskarakiran joined #gluster
06:44 anrao joined #gluster
06:46 bala joined #gluster
07:03 mbukatov joined #gluster
07:07 Philambdo joined #gluster
07:12 hagarth joined #gluster
07:19 jtux joined #gluster
07:19 bala joined #gluster
07:21 lalatenduM joined #gluster
07:32 soumya joined #gluster
07:37 dev-zero joined #gluster
07:37 dev-zero hi everyone
07:38 dev-zero does someone have an idea about this behaviour: we have a gluster volume mounted via fuse with VM disk images on it. Now we continuously see them being self-healed and the I/O is rather high
07:40 dev-zero that volume was freshly created two days ago
07:42 [Enrico] joined #gluster
07:46 victori joined #gluster
07:59 SOLDIERz joined #gluster
07:59 atinmu dev-zero, Are your bricks going down and coming back up?
08:00 atinmu dev-zero, itisravi can answer this better though
08:07 dev-zero atinmu: no, they are not
08:09 dev-zero maybe this is important: the server and the client run on the same machines (but we do not write directly to the bricks of course, only via the fuse mount)
08:27 andreask joined #gluster
08:33 fsimonce joined #gluster
08:35 ndarshan joined #gluster
08:37 dev-zero itisravi: ping? ^^
08:37 nshaikh joined #gluster
08:37 Bhaskarakiran joined #gluster
08:38 itisravi dev-zero: Does gluster vol heal info show anything?
08:42 kovshenin joined #gluster
08:46 paraenggu joined #gluster
08:50 hagarth joined #gluster
08:54 liquidat joined #gluster
08:58 deniszh joined #gluster
09:07 [Enrico] joined #gluster
09:07 dev-zero itisravi: no, "Number of entries: 0"
09:12 dev-zero itisravi: it is only the gluster vol heal info healed command which shows all of the images every 10 minutes
09:13 itisravi dev-zero: 'info healed' is deprecated. It shows prior heals. What version of gluster are you running?
09:14 dev-zero itisravi: v3.5.3-26-g526448e plus a backport of http://review.gluster.org/7857
09:15 dev-zero itisravi: what do you mean by "prior heals" ?
09:15 RaSTar joined #gluster
09:15 dev-zero itisravi: why do we get new entries every 10 minutes?
09:17 itisravi dev-zero: I meant 'gluster vol heal volname info healed' will show the history of heals that might have happened earlier.
09:18 itisravi dev-zero: I thought the command was disabled in v3.5. But maybe I'm wrong.
09:18 Slashman joined #gluster
09:19 itisravi dev-zero: As long as "gluster vol heal volname info" shows zero entries, you can be sure that self-heal daemon doesn't have to do any heals.
09:20 dev-zero itisravi: ok, but why does it keep adding our files to the list of files healed every 10 minutes?
09:22 glusterbot News from newglusterbugs: [Bug 762184] Support mandatory locking in glusterfs <https://bugzilla.redhat.com/show_bug.cgi?id=762184>
09:22 itisravi dev-zero: Not too sure.
09:25 aravindavk joined #gluster
09:27 kumar joined #gluster
09:28 dev-zero itisravi: and in the glfsheal log of the volume we see the following entries:
09:28 atinmu joined #gluster
09:32 itisravi dev-zero: The glfsheal log is populated when you run 'gluster vol heal volname info' and not when the heal actually happens. For the logs related to actual healing, you need to check the glustershd.log
09:33 dev-zero itisravi: ah, so every time we run 'gluster vol heal volname info' the client connects to the volumes and pulls the info?
09:34 itisravi dev-zero: sorta. when you run the command, the 'glfsheal' binary is run with the volname as the argument. The glfsheal binary is a client process in the sense that it loads the client side xlators.
09:37 Pupeno joined #gluster
09:37 Pupeno joined #gluster
09:37 monotek1 joined #gluster
09:42 dev-zero itisravi: ok, thanks
09:43 dev-zero itisravi: the basic issue is that we have poor performance and high I/O wait
09:43 dev-zero itisravi: the backend storage can easily pack 200MB/s, the interconnect is a 10GbE ethernet
09:45 dev-zero itisravi: our only clue so far was the list of images in "gluster vol heal volname info healed" growing
09:45 itisravi dev-zero: Are the time stamps of the file in info healed different?
09:45 itisravi and increasing?
09:45 dev-zero yes
09:46 itisravi dev-zero: I guess that means there are frequent network disconnects and the glustershd is always healing something.
09:47 itisravi After the heal completes, it no longer shows up in 'info' but appears in 'info healed'
09:48 itisravi dev-zero: FWIW, you could set the "cluster.data-self-heal-algorithm" to "full"
09:48 itisravi That would improve the performance in VM workloads.
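Pulling itisravi's suggestions together, the commands look like this ("myvol" is a placeholder volume name):

    # heals still pending; a healthy volume stays at "Number of entries: 0"
    gluster volume heal myvol info
    # history of completed heals (deprecated in newer releases)
    gluster volume heal myvol info healed
    # copy whole files instead of computing diffs, usually faster for VM images
    gluster volume set myvol cluster.data-self-heal-algorithm full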
09:49 dev-zero itisravi: https://bpaste.net/show/d04cda736de1
09:50 itisravi dev-zero: okay.
09:51 ndarshan joined #gluster
09:52 glusterbot News from newglusterbugs: [Bug 1188184] Tracker bug: NFS-Ganesha new features support for 3.7. <https://bugzilla.redhat.com/show_bug.cgi?id=1188184>
09:52 glusterbot News from newglusterbugs: [Bug 1197631] glusterd crashed after peer probe <https://bugzilla.redhat.com/show_bug.cgi?id=1197631>
09:52 glusterbot News from newglusterbugs: [Bug 1194640] Tracker bug for Logging framework expansion. <https://bugzilla.redhat.com/show_bug.cgi?id=1194640>
09:54 hagarth joined #gluster
09:56 nitro3v joined #gluster
10:04 gildub joined #gluster
10:10 social joined #gluster
10:11 ppai joined #gluster
10:16 ira_ joined #gluster
10:44 harish joined #gluster
10:56 firemanxbr joined #gluster
11:00 Norky joined #gluster
11:03 dev-zero hmm, we have no network disconnects but the glustershd still keeps adding files to the list of healed files
11:03 dev-zero anyone else any idea?
11:05 ndarshan joined #gluster
11:08 kanagaraj joined #gluster
11:10 SOLDIERz joined #gluster
11:14 ndarshan joined #gluster
11:15 rsf joined #gluster
11:24 gildub joined #gluster
11:34 rsf joined #gluster
11:49 lifeofguenter joined #gluster
11:50 ekuric joined #gluster
11:54 tigert joined #gluster
12:00 ppai joined #gluster
12:03 meghanam joined #gluster
12:09 o5k_ joined #gluster
12:11 o5k_ left #gluster
12:16 [Enrico] joined #gluster
12:17 jiku joined #gluster
12:34 SOLDIERz joined #gluster
12:39 misc mhh seems we have an issue with slave24 and 25
12:41 misc no ssh, but salt-minion work
12:42 diegows joined #gluster
12:45 misc ok, so nope, salt does not work
12:49 ppai joined #gluster
12:49 lalatenduM joined #gluster
12:54 o5k joined #gluster
12:54 hagarth joined #gluster
12:55 LebedevRI joined #gluster
13:07 bfoster joined #gluster
13:09 rwheeler joined #gluster
13:19 ppai joined #gluster
13:20 chirino joined #gluster
13:27 ctria joined #gluster
13:29 misc # salt -t 60 slave25.cloud.gluster.org file.access /etc/passwd r
13:29 misc slave25.cloud.gluster.org: False
13:29 misc yeah
13:30 maveric_amitc_ if I have set cluster.min-free-disk on a volume and all the bricks have crossed the limit, will I still be able to create empty files and folders in it...?
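For reference, the option maveric_amitc_ is asking about is set per volume; whether metadata-only operations such as creating empty files are still allowed once every brick is past the threshold is exactly the open question here. A sketch with a placeholder volume name and an illustrative threshold:

    # steer new data away from bricks with less than 10% free space
    gluster volume set myvol cluster.min-free-disk 10%
    # the value shows up under "Options Reconfigured"
    gluster volume info myvol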
13:34 B21956 joined #gluster
13:34 B21956 left #gluster
13:42 kshlm joined #gluster
13:45 T3 joined #gluster
13:45 SOLDIERz joined #gluster
13:48 rsf left #gluster
13:49 T0aD joined #gluster
13:49 new_gfs joined #gluster
13:49 julim joined #gluster
13:50 B21956 joined #gluster
13:51 ctria joined #gluster
13:55 skroz joined #gluster
13:55 raghu left #gluster
13:56 wkf joined #gluster
13:59 nshaikh joined #gluster
14:01 shubhendu joined #gluster
14:06 soumya joined #gluster
14:09 SOLDIERz joined #gluster
14:10 overclk joined #gluster
14:16 skroz I've started seeing these in my logs every three seconds:
14:16 skroz [2015-03-02 14:15:10.510732] W [socket.c:611:__socket_rwv] 0-management: readv on /var/run/cf4f1b7e6f12ef9266e2856ce7c75b92.socket failed (Invalid argument)
14:16 plarsen joined #gluster
14:17 skroz should I be concerned?
14:18 skroz that socket exists, but I see no associated open fd
14:18 SOLDIERz joined #gluster
14:19 T3 joined #gluster
14:19 tigert joined #gluster
14:20 dgandhi joined #gluster
14:21 dgandhi joined #gluster
14:23 dgandhi joined #gluster
14:24 harish joined #gluster
14:24 dgandhi joined #gluster
14:24 theron joined #gluster
14:31 georgeh-LT2 joined #gluster
14:37 semiosis maveric_amitc_: i'm around
14:41 squizzi joined #gluster
14:42 mator skroz, https://bugzilla.redhat.com/show_bug.cgi?id=847821
14:42 glusterbot Bug 847821: low, medium, ---, bugs, NEW , After disabling NFS the message "0-transport: disconnecting now" keeps appearing in the logs
14:50 misc ndevos: nice finding
14:50 ndevos misc: did you send patches for glusterfs yet?
14:51 misc ndevos: nope, never
14:52 ndevos misc: do you want to?
14:53 misc ndevos: it seems like a trap :)
14:53 nitro3v joined #gluster
14:53 ndevos misc: one day I might send a patch for your salt config?
14:54 misc but in this case, I would surely be able to follow the process, but I am not sure I understand what the test is doing
14:54 misc ndevos: ok, so how do I proceed :)
14:55 ndevos misc: get an account on review.gluster.org
14:56 ndevos misc: after uploading a public ssh key, clone the repo from ssh://misc@git.gluster.org/glusterfs
14:56 * misc log with fedora
14:57 ndevos misc: after that, start a new branch for the issue you want to fix, like: git checkout -t -b tests/dont-mv-passwd origin/master
14:57 ndevos make the change, write a decent commit message and add your Signed-off-by
14:58 misc mhh so we have a Contributor agreement ?
14:58 ndevos no, we dont
14:58 misc well, gerrit tells me to look at it, or to just ignore it
14:58 misc as I am a rebel, I ignored it :)
14:58 ndevos :)
14:58 ndevos when you are happy with your commit, run ./rfc.sh to post the patch for review
14:59 ndevos it will ask you for a bug, you can use 1163543 for test case fixes
14:59 misc JustinClift: remind me, why the git from rpmforge ?
15:00 ndevos misc, JustinClift: I would expect any git would do?
15:00 nangthang joined #gluster
15:00 lalatenduM joined #gluster
15:00 misc ndevos: yeah, I would too
15:00 ndevos misc: and for the change, just create a temporary file in /tmp. use that instead of /etc/passwd and delete the file afterwards
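Putting ndevos' walkthrough into one sequence (the branch name, commit subject and bug number are the ones mentioned above; replace "misc" with your own Gerrit username):

    git clone ssh://misc@git.gluster.org/glusterfs
    cd glusterfs
    # topic branch for the fix, tracking master
    git checkout -t -b tests/dont-mv-passwd origin/master
    # ...edit the test to use a temporary file in /tmp instead of /etc/passwd...
    git commit -s -m "tests: prevent deleting /etc/passwd"   # -s adds the Signed-off-by line
    # post the change to review.gluster.org; it asks for a bug id (1163543 for test case fixes)
    ./rfc.sh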
15:03 bene2 joined #gluster
15:07 vstokes_ joined #gluster
15:08 vstokes_ server crashed and had to reload the OS. How can I import an existing volume? When I try 'volume create' it says the volume already exists.
15:08 JustinClift misc ndevos ?
15:08 JustinClift Ahhh
15:09 JustinClift Yeah, so the version of git that comes with standard CentOS 6 is too old
15:09 JustinClift It doesn't support one of the options we need when doing git checkouts
15:10 JustinClift Thus the rpmforge version of git seemed like the best approach to getting a version that does
15:10 JustinClift misc: If you know of a better way to get a later version than supplied with CentOS 6, then by all means go for it :)
15:11 ndevos misc: oh, and as commit subject, you would use something similar to "tests: prevent deleting /etc/passwd"
15:12 misc JustinClift: mhh, what option ?
15:12 misc even if it might be too late to ask for a backport :/
15:13 mator vstokes_, start from reading admin guide ?
15:16 vstokes_ Yes, I looked at the admin guides. I don't see anything on how to import an existing volume. Unless it's called something else.
15:19 misc Running coding guidelines check ...
15:19 misc Must be run from the top-level dir. of a GlusterFS tree
15:19 misc mhhhh
15:23 mator vstokes_, well, usually you don't need to import anything for a volume to work; did you perhaps do some kind of glusterfs upgrade?
15:31 vstokes_ No. The gluster was already updated to 3.6.  The OS had to be reloaded and we reinstalled gluster-server. But I can't import/create my volumes.
15:31 nmbr joined #gluster
15:32 _Bryan_ joined #gluster
15:34 theron joined #gluster
15:38 JustinClift vstokes_: Hmmm, that sounds like it could be tricky
15:38 JustinClift vstokes_: Any chance you backed up the GlusterFS config first?
15:39 semiosis vstokes_: path or a prefix of it
15:39 glusterbot semiosis: To clear that error, follow the instructions at http://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/ or see this bug https://bugzilla.redhat.com/show_bug.cgi?id=877522
15:39 JustinClift semiosis: Cool :)
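The fix behind glusterbot's link boils down to clearing the old volume markers from the brick directory before reusing it; roughly the following, where the brick path is a placeholder and this is only appropriate if you really intend to reuse that brick:

    setfattr -x trusted.glusterfs.volume-id /data/brick1/myvol
    setfattr -x trusted.gfid /data/brick1/myvol
    rm -rf /data/brick1/myvol/.glusterfs
    # restart glusterd afterwards (however your distro does it)
    service glusterd restart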
15:46 meghanam joined #gluster
15:46 JustinClift misc: I'm trying to remember which git option...
15:49 ira joined #gluster
15:49 JustinClift misc: Am I ok to rebuild those two slave VM's, or is this a good time to test out your Ansible building VM stuff? :)
15:49 misc JustinClift: you can rebuild, I haven't finished the setup yet
15:50 misc JustinClift: but if you give me your ssh key, I can add you on salt-master.gluster.org to approve new servers
15:50 misc for now, I cover
15:50 bala joined #gluster
15:50 misc http://www.gluster.org/community/documentation/index.php/Jenkins_setup#Install_additional_required_packages everything up to "create the jenkins user"
15:50 ira joined #gluster
15:51 bene3 joined #gluster
15:51 JustinClift misc: Sure, I'll email it to you in a min
15:53 JustinClift misc: Its in your inbox
15:53 misc JustinClift: will do after meeting
15:53 misc ( or rather during meeting )
15:59 luis_silva joined #gluster
16:02 luis_silva1 joined #gluster
16:06 sputnik13 joined #gluster
16:09 misc JustinClift: in fact, your key is already here
16:10 misc JustinClift: http://www.gluster.org/community/documentation/index.php/Saltstack_infra adding a server to the pool
16:16 dev-zero ndevos: ping?
16:17 deepakcs joined #gluster
16:17 dev-zero ndevos: do you happen to know whether it is possible to get paid support from you/RedHat for glusterfs only?
16:18 misc I am not in sales, but I think RH usually does support only on the product
16:18 misc but I also guess this is a question of money :)
16:18 kodokuu joined #gluster
16:18 misc ( we have a list of gluster consultants on the web, iirc )
16:19 kodokuu Hi, I'm trying to install glusterfs on RHEL 7.1 but the first start gives errors: http://pastebin.com/sJhxt3ES
16:19 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
16:21 johnmark joined #gluster
16:21 kodokuu http://fpaste.org/192267/
16:22 dev-zero misc: yes, I checked them out. There is one which does something similar to what we're doing and may have experience in that field. On the other hand it is always better to get support where most of the development happens as well
16:24 misc dev-zero: yeah, I can see that
16:24 misc dev-zero: I am not in the RH office at the moment, but I sit near the local boss of consulting; I would have asked otherwise
16:25 dev-zero misc: do you mind relaying the contact data (pm) so I could ask him?
16:27 misc dev-zero: I did already ask a consultant
16:27 misc dev-zero: now, he is in France, not sure what part of the world you are in
16:27 dev-zero misc: Switzerland, not that far away :)
16:27 misc dev-zero: indeed
16:28 JustinClift dev-zero: Just to be sure... you're aware of the Red Hat Storage product yeah?
16:28 * JustinClift kinds of expects you'll be pointed towards that by Red Hat :)
16:29 dev-zero JustinClift: yeah, we know
16:29 JustinClift No worries :)
16:29 dev-zero JustinClift: but we use Gentoo usually and built our own "solution"
16:29 JustinClift Gotcha
16:30 misc I think enovance does this kind of consulting for openstack
16:30 misc and I know we have sent some coders to big customers for meetings, and this kind of stuff
16:31 misc but doing software consulting is a double-edged sword, as we do not have the resources to support all types of stuff and because we also ideally want to leave some part of the market to other folks
16:32 misc since otherwise, no one external works on the product full time if they cannot somehow make a living
16:33 misc ( or at least, that's what I would do if I was CEO )
16:36 bennyturns joined #gluster
16:37 dev-zero misc: eNovance belongs to RedHat as well :)
16:38 dev-zero misc: but yes, I agree
16:39 misc dev-zero: yep, but they have a different business model
16:39 misc because to them, openstack was seen as "not ready", so they switched to a hybrid consulting/coding approach, which I found quite interesting
16:40 dev-zero that was also the reason why we started to work on our own stack couple of years ago
16:41 misc we would surely welcome a gentoo builder :)
16:41 wushudoin joined #gluster
16:46 T3 joined #gluster
16:49 vipulnayyar joined #gluster
16:53 T3 joined #gluster
16:56 coredump joined #gluster
16:56 dev-zero misc: what do you mean?
16:56 dev-zero misc: there is already a glusterfs package in Gentoo
16:57 dev-zero misc: occasionally maintained by myself ;-)
16:58 misc dev-zero: looking for a jenkins builder
16:58 misc so we can be sure it doesn't break on gentoo
16:58 luis_silva joined #gluster
16:58 misc even if the concept of "on gentoo" is quite nebulous, since this depends on USE flags
17:02 jobewan joined #gluster
17:05 diegows joined #gluster
17:06 social joined #gluster
17:07 PeterA joined #gluster
17:10 julim joined #gluster
17:18 vipulnayyar joined #gluster
17:23 JustinClift misc: Do you remember if review.gluster.org is normally in enforcing mode or not for SELinux?
17:23 glusterbot News from newglusterbugs: [Bug 1194559] Make compatible version of python bindings in libgfapi-python compared to libgfapi C apis. <https://bugzilla.redhat.com/show_bug.cgi?id=1194559>
17:24 * JustinClift ran system-config-securitylevel to adjust a firewall port... and now the system is not in enforcing mode
17:24 JustinClift But I don't remember if it was enforcing or not before
17:24 JustinClift :/
17:27 sputnik13 joined #gluster
17:27 firemanxbr JustinClift, [root@gerrit ~]# getenforce
17:27 firemanxbr Permissive
17:28 firemanxbr in my Gerrit 2.10
17:28 social joined #gluster
17:28 firemanxbr this option is configured with permissive.
17:29 dev-zero misc: the USE flags you are interested in correspond to the configure-flags
17:31 misc dev-zero: well, from a builder point of view, having all options enabled lets us see what fails to build; at the same time, having less common build flags might show bugs
17:32 dev-zero misc: sure, but figuring those out is usually "our" (downstream distro devs) job
17:33 dev-zero fixing build system bugs is really generic and we usually have enough knowledge for that around; the only thing we appreciate is when upstream listens when we submit patches or make recommendations
17:34 dev-zero ... which in the case of glusterfs was not much of a problem so far
17:36 dev-zero so, while a Gentoo builder may be worthwhile, I doubt you would get any improvement out of it that we couldn't provide already
17:36 dev-zero so, please focus completely on the app itself because that's where we usually cannot help
17:42 victori joined #gluster
17:46 JustinClift Hmmm, is anyone else having trouble getting through to the review.gluster.org webUI?
17:46 * JustinClift suspects he's locked himself out of the www site
17:47 firemanxbr JustinClift, no access for me either
17:47 JustinClift Ugh
17:47 JustinClift Looking at the firewall rules... I can't see how it's supposed to get access anyway
17:48 firemanxbr JustinClift, but ping is alive; I believe this is the http reverse proxy: (110) Connection timed out
17:48 JustinClift Could be
17:49 JustinClift This is the backup of the iptables config before I made a tweak to the ssh ports:
17:49 JustinClift http://fpaste.org/192305/14253185/
17:49 JustinClift I'm not seeing anything in there for allowing http?
17:50 firemanxbr JustinClift, please open 80 and 443 port
17:50 JustinClift Yeah, I'll try that
17:50 * JustinClift doesn't understand how it was working before then ;)
17:50 firemanxbr JustinClift, I'm creating... here's a config.
17:51 JustinClift ... and now it's working again
17:51 firemanxbr JustinClift, http://ur1.ca/ju2g6
17:51 JustinClift firemanxbr: I've just added 80 and 443
17:52 firemanxbr JustinClift, +29418 (gerrit ssh port :))
17:52 firemanxbr JustinClift, yeh, Web Gui is ALIVE :D
17:52 JustinClift I'm using system-config-securitylevel-tui, because that's apparently what people on this box use (according to the iptables comments)
17:52 * firemanxbr yesssssss
17:52 JustinClift firemanxbr: Still need 29418?
17:52 firemanxbr JustinClift, yes, for push
17:53 JustinClift k, 1 min
17:53 firemanxbr JustinClift, Ok :D
17:53 JustinClift tcp only?
17:54 firemanxbr julim, yep
17:54 * firemanxbr ops
17:54 jmarley joined #gluster
17:54 firemanxbr JustinClift, yep TCP
17:54 JustinClift Cool
17:54 JustinClift firemanxbr: This is what it is now: http://fpaste.org/192311/42531888/
17:55 julim hi firemanxbr. what's up?  not sure who this is...
17:55 JustinClift julim: He meant me, but tab completed wrong and got you :)
17:55 firemanxbr julim, sorry, wrong nick :)
17:55 lalatenduM joined #gluster
17:55 julim no worries firemanxbr @JustinClift
17:56 firemanxbr JustinClift, nice, but for pushes to gerrit, do you use 29418 or 22?
17:57 JustinClift 22
17:57 firemanxbr JustinClift, in the default configuration Gerrit uses 29418; very simple and without script kiddies :D
17:57 * JustinClift nods
17:57 JustinClift Everything is set up for 22 on the existing host though
17:57 JustinClift Can I turn off 29418 then, or should we leave it still open?
17:58 JustinClift (might be easier for migration if we leave it still open?)
17:58 firemanxbr JustinClift, okay, no problem, you can close 29418, it is not necessary
17:58 JustinClift :)
17:59 JustinClift Done
17:59 firemanxbr JustinClift, I hope to have news this week, I studied the whole weekend to have a very complete environment.
18:02 JustinClift Cool. :)
18:02 JustinClift I was busy breaking my 3D printer on the weekend :(
18:03 firemanxbr JustinClift, lol :D
18:12 Rapture joined #gluster
18:13 jackdpeterson Hey all, looking to see what I need to do to get my test instance of gluster writable again (replica 3, cluster.quorum-count: 2, cluster.quorum-type: fixed, 3 nodes). One node failed out (testing), things okay. Failed the second node out (simulate zone outage), brought both back in ... getting Read-only file system (30) rsync error: error in file IO (code 11) at receiver.c(389) [receiver=3.1.0]
18:13 jackdpeterson it's been like that now over the weekend and so forth, so the auto healing doesn't appear to be doing its magic.
18:15 jackdpeterson mount params: fuse.glusterfs rw,relatime,user_id=0,group_id=0,default​_permissions,allow_other,max_read=131072 0 0
18:19 jiku joined #gluster
18:26 papamoose joined #gluster
18:26 jackdpeterson Regarding the above, I see https://www.mail-archive.com/users@ovirt.org/msg23611.html -- with the volume reset for cluster.quorum-type -- appears to get the volume writable ... but why isn't this an automatic event that's part of self-heal?
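For context, the settings jackdpeterson describes and the reset used in the linked ovirt thread look roughly like this ("myvol" is a placeholder); with quorum-type fixed and quorum-count 2, client writes are refused whenever fewer than two bricks of the replica set are reachable:

    gluster volume set myvol cluster.quorum-type fixed
    gluster volume set myvol cluster.quorum-count 2
    # the workaround from the linked thread: drop back to the default quorum type
    gluster volume reset myvol cluster.quorum-type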
18:29 frakt joined #gluster
18:29 JoeJulian jackdpeterson: I would look in the logs for clues. Would likely be at the timestamp where the cluster regained quorum.
18:31 jackdpeterson @JoeJulian, hmm okay. nothing interesting so far. I'll reproduce this failure again to see if I can get anything useful today
18:33 jiku joined #gluster
18:35 victori_ joined #gluster
18:42 wushudoin joined #gluster
18:46 deniszh joined #gluster
18:56 SOLDIERz joined #gluster
18:58 T0aD joined #gluster
19:04 ghenry joined #gluster
19:04 ghenry joined #gluster
19:08 hagarth joined #gluster
19:09 rwheeler joined #gluster
19:14 frakt joined #gluster
19:21 kminooie joined #gluster
19:23 kminooie happy Monday everyone
19:24 JoeJulian howdy
19:24 kminooie hey joe
19:24 kminooie I was checking the log files and I saw this warning: 0-management: op_ctx modification failed
19:25 aravindavk joined #gluster
19:25 kminooie what does that mean?
19:25 kminooie I get this when I try to run the heal command ( heal fails )
19:27 luis_silva joined #gluster
19:29 nobody18288181 joined #gluster
19:29 nobody18288181 is it safe to mount gluster on multiple machines and do writes on those mounts at the same time?
19:30 kminooie nobody18288181: yes that is kinda the whole point of it
19:30 nobody18288181 kminooie: just want to make sure as I know you can't mount RBD in ceph on multiple servers without corrupting data
19:32 JoeJulian nobody18288181: Only issue with that is that your software should use posix locks to prevent simultaneous writes. The same as if two apps on the same computer were accessing the same file.
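A trivial illustration of JoeJulian's point, using flock(1) on the shared mount (the paths are made up): whichever client grabs the lock first writes, the others wait.

    # run the same thing from any number of clients; writes are serialized
    flock /mnt/gluster/shared/app.lock \
        -c 'echo "$(hostname) $(date)" >> /mnt/gluster/shared/events.log'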
19:36 JustinClift misc: You around
19:36 JustinClift ?
19:36 JustinClift misc: Need help with firewall on review.gluster.org
19:36 JustinClift I can't get a port mapping from port 29418 to 22 working
19:36 JustinClift No idea why
19:38 lalatenduM joined #gluster
19:39 misc JustinClift: use xinetd
19:42 JustinClift ?
19:43 misc well, i ma in the train :)
19:43 JustinClift Heh
19:43 misc give me 20 minutes until I get enough network
19:43 JustinClift misc: Looking at the xinetd.d files though, it's not a bad idea
19:44 JustinClift Whoever first set this up used /etc/rc.local to set the port mapping rule
19:45 misc hurf
19:46 JustinClift And that + our /etc/sysconfig/iptables config... they don't play nice together
19:46 JustinClift misc: If I stop the iptables service, and just leave in the manual port map from /etc/rc.local, it works
19:46 JustinClift But the second the iptables service starts... gerrit ssh stops
19:47 JustinClift misc: Probably need to merge the manual rule into sysconfig/iptables
19:47 JustinClift But I have no idea how :/
19:48 misc JustinClift: well, i can take a look at that too
19:48 misc but I would rather do that with a working network
19:48 JustinClift Understood
19:49 JustinClift I think we can wait the 20 mins until you have enough network
19:49 misc ( not on the small islands of connectivity I get in the tunnel )
19:51 misc http://fpaste.org/192378/25325895/ so that's the snippet I used in puppet for port redirection
19:52 misc ( that's an erb template, I guess it's quite transparent as to what it does )
19:52 firemanxbr humm, good option :]
19:53 misc much simpler than fiddling with iptables
19:53 misc and less risky :)
19:53 misc ( downside, all connections appear to come from 127.0.0.1, which can break fail2ban and stuff like this )
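For reference, the rc.local-style redirect being discussed is a single nat rule (assuming the goal is simply to send Gerrit's 29418 to the local sshd on port 22); saving it is what keeps it from being lost when the iptables service restarts:

    iptables -t nat -A PREROUTING -p tcp --dport 29418 -j REDIRECT --to-ports 22
    # persist into /etc/sysconfig/iptables so a service restart keeps the rule
    service iptables save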
20:00 misc JustinClift: I can't connect to review on port 21
20:01 misc I hope that's not my isp playing games
20:02 mikemol So, we have a somewhat predictable memory leak on a gluster mount. ~4GB over 42 days. The leak isn't as bad or as fast as it had been in previous versions.
20:02 JustinClift misc: 1 sec, I'll check from here
20:02 JustinClift misc: I can still get in remotely
20:02 mikemol I know the gluster devs are always looking for a way to identify and squash leaks. So since we know this leak is going to come back after cycling the host, I'd like to know what to instrument and monitor to help the devs identify the cause...
20:02 JustinClift misc: Can you ssh into say www.gluster.org, and go from there
20:02 JustinClift semiosis: ^ (2 lines up)
20:03 misc JustinClift: yeah, I am doing a ssh trick
20:03 JustinClift I've tried merging the mapping rule into /etc/sysconfig/iptables
20:03 mikemol (Currently using glusterfs-fuse 3.6.2-1.el6 out of the glusterfs-epel repo on CentOS 6.)
20:04 JustinClift misc: Lets move our conversation to #gluster-dev (the dev channel) :)
20:04 kminooie so when one upgrades gluster, how does that affect the internal configurations? or why am I seeing these lines in my brick log files:  http://fpaste.org/192384/14253266/
20:18 semiosis JustinClift: ???
20:18 JustinClift semiosis: mikemol's question a few lines back
20:20 semiosis JustinClift: slabtop?
20:20 semiosis idk, i'm not a gluster dev :)
20:22 JustinClift kkeithley: ^ :)
20:22 semiosis mikemol: are you sure it's gluster that is leaking memory, not some app using gluster?  do you know which process is consuming the memory?  sounds like your client mount
20:23 mikemol semiosis: gluster fuse, I believe. This was something we identified months ago, but a software update to gluster mostly dealt with the problem. Now, instead of needing a reboot after a few days, it can last a couple months.
20:25 mikemol semiosis: Eh. I'll monitor the memory usage of glusterfs and glusterd over the next month and verify which process is consuming the memory.
20:26 JustinClift mikemol: Thanks
20:26 JustinClift Now I think about it, I think Kaleb was interested in this memory leak stuff ( kkeithley here in IRC )
20:26 mikemol It's not a super-high priority for us, given the leak doesn't take a node down in under a week any more. I'd bet a week's wages it's in a gluster process.
20:27 mikemol Just looking for advice on what I can instrument or monitor to help pinpoint where in gluster the problem is (assuming it's in gluster).
20:27 DV joined #gluster
20:28 hchiramm__ joined #gluster
20:31 JoeJulian mikemol: Yep, there's at least one gluster memory leak that nobody's been able to find. Pranith has brought up the idea of changing their memory allocation model to mimic that of the kernel, but that never grew any legs. In the mean time, the translators allocate memory from a shared pool. There's no way to track which translator allocated the memory so, short of just disabling translators one-by-one and seeing if the problem goes away, there's
20:31 JoeJulian no way to isolate the leak.
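One low-effort way to gather data for the devs while the leak grows is to track the client's RSS and take periodic statedumps, which include per-xlator memory accounting; a sketch, assuming the fuse mount's glusterfs process and the usual /var/run/gluster dump location:

    # resident memory of the client processes over time
    ps -C glusterfs -o pid,rss,etime,args
    # ask a client for a statedump; files typically appear in /var/run/gluster
    # as glusterdump.<pid>.dump.<timestamp>
    kill -USR1 <pid-of-the-glusterfs-mount-process>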
20:32 lpabon joined #gluster
20:33 mikemol JoeJulian: Well, that's something. I can conceivably spin up a few more of these nodes, remove one translator from each, and watch to see which one has lost the most memory a month later.
20:34 mikemol the nodes in question are our httpd instances, and they sit behind a load balancer. So it shouldn't be too big a deal, performance-wise.
20:35 mikemol (unless the translators have to be enabled/disabled volume wide, in which case--ech.)
20:35 glusterbot mikemol: case's karma is now -1
20:35 mikemol glusterbot: case++ did nothing wrong.
20:35 rotbeard joined #gluster
20:35 semiosis mikemol: normally they are set volume wide, but you can get crafty with volfiles for your experiment
20:35 JustinClift mikemol: So, please do. :)
20:36 mikemol JustinClift: I'll bring it up internally and see what we can do. We're getting tight on vm host capacity, but that difficulty may be going away soon.
20:36 JoeJulian Of course, that may also lead to false positives if something in one translator leads to leaks in another. I don't know enough to know if that's a possibility.
20:36 JustinClift One way to find out :)
20:43 skroz joined #gluster
20:56 jiku joined #gluster
21:05 coredump joined #gluster
21:08 prg3 joined #gluster
21:09 TealS joined #gluster
21:29 theron_ joined #gluster
21:54 jackdpeterson @purpleidea - looking to add nobootwait to the gluster::brick params (/etc/fstab output). is this something that is currently supported?
21:59 semiosis ^^ should be added by default to glusterfs mounts in fstab on ubuntu
21:59 semiosis i can't imagine any reason why you would not want nobootwait
21:59 semiosis purpleidea: ^
22:08 jackdpeterson @semiosis -- UUID=1196e791-9ceb-4ae8-bbb4-cc12c4c7f5a7       /mnt/storage1j  xfs     defaults,inode64,nobarrier,rw,noatime,nodiratime,noexec 0       2  ... Line 376/511 of  brick.pp $mount_options seems to indicate that it isn't in there
22:09 semiosis i believe you
22:09 jackdpeterson (I noticed this while simulating entire brick failure (power-off instance, detach EBS volume, attach new blank EBS volume) ... no bootie uppie :-\ lol
22:14 jackdpeterson I'll submit a pull request
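What jackdpeterson is after is an fstab entry along these lines (the UUID and options are from the paste above; nobootwait is the Ubuntu-specific flag that stops a missing EBS volume from hanging boot):

    UUID=1196e791-9ceb-4ae8-bbb4-cc12c4c7f5a7  /mnt/storage1j  xfs  defaults,inode64,nobarrier,rw,noatime,nodiratime,noexec,nobootwait  0  2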
22:15 victori joined #gluster
22:18 xiu joined #gluster
22:18 xiu hi, I have a cluster with 6 nodes in replication mode (rf2), version 3.3.1, I started a rebalance and since then I cannot ls some directories
22:19 xiu I can't seem to find any error in the logs
22:22 purpleidea jackdpeterson: if you'd like this feature, it's easy to patch in. just make sure to give a good reason in the commit message, and include the doc patches too please :)
22:22 badone_ joined #gluster
22:22 semiosis purpleidea++
22:22 glusterbot semiosis: purpleidea's karma is now 6
22:22 semiosis purpleidea++
22:22 glusterbot semiosis: purpleidea's karma is now 7
22:22 purpleidea whoa!
22:26 xiu anyone ? :(
22:28 semiosis xiu: how about if you make a new glusterfs client mount point?  can you list directories there?
22:28 xiu tried to unmount remount it, no luck, trying with a new one
22:29 xiu I have in my cluster one node that is running 3.3.2, it seems that this one can list files in these directories
22:30 xiu semiosis: on a new mount point i get an "Invalid argument" error
22:31 xiu but it seems that I can list more directories than on the other
22:32 semiosis ahh, mismatched versions
22:33 xiu even on minor versions ?
22:35 xiu semiosis: even if I stop this node, I can't access my directories
22:36 semiosis xiu: you may need to stop & restart the whole volume
22:36 xiu ok trying
22:40 xiu seems to have done the trick
22:40 xiu I still have one directory that is inaccessible with an Invalid argument error though
22:44 xiu can't even mv that directory, any idea?
22:48 xiu semiosis: ?
22:52 xiu nvm :)
22:52 purpleidea semiosis: comments appreciated: https://github.com/purpleidea/puppet-gluster/pull/34
22:52 purpleidea thx bbl
22:52 Rapture joined #gluster
23:03 B21956 left #gluster
23:06 kminooie at volume creation time, do bricks need to be empty? is it possible to reuse existing brick directories in a new cluster?
23:07 kminooie if I nuke my current volume and create a new one, can I use the existing bricks from the old volume to retain my data?
23:08 kminooie JoeJulian: is this possible ? ^^^^
23:09 JoeJulian as long as your new volume has the same layout.
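In practice "same layout" means the same replica count and the same brick order on the create line, since that ordering determines which bricks pair up as replicas; a sketch with placeholder names (note the old volume-id xattr on the bricks will also trigger the "path or a prefix of it" error mentioned earlier in the log):

    gluster volume create myvol replica 2 \
        server1:/data/brick1 server2:/data/brick1 \
        server1:/data/brick2 server2:/data/brick2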
23:30 wushudoin joined #gluster
23:34 xiu joined #gluster
23:36 plarsen joined #gluster
23:49 xiu semiosis: managed to get it back to work by removing the replica that had a 3.3.2 node
23:49 xiu thanks for your help
23:49 semiosis great!  you're welcome
23:58 kminooie is there any plan for 3.6.3 release?
23:58 vipulnayyar joined #gluster
