
IRC log for #gluster, 2014-03-13


All times shown according to UTC.

Time Nick Message
00:02 kminooie hi everyone, can anyone tell me what this error actually means 'Mar 12 16:56:55 locsrv1 rpc.mountd[3481]: refused mount request from 192.168.0.100 for /home-fs/kaveh (/home-fs/kaveh): unmatched host' ?? this is a gluster volume that I am trying to mount via nfs. locsrv1 is both the server and client here and has this 192.168.0.100 ip addr.
00:02 slappers left #gluster
00:04 kkeithley1 joined #gluster
00:04 kminooie i am also getting this in syslog that I think is relevant here :
00:04 kminooie Mar 12 17:02:13 locsrv1 sm-notify[2229]: DNS resolution of locsrv1.0.168.192.in-addr.arpa failed; retrying later
00:04 kminooie I don't know why I am getting this, I have a dns server with the reverse zone and everything is working fine.  I am not getting any of these errors on the other bricks that are part of the same volume
00:04 JoeJulian is nfsd running maybe?
00:05 kminooie how would that cause a dns lookup to fail?
00:06 JoeJulian I'm not trying to diagnose why nfsd does what it does.
00:06 gdubreui joined #gluster
00:07 kminooie as far as I can tell, everything is running?
00:07 JoeJulian @nfs
00:07 glusterbot JoeJulian: To mount via nfs, most distros require the options, tcp,vers=3 -- Also an rpc port mapper (like rpcbind in EL distributions) should be running on the server, and the kernel nfs server (nfsd) should be disabled
00:07 JoeJulian gotta run
00:08 kminooie thank you, yes nfsd is running. thanks
00:11 kminooie @nfs
00:11 glusterbot kminooie: To mount via nfs, most distros require the options, tcp,vers=3 -- Also an rpc port mapper (like rpcbind in EL distributions) should be running on the server, and the kernel nfs server (nfsd) should be disabled
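For reference, a minimal sketch of what glusterbot describes, for an EL6-style host; the volume name and mount point below are placeholders, not taken from the log:

    # stop the kernel NFS server so it cannot clash with the Gluster NFS server
    service nfs stop && chkconfig nfs off
    # make sure the rpc port mapper is running
    service rpcbind start && chkconfig rpcbind on
    # mount the Gluster volume over NFSv3/TCP
    mount -t nfs -o tcp,vers=3 locsrv1:/myvol /mnt/myvol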
00:19 failshell joined #gluster
00:21 failshel_ joined #gluster
00:24 velladecin finally a split-brain... haleluyaaaa
00:31 siel_ joined #gluster
00:33 badone_ joined #gluster
00:35 ultrabizweb joined #gluster
00:36 tokik joined #gluster
00:38 failshel_ joined #gluster
00:44 neurodrone__ joined #gluster
00:49 glusterbot` joined #gluster
01:01 failshell joined #gluster
01:01 bala joined #gluster
01:02 yinyin joined #gluster
01:14 Ark joined #gluster
01:16 wrale-josh joined #gluster
01:17 wrale-josh is there any way i can move the ldaps port (TCP 636) that glusterfs is using to a different port?  I would like to install FreeIPA on the same bare-metal host (in the same network namespace) as GlusterFS.  They both want to control tcp 636.  Is glusterfs configurable in this way?
01:24 Alex Gluster is running an LDAP server?
01:25 wrale-josh I ran: lsof -i :636 .. turned up "glusterfs" as the command behind the port
01:26 wrale-josh i could be confused, though
01:27 Alex Curious. I must be missing something, as I can't really work out where glusterfs would use it :) assume netstat -npa | grep LISTEN | grep 636 shows the same?
01:27 wrale-josh usually am
01:28 velladecin gluster may be using that port but it definitely is not LDAPS. What is said in /etc/services is more like a guide, but you can run eg SSH on other ports than 22 also..
01:28 wrale-josh no, that's curious, too.. i tried the netstat way, no listening there..  the result of the lsof command is :   .. glusterfs 2242 root   65u  IPv4  10779      0t0  TCP core-n1.storage-s0.example.vpn:ldaps->core-n2.storage-s0.example.vpn:49160 (ESTABLISHED)
01:28 wrale-josh yeah.. my fault about assuming port to service mapping.. still. i wonder why it's listening here
01:29 velladecin tcpdump it and you will see what it is
01:29 Alex That's not listening, that's an outbound connection with a source port of 636, isn't it?
01:29 velladecin yes
01:30 wrale-josh well then.. that's interesting.. if i do 'nc -l 636' i get "address already in use"
01:31 Alex wouldn't that be nc -l -p 636?
01:31 Alex or is that just distro dependent?
01:32 wrale-josh alex.. that could well be.. i'm not a frequent user of nc.. will try that.. sorry for the trouble
01:32 Alex No trouble here :) It's a curious issue.
01:33 failshell joined #gluster
01:33 velladecin I'm not 100% sure about this but if the port is in use (out) then it cannot be used again for (in) nc -l
01:33 velladecin therefore 'in use'
01:33 wrale-josh velladecin: that makes sense
01:33 wrale-josh outbound on a low port.. hmmm..
01:38 wrale-josh i have distributed replica volumes across six nodes.. it seems glusterfs on node-2 is connected via port 49160 to port  636 on node-1...
01:38 wrale-josh glusterfs owns both processes
01:39 chirino_m joined #gluster
01:42 wrale-josh looking at this again: here is "lsof -nPi :636" for n1: glusterfs 2242 root   65u  IPv4  10779      0t0  TCP 10.30.3.1:636->10.30.3.2:49160 (ESTABLISHED)   ... and now from n2: glusterfs 2199 root   13u  IPv4  13805      0t0  TCP 10.30.3.2:49160->10.30.3.1:636 (ESTABLISHED)
01:45 wrale-josh okay so 49160 is the listener, and it listens on all six nodes... so yes, this n1 is going outbound 636 to n2... not cool
01:45 kdhananjay joined #gluster
01:46 wrale-josh wow @ 'lsof -nPi :49160'... it's eating my <1024 ports alive for outbound traffic
01:47 wrale-josh (on all six hosts)
01:48 Alex What's your setting for server.allow-insecur?
01:48 Alex server.allow-insecure, even
01:49 Alex Ah, actually, that's kind of redundant, since it shouldn't make a difference. Just confusing as the language on http://www.gluster.org/community/documentation/index.php/Documenting_the_undocumented is slightly misleading
01:49 glusterbot Title: Documenting the undocumented - GlusterDocumentation (at www.gluster.org)
01:49 wrale-josh It's on, because I read somewhere qemu wouldn't work well without it.. Good catch
01:50 wrale-josh Excellent catch, really.. Now, I wonder why I would want that on for qemu..
01:51 wrale-josh Maybe I'd run out of ports with so many VMs flying around?
01:51 wrale-josh time for research.. thanks!
01:51 Alex tbh I don't know what direct effect it will have - but worth seeing, I guess. Er, maybe, yeah. From reading the documentation I would kind of expect it to do exactly the opposite (https://access.redhat.com/site/documentation/en-US/Red_Hat_Storage/2.0/html/Administration_Guide/chap-User_Guide-Managing_Volumes.html) but... who knows :)
01:51 glusterbot Title: Chapter 10. Managing Red Hat Storage Volumes (at access.redhat.com)
01:53 wrale-josh Alex: i agree completely.. i remember reading that part last week, come to think of it.. And yeah.. seems it would be the opposite direction
01:53 wrale-josh but then.. where there is a server, there is probably a client. one side has to be the below 1024 guy
01:54 wrale-josh Could definitely be clarified.. a lot
01:54 wrale-josh Note to self: glusterfs wants its own network namespace
02:12 y4m4 joined #gluster
02:22 y4m4 joined #gluster
02:23 wrale-josh just to confirm.. i set server.allow-insecure to off on all volumes, stopped and started them .. now most of the port action is in the 270-370 range
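Roughly what that looks like with the gluster CLI ("myvol" is a placeholder volume name):

    gluster volume set myvol server.allow-insecure off
    gluster volume stop myvol
    gluster volume start myvol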
02:25 gdubreui joined #gluster
02:28 Alex what's set in /proc/sys/net/ipv4/ip_local_port_range ?
02:31 gdubreui joined #gluster
02:32 wrale-josh '32768 61000'  I wonder if my "tuned" profile of "throughput-performance" set that higher
02:33 saurabh joined #gluster
02:33 wrale-josh looks like the default.. never mind..
02:33 wrale-josh (did f19 vs. c6)
02:33 wrale-josh both said the same
02:35 Alex I was hoping for some reason it was set to '0 1024' or something insane ;-)
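For reference, checking and adjusting the ephemeral port range looks like this; 32768-61000 is the usual kernel default:

    cat /proc/sys/net/ipv4/ip_local_port_range           # typically "32768  61000"
    sysctl -w net.ipv4.ip_local_port_range="32768 61000"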
02:36 wrale-josh oh .. nice!  now 389 is taken..
02:36 wrale-josh which of course means freeipa can't have it for ldap
02:37 wrale-josh i think it's the mount
02:37 * Alex reads https://bugzilla.redhat.com/show_bug.cgi?id=762989
02:37 glusterbot Bug 762989: low, high, ---, rabhat, CLOSED CURRENTRELEASE, Possibility of GlusterFS port clashes with reserved ports
02:37 Alex What age of GlusterFS are you using, OOI?
02:38 wrale-josh glusterfs 3.5.0beta3 built on Feb 11 2014 20:28:07
02:38 Alex From reading that, entirely insanely, it seems to just pick ports at random, ignoring /proc/sys/net/ipv4/ip_local_port_range. I do not know if this is true, and I do not have the problem or a similarly new copy of Gluster to test.
02:39 Alex I'd be interested to know if you got the same behaviour in a released version of gluster
02:39 Alex I have stopped running beta software since I had too little time in my life for Gentoo, so I'm afraid I can't easily give it a go :)
02:39 wrale-josh Alex: that is truly insane.. i've never seen such.. lol..
02:40 wrale-josh Agreed on the beta versions of things.. I regret it now, of course
02:41 wrale-josh lol at : before binding to any port check if it is a reserved port) merged in master
02:47 wrale-josh i think it's funny that they want to look at /proc/sys/net/ipv4/ip_local_port_range in making their decision.  what if freeipa does the same.. neither would get the port(s) desired
02:47 wrale-josh freeipa or any other app
02:48 semiosis afaik, since the beginning of time all gluster processes run as root and use privileged source ports (<1024) when making connections
02:48 wrale-josh semiosis: seems that the bad idea persisted
02:49 semiosis wrale-josh: no kidding
02:49 semiosis JoeJulian once mentioned a utility command that would bind to a port to reserve it, thus causing glusterfs (or whatever) to choose a different port
02:49 semiosis i dont remember the command but am trying to find it for you
02:50 semiosis JoeJulian: if you're around, HALP
02:50 wrale-josh thanks very much.. i'll look for it , too
02:51 semiosis ah, it's called 'portreserve' but i have no idea what provides it or how to install it.  http://linux.die.net/man/1/portreserve
02:51 glusterbot Title: portreserve(1) - Linux man page (at linux.die.net)
02:51 semiosis good luck
02:51 wrale-josh semiosis: thanks
02:51 semiosis yw
02:52 semiosis so you might be able to use that somehow during boot, maybe just add it to some existing gluster initscript
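Going by the portreserve(1) man page linked above, the idea is roughly: list the service (or port/protocol) in /etc/portreserve/, let portreserve bind it early in boot, then portrelease it right before the daemon that really needs it starts:

    echo 636/tcp > /etc/portreserve/ldaps   # or just the service name "ldaps"
    portreserve                             # run early at boot; binds every port listed there
    portrelease ldaps                       # run from the real service's startup, just before it binds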
02:52 semiosis what distro are you on btw?
02:52 wrale-josh f19
02:54 semiosis well if that uses sysv initscripts it should be easy.  if that uses systemd you're on your own, no idea
02:55 wrale-josh agreed.. systemd here.. plus, freeipa wants to first check these ports interactively before creating my replica .. i can override, but i'm not yet comfortable doing so
02:57 semiosis hmm you lost me. i assumed you had some daemon starting at boot that wanted a port gluster already bound
02:57 semiosis which is usually why people complain about the priv ports
02:57 kdhananjay left #gluster
02:57 kdhananjay joined #gluster
02:58 wrale-josh sorry for the confusion.. it's really both interactive and a daemon.. here goes..   I want to install FreeIPA which is like an ldap, bind, kdc metaproject.. (cont.)
02:59 wrale-josh First, they want you to install the FreeIPA yum package, but then you have to run an install script and answer its questions before the configuration is written to /etc/ and beyond... That install script includes port checks, as freeipa wants to be sure it can occupy the ports (e.g. 389, 636, 88, and so on) before setting up the daemon
03:00 semiosis wrale-josh: well, here goes a guess... use lsof to find out which gluster proc is using the port you want.  kill it.  do your freeipa thing.  restart glusterd
03:00 semiosis ymmv, don't blame me if things break :)
03:01 wrale-josh semiosis: that's a good idea.. i think i'll try unmounting and turning my volumes off.. that should clear the ports gracefully.. not sure what race condition i'll be walking into, but good suggestion.. thanks
03:01 semiosis yw
03:05 semiosis gotta run, good luck
03:05 * semiosis afk
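A sketch of the workaround discussed above; the volume name is a placeholder, and ipa-server-install (FreeIPA's interactive installer) is an assumption not taken from the log:

    lsof -nP -i :389 -i :636        # find which gluster process currently holds the ports
    gluster volume stop myvol       # free the ports (or kill just the offending process)
    ipa-server-install              # answer FreeIPA's interactive port checks while the ports are free
    gluster volume start myvol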
03:09 failshell joined #gluster
03:12 glusterbot New news from resolvedglusterbugs: [Bug 762989] Possibility of GlusterFS port clashes with reserved ports <https://bugzilla.redhat.com/show_bug.cgi?id=762989>
03:14 bharata-rao joined #gluster
03:19 satheesh joined #gluster
03:30 harish joined #gluster
03:34 vpshastry joined #gluster
03:38 shubhendu joined #gluster
03:42 nshaikh joined #gluster
03:47 hagarth joined #gluster
03:47 RameshN joined #gluster
03:56 tokik joined #gluster
04:01 mohankumar__ joined #gluster
04:03 aravindavk joined #gluster
04:11 itisravi joined #gluster
04:11 askb joined #gluster
04:18 vpshastry joined #gluster
04:21 vpshastry joined #gluster
04:37 latha joined #gluster
04:40 ravindran joined #gluster
04:40 ndarshan joined #gluster
04:42 dusmant joined #gluster
04:48 deepakcs joined #gluster
04:53 spandit joined #gluster
05:06 hagarth joined #gluster
05:11 saurabh joined #gluster
05:14 prasanth joined #gluster
05:16 ppai joined #gluster
05:17 vpshastry1 joined #gluster
05:20 hagarth joined #gluster
05:23 askb joined #gluster
05:23 sks joined #gluster
05:24 fidevo joined #gluster
05:33 bala1 joined #gluster
05:45 fidevo joined #gluster
05:45 neurodrone__ joined #gluster
05:52 benjamin_____ joined #gluster
05:55 mohankumar__ joined #gluster
05:56 shylesh joined #gluster
05:57 lalatenduM joined #gluster
06:05 raghu joined #gluster
06:33 psharma joined #gluster
06:34 rastar joined #gluster
06:39 mohankumar__ joined #gluster
06:45 vimal joined #gluster
06:48 mohankumar__ joined #gluster
06:52 rastar joined #gluster
06:54 mohankumar__ joined #gluster
07:12 chandan_kumar joined #gluster
07:12 edward1 joined #gluster
07:17 stickyboy Still getting Stale File Handle issues when accessing a file from separate hosts within a short time span: http://supercolony.gluster.org/pipermail/gluster-users/2014-March/039429.html
07:17 glusterbot Title: [Gluster-users] Stale file handle with FUSE client (at supercolony.gluster.org)
07:20 ricky-ticky joined #gluster
07:21 pk1 joined #gluster
07:23 jtux joined #gluster
07:28 ekuric joined #gluster
07:40 rahulcs joined #gluster
07:44 rahulcs joined #gluster
07:46 jtux joined #gluster
07:55 pk2 joined #gluster
07:55 ctria joined #gluster
07:56 andreask joined #gluster
07:56 Pavid7 joined #gluster
07:56 rossi_ joined #gluster
08:03 eseyman joined #gluster
08:13 ngoswami joined #gluster
08:14 franc joined #gluster
08:14 franc joined #gluster
08:15 cjanbanan joined #gluster
08:15 Alpinist joined #gluster
08:20 Copez joined #gluster
08:21 Copez question about VMware NFS and GlusterFS / Healing problem
08:22 hagarth left #gluster
08:23 hagarth joined #gluster
08:25 DV joined #gluster
08:30 mohankumar__ joined #gluster
08:35 hybrid512 joined #gluster
08:40 hybrid512 joined #gluster
08:45 getup- joined #gluster
09:04 calum_ joined #gluster
09:12 tryggvil joined #gluster
09:19 ndevos Copez: just ask your question and see if one of us ,,(volunteers) can help you
09:19 glusterbot Copez: A person who voluntarily undertakes or expresses a willingness to undertake a service: as one who renders a service or takes part in a transaction while having no legal concern or interest or receiving valuable consideration.
09:19 ndevos hmm, maybe its ,,(hi)?
09:19 glusterbot Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
09:30 tokik joined #gluster
09:35 vpshastry joined #gluster
09:43 andreask joined #gluster
09:44 ajha joined #gluster
09:54 qdk joined #gluster
09:55 rahulcs joined #gluster
09:56 rastar joined #gluster
10:00 dkorzhevin joined #gluster
10:06 getup- joined #gluster
10:15 vpshastry2 joined #gluster
10:15 aravindavk joined #gluster
10:17 Copez When a gluster node gets out of sync and afterwards starts healing, the gluster process goes to 100% CPU (instead of 300-450%) and the VMs stop responding until the gluster nodes are in sync
10:17 Copez is there any way to prevent or tune this?
10:18 Copez VMware-node talks on 10GBE to the Gluster-nodes (10GBE)
10:22 vsa joined #gluster
10:22 vsa Hi all! Is the quota limit working in gluster 3.4.2?
10:32 vsa Hi all! Is the quota limit working in gluster 3.4.2?
10:34 aravindavk joined #gluster
10:35 pk2 left #gluster
10:35 pk2 joined #gluster
10:39 raptorman joined #gluster
10:39 FarbrorLeon joined #gluster
10:40 al joined #gluster
10:45 ndevos Copez: what version of glusterfs is that?
10:50 chirino joined #gluster
10:53 ppai joined #gluster
10:54 vsa Hi all! Is the quota limit working in gluster 3.4.2?
10:54 spandit joined #gluster
11:04 ndevos vsa: I'm not aware of any particular problems, but I can't know everything. Do you have a problem with quota?
11:08 rahulcs joined #gluster
11:11 Pavid7 joined #gluster
11:18 ajha joined #gluster
11:23 harish joined #gluster
11:24 getup- joined #gluster
11:26 Staples84 joined #gluster
11:27 vpshastry2 joined #gluster
11:29 tryggvil joined #gluster
11:32 rahulcs joined #gluster
11:37 hagarth joined #gluster
11:38 kam270_ joined #gluster
11:40 kam270_ left #gluster
11:45 rahulcs joined #gluster
11:53 TvL2386 joined #gluster
11:53 mohankumar__ joined #gluster
11:58 ppai joined #gluster
12:04 tokik joined #gluster
12:06 shyam joined #gluster
12:07 ctrianta joined #gluster
12:08 Psi-Jack I'm curious, if anyone knows off-hand.. Does Proxmox VE 3.1's glusterfs support utilize glusterfs's bd functionality, or does it just use its filesystem layer?
12:09 itisravi joined #gluster
12:13 rwheeler joined #gluster
12:18 mohankumar__ joined #gluster
12:19 Psi-Jack Apparently not (on the asked question). Uses it at the filesystem layer. :/
12:21 lalatenduM left #gluster
12:22 yinyin joined #gluster
12:24 chirino joined #gluster
12:44 plarsen joined #gluster
12:50 neurodrone__ joined #gluster
12:53 chirino joined #gluster
12:55 DV_ joined #gluster
12:56 failshell joined #gluster
12:58 bennyturns joined #gluster
12:59 failshel_ joined #gluster
13:01 B21956 joined #gluster
13:03 ccha do files inside xattrop need to be self healed?
13:03 harish_ joined #gluster
13:14 rwheeler joined #gluster
13:16 sroy joined #gluster
13:17 rahulcs joined #gluster
13:17 pk1 joined #gluster
13:18 Ark joined #gluster
13:19 benjamin_____ joined #gluster
13:20 lpabon joined #gluster
13:23 kam270_ joined #gluster
13:26 kdhananjay joined #gluster
13:29 tjikkun_work joined #gluster
13:30 robo joined #gluster
13:30 jtux joined #gluster
13:31 pk1 left #gluster
13:32 Pavid7 joined #gluster
13:33 robo joined #gluster
13:38 gmcwhistler joined #gluster
13:42 kam270_ joined #gluster
13:43 seapasulli joined #gluster
13:44 theron joined #gluster
13:47 satheesh1 joined #gluster
13:47 tokik joined #gluster
13:48 rfortier joined #gluster
13:48 harish_ joined #gluster
13:50 robo joined #gluster
13:55 chirino joined #gluster
13:58 ndk joined #gluster
14:01 failshell joined #gluster
14:08 kam270_ joined #gluster
14:09 DV__ joined #gluster
14:14 jmarley joined #gluster
14:14 jmarley joined #gluster
14:17 tdasilva joined #gluster
14:17 failshel_ joined #gluster
14:18 ctrianta joined #gluster
14:18 Psi-Jack joined #gluster
14:22 Kins joined #gluster
14:22 diegows joined #gluster
14:24 robo joined #gluster
14:25 jobewan joined #gluster
14:30 aravindavk joined #gluster
14:37 ctrianta joined #gluster
14:38 lmickh joined #gluster
14:41 rpowell joined #gluster
14:48 glusterbot` joined #gluster
14:53 natgeorg joined #gluster
14:53 natgeorg joined #gluster
14:56 elyograg ccha: entries in xattrop are used by the self-heal daemon.  those entries are then used to show you the information in 'gluster volume heal $vol info'.
14:56 elyograg at least that's my understanding.
14:58 jtux joined #gluster
15:00 lpabon_ joined #gluster
15:02 elyograg the SHD will go through the list and attempt to heal the entries. if it fails, they stay in the list.
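The commands that expose those xattrop entries, with "myvol" as a placeholder volume name:

    gluster volume heal myvol info              # entries still pending heal by the self-heal daemon
    gluster volume heal myvol info heal-failed  # entries the daemon tried and failed to heal
    gluster volume heal myvol info split-brain  # entries it will not touch without admin intervention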
15:03 ThatGraemeGuy joined #gluster
15:04 lpabon_ joined #gluster
15:07 benjamin_____ joined #gluster
15:15 rahulcs joined #gluster
15:17 sks joined #gluster
15:18 keytab joined #gluster
15:19 RayS joined #gluster
15:21 ccha elyograg: what do you mean by SHD ?
15:23 elyograg self heal daemon.
15:26 kissand joined #gluster
15:26 kissand hi some help please
15:26 ccha ok
15:26 kissand i've set up two nodes/bricks with gluster in Replicate mode
15:27 kissand I create a file
15:27 kissand and then i cause a split brain, and i touch the file from one side
15:27 kkeithley1 joined #gluster
15:27 kissand after communication is restored this file is not self-healing
15:28 ccha elyograg: I have entries filenames like ffe52d10-a894-45ae-9a69-9a4e8645dded or like xattrop-8597c176-e9b1-4b4d-b17e-f8508549e388
15:28 jag3773 joined #gluster
15:28 kissand should it be autohealing by choosing the version that changed during splitbrain?
15:28 ccha what is the difference between them?
15:29 kissand one has file lines with 1 2 3 4 5
15:29 kissand and the newer one has a sixth line with 6
15:29 ccha kissand: nope, you need to access (stat) the file from a client
15:29 elyograg kissand: how is it supposed to automatically know which one you want to keep? without quorum, it can't make that determination -- it leaves it up to the admin.
15:29 ccha to trigger self healing
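i.e. something as simple as the following from a client mount (path and volume name are examples):

    stat /mnt/gluster/some/file       # looking the file up through the mount triggers its self-heal
    gluster volume heal myvol full    # or ask the self-heal daemon to crawl the whole volume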
15:30 deepakcs joined #gluster
15:30 elyograg quorum requires a majority -- a replica 2 volume, by definition, cannot achieve quorum unless both are up.
15:31 ccha elyograg: why some entries have as filename xattrop and some not ?
15:31 elyograg ccha: I don't know a lot of details about how those entries work.
15:31 kissand elyograg since the file was up to date on both servers, and after the split-brain the file was modified on one side, should the other server adopt that version too?
15:33 kissand second question. how do i start self-healing and, after a failure, how do i choose a version?
15:33 elyograg kissand: once you get into split brain and gluster does not know what action to take, it will not fix the situation.  If you are changing the brick directly and not going through the gluster mount, then there will be problems.
15:33 elyograg @split-brain
15:33 glusterbot elyograg: (#1) To heal split-brain, use splitmount. http://joejulian.name/blog/glusterfs-split-brain-recovery-made-easy/, or (#2) For additional information, see this older article http://joejulian.name/blog/fixing-split-brain-with-glusterfs-33/
15:34 kissand elyograg no every change i made was on the gluster mount (client)
15:34 kissand thnx bot :>
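The manual fix from the second article glusterbot links boils down to roughly the following; brick path, filename and gfid are placeholders, and you delete only the copy you want to throw away:

    # on the brick whose copy should be DISCARDED:
    getfattr -n trusted.gfid -e hex /bricks/b1/path/to/file   # note the gfid
    rm /bricks/b1/path/to/file
    rm /bricks/b1/.glusterfs/ab/cd/abcd...                    # the hard link named after that gfid
    # then, from a client mount, trigger the heal:
    stat /mnt/gluster/path/to/file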
15:34 jag3773 joined #gluster
15:37 elyograg off to work!  afk for commute.
15:43 robo joined #gluster
15:48 tokik joined #gluster
16:02 vpshastry joined #gluster
16:07 Matthaeus joined #gluster
16:11 ndk joined #gluster
16:15 Matthaeus joined #gluster
16:15 Mo_ joined #gluster
16:17 robo joined #gluster
16:18 ira_ joined #gluster
16:18 ctrianta joined #gluster
16:21 hagarth joined #gluster
16:22 japuzzo joined #gluster
16:22 seapasulli joined #gluster
16:22 vpshastry left #gluster
16:27 philv76 joined #gluster
16:33 robo joined #gluster
16:39 DV__ joined #gluster
16:48 Leolo joined #gluster
16:51 zerick joined #gluster
16:52 seapasulli_ joined #gluster
17:01 rahulcs joined #gluster
17:07 mihalnat joined #gluster
17:14 Slasheri joined #gluster
17:15 partner joined #gluster
17:18 harish_ joined #gluster
17:20 chirino joined #gluster
17:23 jdarcy joined #gluster
17:24 robo joined #gluster
17:26 vpshastry joined #gluster
17:29 rshade98 joined #gluster
17:29 rshade98 is there a complete list of volume options somewhere?
17:31 rshade98 specifically the management volume
17:31 semiosis see 'gluster volume set help' and also ,,(undocumented options) -- although idk if that pertains to the management volume
17:31 glusterbot Undocumented options for 3.4: http://www.gluster.org/community/documentation/index.php/Documenting_the_undocumented
17:32 JoeJulian Best place to find options for management translators is in the source.
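e.g. something along these lines; the grep assumes a glusterfs source checkout, and the exact file layout varies between versions:

    gluster volume set help                               # the documented volume options
    grep -rn "allow-insecure" xlators/mgmt/glusterd/src/  # hunt for an option in the glusterd source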
17:41 YazzY joined #gluster
17:41 YazzY joined #gluster
17:47 slayer192 joined #gluster
17:48 Matthaeus joined #gluster
17:50 josh-wrale joined #gluster
17:51 lpabon joined #gluster
17:52 Matthaeus joined #gluster
17:52 vpshastry left #gluster
18:01 smithyuk1 semiosis: thanks for that link! i was looking for exactly that the other day
18:02 robos joined #gluster
18:03 zaitcev joined #gluster
18:06 rshade98 actually maybe giving the specific error might help. I am getting these in the logs. But glusterd is listening
18:06 rshade98 [socket.c:1494:__socket_proto_state_machine] 0-tcp.management: reading from socket failed. Error (Transport endpoint is not connected), peer (127.0.0.1:1023)
18:06 glusterbot rshade98: That's just a spurious message which can be safely ignored.
18:06 semiosis smithyuk1: yw
18:06 rshade98 I like glusterbot
18:06 lalatenduM joined #gluster
18:12 jim80net joined #gluster
18:24 johnmark I HAVE THE POWER!!!
18:25 semiosis johnmark: welcome back
18:26 svbcom joined #gluster
18:27 johnmark semiosis: hey, great news on your stuff
18:28 johnmark getting your java fs stuff into the maven repos
18:28 semiosis yeah!  achievement unlocked!
18:28 johnmark w00t
18:38 sputnik13 joined #gluster
18:43 clutchk joined #gluster
18:44 jdarcy joined #gluster
18:47 FarbrorLeon joined #gluster
18:48 crazifyngers joined #gluster
18:50 sputnik1_ joined #gluster
18:50 Philambdo joined #gluster
19:01 cjh973 joined #gluster
19:01 chirino joined #gluster
19:01 qdk joined #gluster
19:01 _dist joined #gluster
19:03 Matthaeus1 joined #gluster
19:06 straylyon joined #gluster
19:11 sputnik1_ joined #gluster
19:12 tryggvil joined #gluster
19:28 RayS joined #gluster
19:30 cjanbanan joined #gluster
19:53 jag3773 joined #gluster
20:01 voronaam joined #gluster
20:02 tokik joined #gluster
20:02 voronaam Hi All. Is it possible to replace-brick on glusterfs 3.4.0 when the old brick is offline?
20:03 voronaam I have a FS with two bricks in a replica 2 setup. So all the data is present on the sole surviving brick
20:03 semiosis you can replace brick commit force
20:03 semiosis if the old brick is down
20:03 semiosis see ,,(replace) for info
20:03 glusterbot Useful links for replacing a failed server... if replacement server has different hostname: http://web.archive.org/web/20120508153302/http://community.gluster.org/q/a-replica-node-has-failed-completely-and-must-be-replaced-with-new-empty-hardware-how-do-i-add-the-new-hardware-and-bricks-back-into-the-replica-pair-and-begin-the-healing-process/ ... or if replacement server has same hostname:
20:03 glusterbot http://goo.gl/rem8L
20:03 voronaam Is it somehow dangerous? I am deciding on how much effort I should put in fixing the old one
20:03 voronaam Thank you for the links!
20:04 semiosis yw
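The commit-force form semiosis mentions looks roughly like this (server names and brick paths are placeholders), followed by a full heal to repopulate the new brick:

    gluster volume replace-brick myvol deadserver:/bricks/b1 newserver:/bricks/b1 commit force
    gluster volume heal myvol full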
20:05 jim80net joined #gluster
20:08 rotbeard joined #gluster
20:27 Philambdo joined #gluster
20:27 robos joined #gluster
20:32 chirino joined #gluster
20:33 voronaam Is it possible to change replica number to 1?
20:34 _dist voronaam: yes, but then it becomes a distribute volume not replicate
20:35 voronaam But I would be able to change it back to 2 later on, right?
20:35 Staples84 joined #gluster
20:35 robos joined #gluster
20:36 voronaam I can not find how to do it. Is it gluster volume <name> set replica 1 ?
20:38 badone_ joined #gluster
20:38 voronaam Oh, it must be remove-brick <name> replica 1. Sorry
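Roughly, with placeholder brick paths; going back to two copies later is an add-brick with the new replica count:

    gluster volume remove-brick myvol replica 1 deadserver:/bricks/b1 force
    # later, to restore redundancy:
    gluster volume add-brick myvol replica 2 newserver:/bricks/b1
    gluster volume heal myvol full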
20:43 jim80net joined #gluster
20:46 Staples84 joined #gluster
20:54 rwheeler joined #gluster
20:59 cjanbanan joined #gluster
21:30 voronaam joined #gluster
21:31 voronaam Hi again. How do I resolve a "State: Peer Rejected (Connected)" error? That is how it is seen from the old server
21:31 Guest23977 try to probe from the server you are probing to.
21:32 voronaam Ah, sorry. That is the old broken server. I need to take a moment to calm down, I am starting to see wrong things. Sorry
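For reference, the usual documented recovery for a rejected peer, run on the rejected node and sketched with a placeholder hostname (back up /var/lib/glusterd first):

    service glusterd stop
    # clear the local state but keep this node's UUID
    find /var/lib/glusterd -mindepth 1 -maxdepth 1 ! -name glusterd.info -exec rm -rf {} +
    service glusterd start
    gluster peer probe good-server        # then restart glusterd once more after the probe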
21:33 chirino joined #gluster
21:34 FarbrorLeon joined #gluster
21:34 rahulcs joined #gluster
21:36 sputnik1_ joined #gluster
21:38 robo joined #gluster
21:49 cjanbanan joined #gluster
21:50 jim80net joined #gluster
21:51 orion123123 joined #gluster
21:54 voronaam minor issue. When I was trying to add a brick on server A from cli on server B it did not work. It was giving me an error "add brick failed" with no further explanation. When I executed the same command from Server A, it told me that I am trying to create a brick on the root partition. Which was indeed a mistake on my part. The minor issue is that the Cli did not print out that error on server B.
21:56 sputnik1_ joined #gluster
21:58 Elico joined #gluster
22:00 orion123223 joined #gluster
22:01 cjanbanan joined #gluster
22:02 zerick joined #gluster
22:04 chirino joined #gluster
22:11 sputnik1_ joined #gluster
22:12 cjanbanan joined #gluster
22:12 kkeithley1 joined #gluster
22:12 Elico joined #gluster
22:18 sputnik1_ joined #gluster
22:26 gdubreui joined #gluster
22:26 sputnik1_ joined #gluster
22:29 zerick joined #gluster
22:35 cjanbanan joined #gluster
22:42 cjanbanan joined #gluster
22:46 Ark joined #gluster
22:46 theron joined #gluster
22:49 jbrooks joined #gluster
22:57 cjanbanan joined #gluster
23:01 Elico joined #gluster
23:14 rpowell1 joined #gluster
23:17 kam270 joined #gluster
23:18 ninkotech__ joined #gluster
23:23 elyograg now I am having the same problem with this damn volume that I did before the upgrade.  ongoing copies (rsync) hang.  system load on one server starts going through the roof.  Suddenly there are 130108 entries in the xattrop directories on both servers.
23:26 elyograg I didn't know about the xattrop when I was running 3.3.1.
23:27 cjanbanan joined #gluster
23:31 elyograg I don't have any idea WTF is happening this time to cause this on 3.4.2, any more than I did on 3.3.1.
23:32 elyograg how can I find out?
23:36 ShanghaiScott joined #gluster
23:38 seapasulli_ joined #gluster
23:39 ninkotech_ joined #gluster
23:46 harish_ joined #gluster
23:52 cjanbanan joined #gluster
23:53 voronaam left #gluster
