
IRC log for #gluster, 2015-03-17


All times shown according to UTC.

Time Nick Message
00:00 CyrilPeponnet (I stop the geo, stop the volume, start the volume, and start the geo will see if errors still occurs)
00:01 CyrilPeponnet at least someone got the same error in September but no responses : http://www.gluster.org/pipermail/gluster-users/2014-September/018823.html
00:01 CyrilPeponnet I don't think that many people are using geo-rep
00:02 JoeJulian huh.. null gfid. That's odd.
00:04 JoeJulian CyrilPeponnet: Nothing's writing directly to the bricks, right?
00:04 CyrilPeponnet no except the geo-rep
00:04 CyrilPeponnet which was empty
00:04 JoeJulian Oh, wait... no, that's after a memcpy in which it was already verified that uuid was not null.
00:05 JoeJulian So that means that memcpy failed.
00:05 JoeJulian Are you out of memory?
00:05 CyrilPeponnet Mem:      65774632   10949944   54824688
00:05 CyrilPeponnet nope
00:07 CyrilPeponnet here is the full error (repeating) : https://gist.github.com/CyrilPeponnet/7e02cf9e81f83ef9f456
00:08 CyrilPeponnet sounds like everything starts with replicate-0: unable to lock on even one child
00:09 harish_ joined #gluster
00:09 hflai_ joined #gluster
00:09 CyrilPeponnet the setup is two nodes with one vol as replica 2 (empty)
00:12 CyrilPeponnet underlying fs is xfs
00:16 hflai_ joined #gluster
00:21 CyrilPeponnet @JoeJulian are you still diving on source code ? :p
00:23 JoeJulian I can't see any direct reason for your error, nor any reason not to use 3.5.3 for the remote. I don't understand the "join the 2 others" though.
00:24 _polto_ joined #gluster
00:24 CyrilPeponnet well
00:25 CyrilPeponnet I guess I can give a try with 3.5.3
00:26 CyrilPeponnet can I stop the geo-rep, update to 3.5.3, and restart the geo-rep? or do I have to remove everything and start from scratch?
00:26 SOLDIERz_ joined #gluster
00:28 JoeJulian Ah, yes. you def want 3.5.3. bug 1153626 is more than likely causing that error you're seeing.
00:28 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1153626 unspecified, unspecified, ---, pkarampu, CLOSED CURRENTRELEASE, Sizeof bug for allocation of memory in afr_lookup
00:29 JoeJulian Yes, stop, upgrade, start
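(A minimal sketch of that stop/upgrade/start sequence, assuming a master volume named "myvol" and a slave at "slavehost::myvol"; both names are hypothetical and the package command depends on the distribution.)
    gluster volume geo-replication myvol slavehost::myvol stop
    # upgrade the glusterfs packages on the involved nodes, e.g. yum update glusterfs\*
    gluster volume geo-replication myvol slavehost::myvol start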
00:29 CyrilPeponnet Ok I'll do that tomorrow
00:30 CyrilPeponnet Thanks JoeJulian as usual :)
00:30 JoeJulian You're welcome.
00:30 JoeJulian Go to my blog and donate to open cancer research. :D
00:33 CyrilPeponnet I've read your blog (actually all glusterfs posts)
00:33 JoeJulian Hope it was helpful.
00:33 CyrilPeponnet sure it was
00:36 JoeJulian I took out my tip link and replaced it with a donate link. I decided I don't want tips and would rather people thank me by trying to help the world. I'm going to pressure all my peers to do something similar.
00:40 zerick_ joined #gluster
00:45 bala joined #gluster
00:47 JohnD joined #gluster
00:48 _zerick_ joined #gluster
00:54 jmarley joined #gluster
00:57 topshare joined #gluster
01:03 topshare joined #gluster
01:11 davidbitton joined #gluster
01:13 plarsen joined #gluster
01:29 hagarth joined #gluster
01:34 sputnik13 joined #gluster
01:38 Guest67002 joined #gluster
01:41 Guest67002 joined #gluster
01:44 T3 joined #gluster
01:46 jjakestr8 joined #gluster
01:52 topshare joined #gluster
01:56 jjakestr8 Apologies - First time IRC user
01:56 jjakestr8 I was going to setup a gluster cluster for my first time on AWS
01:56 jjakestr8 Wanted to ask a couple of questions:
01:57 sputnik13 joined #gluster
01:57 jjakestr8 1) My cluster would be used for a SQL database (postgresql)
01:57 jjakestr8 Any advice in the setup?
01:57 JohnT joined #gluster
01:58 jjakestr8 2) I'm using a flavor of Postgresql that allows for multiple parallel processes, and multiple nodes
01:58 jjakestr8 Can I put those processes on the same instances that I'm running Gluster on?
01:59 jjakestr8 b) can I get any type of local disk access for the processes that need access to the local data?
02:04 topshare joined #gluster
02:27 DV joined #gluster
02:30 lalatenduM joined #gluster
02:36 JohnT joined #gluster
02:38 sputnik13 joined #gluster
02:43 haomaiwa_ joined #gluster
02:46 sputnik13 joined #gluster
03:06 topshare joined #gluster
03:08 RalphWiggum joined #gluster
03:09 topshare joined #gluster
03:18 nangthang joined #gluster
03:25 topshare joined #gluster
03:31 JoeJulian jjakestr8: yes, you can. I don't have any advice for postgresql. I do know mysql tips but that's not what you asked for. Yes, you can have postgresql on the gluster servers. The file reads attempt to read from localhost first. Writes are synchronous.
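(A sketch of the co-location described above: each gluster server can mount the volume through its own local client and point postgresql at that mount; the volume name and path are hypothetical.)
    mount -t glusterfs localhost:/pgvol /srv/pgdata
Reads are then attempted locally first, while writes still go synchronously to every replica.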
03:38 kumar joined #gluster
03:39 malevolent joined #gluster
03:40 xavih joined #gluster
03:48 Guest80749 left #gluster
03:48 ppai joined #gluster
03:49 kdhananjay joined #gluster
03:51 itisravi joined #gluster
04:08 bharata-rao joined #gluster
04:10 kanagaraj joined #gluster
04:14 topshare_ joined #gluster
04:17 shubhendu joined #gluster
04:24 nbalacha joined #gluster
04:24 atinmu joined #gluster
04:26 nishanth joined #gluster
04:28 schandra joined #gluster
04:28 schandra_ joined #gluster
04:29 schandra joined #gluster
04:30 nbalacha joined #gluster
04:31 DV__ joined #gluster
04:33 karnan joined #gluster
04:35 JoeJulian joined #gluster
04:36 jiffin joined #gluster
04:37 anoopcs joined #gluster
04:39 Tetris- joined #gluster
04:40 Tetris- left #gluster
04:41 deepakcs joined #gluster
04:42 jiffin1 joined #gluster
04:46 anoopcs joined #gluster
04:47 soumya_ joined #gluster
04:51 meghanam joined #gluster
04:54 jiffin joined #gluster
04:55 sputnik13 joined #gluster
04:55 rafi1 joined #gluster
04:59 karnan_ joined #gluster
05:08 smohan joined #gluster
05:10 Apeksha joined #gluster
05:12 gem joined #gluster
05:13 gem joined #gluster
05:15 harish_ joined #gluster
05:19 aravindavk joined #gluster
05:23 hagarth joined #gluster
05:23 lalatenduM joined #gluster
05:24 kdhananjay joined #gluster
05:26 ashiq joined #gluster
05:30 ppp joined #gluster
05:40 dusmant joined #gluster
05:43 Bhaskarakiran joined #gluster
05:44 atalur joined #gluster
05:47 ramteid joined #gluster
05:49 pdrakeweb joined #gluster
05:50 kshlm joined #gluster
05:51 plarsen joined #gluster
05:59 Bhaskarakiran joined #gluster
06:02 smohan joined #gluster
06:09 vimal joined #gluster
06:10 corretico joined #gluster
06:16 Manikandan joined #gluster
06:17 RameshN joined #gluster
06:19 dusmant joined #gluster
06:23 R0ok_ joined #gluster
06:27 T3 joined #gluster
06:32 anrao joined #gluster
06:34 nangthang joined #gluster
06:39 jiffin1 joined #gluster
06:40 nshaikh joined #gluster
06:41 raghu joined #gluster
06:41 tigert sigh
06:41 free_amitc_ joined #gluster
06:43 ashiq joined #gluster
06:44 jiffin joined #gluster
06:45 JoeJulian sigh?
06:48 atinmu joined #gluster
06:55 jiffin joined #gluster
06:56 kdhananjay joined #gluster
07:04 SOLDIERz_ joined #gluster
07:15 mbukatov joined #gluster
07:22 rafi joined #gluster
07:24 FingerPistol joined #gluster
07:24 [Enrico] joined #gluster
07:25 atinmu joined #gluster
07:25 FingerPistol left #gluster
07:28 jtux joined #gluster
07:28 o5k__ joined #gluster
07:29 kdhananjay joined #gluster
07:38 GumBall joined #gluster
07:39 GumBall left #gluster
07:40 dusmant joined #gluster
07:47 Manikandan joined #gluster
07:49 lifeofguenter joined #gluster
08:03 haomaiwang joined #gluster
08:10 T0aD joined #gluster
08:13 _polto_ joined #gluster
08:13 T3 joined #gluster
08:16 glusterbot News from newglusterbugs: [Bug 1202673] Perf:  readdirp in replicated volumes causes performance degrade <https://bugzilla.redhat.com/show_bug.cgi?id=1202673>
08:16 glusterbot News from newglusterbugs: [Bug 1202675] Perf:  readdirp in replicated volumes causes performance degrade <https://bugzilla.redhat.com/show_bug.cgi?id=1202675>
08:17 fsimonce joined #gluster
08:28 topshare joined #gluster
08:53 anil joined #gluster
08:55 liquidat joined #gluster
09:05 topshare joined #gluster
09:06 soumya joined #gluster
09:06 T0aD joined #gluster
09:19 kaushal_ joined #gluster
09:19 atinmu joined #gluster
09:19 ctria joined #gluster
09:23 kovshenin joined #gluster
09:25 [Enrico] joined #gluster
09:29 raging-dwarf does anyone know if i can make a client talk to the server where the mount comes from only (in a replicated-scenario)
09:30 raging-dwarf because i run two storage nodes in different LANs
09:35 shubhendu joined #gluster
09:38 harish_ joined #gluster
09:39 ndevos raging-dwarf: the logic for replication is done client side, the writes are done to both, the reads come from the server/brick that respons 1st
09:39 ndevos *responds
09:40 jflf joined #gluster
09:40 nbalacha joined #gluster
09:44 bala joined #gluster
09:48 LebedevRI joined #gluster
09:50 gildub joined #gluster
09:54 SOLDIERz__ joined #gluster
10:00 ira joined #gluster
10:02 T3 joined #gluster
10:04 kaushal_ joined #gluster
10:22 overclk joined #gluster
10:27 atinmu joined #gluster
10:33 topshare joined #gluster
10:33 nbalacha joined #gluster
10:37 rjoseph joined #gluster
10:53 Slashman joined #gluster
10:54 mnbvasd left #gluster
11:01 tanuck joined #gluster
11:03 T3 joined #gluster
11:12 firemanxbr joined #gluster
11:15 harish_ joined #gluster
11:17 glusterbot News from resolvedglusterbugs: [Bug 1202737] Mismatching iatt in answers of 'GF_FOP_LOOKUP' <https://bugzilla.redhat.com/show_bug.cgi?id=1202737>
11:18 nbalacha joined #gluster
11:19 atinmu joined #gluster
11:19 dusmant joined #gluster
11:19 overclk joined #gluster
11:24 bennyturns joined #gluster
11:31 Bhaskarakiran joined #gluster
11:31 Bhaskarakiran joined #gluster
11:37 _polto_ joined #gluster
11:39 st__ joined #gluster
11:47 glusterbot News from newglusterbugs: [Bug 1202745] glusterd crashed on one of the node <https://bugzilla.redhat.com/show_bug.cgi?id=1202745>
11:47 glusterbot News from newglusterbugs: [Bug 1202750] Disperse volume: 'du' on nfs mount throws IO error <https://bugzilla.redhat.com/show_bug.cgi?id=1202750>
11:55 ndevos REMINDER: the Gluster Community Bug Triage meeting starts in 5 minutes in #gluster-meeting
11:58 soumya joined #gluster
12:00 overclk joined #gluster
12:03 smohan joined #gluster
12:03 T3 joined #gluster
12:05 harish_ joined #gluster
12:07 topshare joined #gluster
12:08 lalatenduM joined #gluster
12:09 topshare joined #gluster
12:11 ctria joined #gluster
12:13 topshare joined #gluster
12:14 itisravi joined #gluster
12:15 bala joined #gluster
12:15 rjoseph joined #gluster
12:17 glusterbot News from newglusterbugs: [Bug 1202758] Disperse volume: brick logs are getting filled with "anonymous fd creation failed" messages <https://bugzilla.redhat.com/show_bug.cgi?id=1202758>
12:18 ashiq joined #gluster
12:30 hagarth joined #gluster
12:31 bene2 joined #gluster
12:33 DV__ joined #gluster
12:34 p0licy joined #gluster
12:37 theron joined #gluster
12:41 rjoseph joined #gluster
12:41 hamiller joined #gluster
12:51 DV__ joined #gluster
12:56 smohan joined #gluster
12:57 jmarley joined #gluster
13:00 theron joined #gluster
13:01 Slashman_ joined #gluster
13:04 topshare joined #gluster
13:05 dusmant joined #gluster
13:08 julim joined #gluster
13:11 dgandhi joined #gluster
13:12 dgandhi joined #gluster
13:13 nbalacha joined #gluster
13:13 hgowtham joined #gluster
13:15 ppp joined #gluster
13:16 ashiq joined #gluster
13:16 topshare joined #gluster
13:20 topshare joined #gluster
13:21 o5k_ joined #gluster
13:22 _polto_ joined #gluster
13:23 raging-dwarf ndevos: thanks for your response, how about clients behind a nat? i know it's possible but is it like opening pandora's box? And generating a lot of issues?
13:24 dbruhn joined #gluster
13:25 _shaps_ joined #gluster
13:25 topshare joined #gluster
13:26 getup joined #gluster
13:28 lifeofguenter joined #gluster
13:28 lalatenduM_ joined #gluster
13:29 T3 joined #gluster
13:32 coredump joined #gluster
13:35 georgeh-LT2 joined #gluster
13:37 _polto_ how is it possible that I experience split-brain all the time? It seems to be too easy to break GlusterFS and make whole directories unremovable .. :(
13:38 topshare joined #gluster
13:39 jiffin joined #gluster
13:43 SOLDIERz__ joined #gluster
13:44 smohan_ joined #gluster
13:45 o5k_ joined #gluster
13:45 gnudna joined #gluster
13:51 jflf _polto_: I have seen such an issue with directories
13:52 jflf In my case, it was because there was an inconsistency between what Gluster thought was in the directories, and what was really there.
13:53 jflf One support person sent me a couple of commands to run so that I could send them back some information
13:54 jflf Have a look at the thread called "Issue with removing directories after disabling quotas" on the mailing list
13:54 jmarley joined #gluster
13:59 topshare joined #gluster
14:02 topshare joined #gluster
14:04 topshare joined #gluster
14:09 jobewan joined #gluster
14:10 _polto_ joined #gluster
14:12 topshare joined #gluster
14:14 topshare joined #gluster
14:18 topshare joined #gluster
14:18 wushudoin joined #gluster
14:23 smohan joined #gluster
14:29 topshare joined #gluster
14:30 jiffin|afk joined #gluster
14:34 cornfed78 joined #gluster
14:35 Guest58 joined #gluster
14:36 topshare joined #gluster
14:39 topshare joined #gluster
14:40 cornfed7_ joined #gluster
14:43 ninkotech joined #gluster
14:43 ninkotech_ joined #gluster
14:44 harish_ joined #gluster
14:45 jiffin joined #gluster
14:45 cornfed7_ Hi all!
14:45 cornfed7_ Trying to debug a weird NFS issue from a Solaris 10 client
14:45 cornfed7_ We have firewalls between most of our hosts and the Gluster server, and we limit the ports to 111 (portmap) and 2049 (nfs)
14:46 cornfed7_ So on Solaris machines, we're using the nfs://<machine>/<path> scheme
14:46 cornfed7_ that works fine when I do it by hand
14:46 cornfed7_ but when it's in an automount, I get I/O errors on the solaris client
14:46 cornfed7_ Mar 17 10:32:28 <client> automountd[19300]: [ID 834250 daemon.error] Mount of nfs://server/share on /mnt/share1: I/O error
14:47 cornfed7_ In the gluster NFS logs I see:
14:47 cornfed7_ http://pastebin.com/Fg0e8zNG
14:47 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
14:47 cornfed7_ @paste
14:47 glusterbot cornfed7_: For RPM based distros you can yum install fpaste, for debian, ubuntu, and arch it's pastebinit. Then you can easily pipe command output to [f] paste [binit] and it'll give you a URL.
14:47 cornfed7_ http://fpaste.org/198976/42660366/
14:47 cornfed7_ It's a little weird because the *only* difference is the automount.
14:48 cornfed7_ I'm also testing it from a solaris client on the same subnet as the gluster NFS server
14:48 cornfed7_ so the firewall shouldn't even be an issue, even if it is an RPC problem
14:49 cornfed7_ any thoughts on where to start looking?
14:50 cornfed7_ here's the output from 'rpcinfo -s' on the gluster server
14:50 cornfed7_ http://fpaste.org/198977/03812142/
14:52 Apeksha joined #gluster
14:53 jflf cornfed78_: does this help? http://halisway.blogspot.com/2007/03/solaris-10-nfs-version-when-using.html
14:54 vimal joined #gluster
14:55 jflf From your first paste it looks like the only NFS version your Gluster server understands would be 3
14:55 jflf And the Solaris machine is trying to negotiate 4
14:55 cornfed7_ That's a good find, I'm trying it now
14:55 roost joined #gluster
14:56 p0licy joined #gluster
14:56 cornfed7_ hallelujah! Thanks so much! that did the trick
14:57 cornfed7_ i was focused on the gfs/solaris issue, not just the solaris side of things
15:00 jflf cornfed7_: you're welcome, glad to be able to help
15:02 meghanam joined #gluster
15:03 jflf The full Oracle doc for Solaris 11 and NFS is here: http://docs.oracle.com/cd/E23824_01/html/821-1454/rfsadmin-68.html
15:03 cornfed7_ Thanks. I think I was thrown off by it working when I did it by hand (i.e. "mount nfs://server/mount /mnt") but not working in automount
15:04 cornfed7_ i was googling "gluster solaris automount" etc.. if I'd taken the gluster part out, I probably would have found better info
15:04 jflf I wouldn't be surprised if the mount command tried various NFS versions one after the other when you don't specify which one you want.
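(A sketch of pinning the Solaris 10 client to NFSv3 along the lines of the blog post linked above; the server name, share and map entry are hypothetical, and the exact automount map syntax should be checked against the Solaris documentation.)
    # /etc/default/nfs on the Solaris client
    NFS_CLIENT_VERSMAX=3
    # or per automount map entry
    share1    -vers=3,proto=tcp    glusterserver:/share1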
15:06 ndevos cornfed7_: maybe you need to enable the nfs.mount-udp option to allow the MOUNT (a helper protocol for NFS) to be used over UDP?
15:07 cornfed7_ Thanks ndevos- unfortunately, we're unable to use UDP because of our firewall restrictions. I spell out using "proto=tcp", and with Solaris the "nfs://" implies that
15:07 julim joined #gluster
15:08 anoopcs joined #gluster
15:08 ndevos cornfed7_: okay, but at least some versions of Solaris only use UDP for the MOUNT protocol; the actual NFS is over TCP
15:08 ndevos cornfed7_: you could capture a tcpdump and see if there are any portmap requests over UDP
15:09 ndevos cornfed7_: you would need to capture that on the Solaris system somehow though, I would not know how to do that
15:09 cornfed7_ i'll check that out.. snoop is how you do it on solaris
15:09 ndevos right, thats the tool name!
15:10 ndevos and wireshark can read those files without problems :)
15:11 ndevos cornfed7_: also see the note here: https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3/html-single/Administration_Guide/index.html#Manually_Mounting_Volumes_Using_NFS
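(If the MOUNT protocol does need UDP, the nfs.mount-udp option ndevos mentions can be enabled on the volume; "myvol" is a placeholder.)
    gluster volume set myvol nfs.mount-udp on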
15:13 cornfed78 I actually have the 'vers=3' and port options in the automount... for some reason though, it ignores it
15:13 cornfed78 thankfully, Solaris is on the way out here.. won't be too soon.
15:14 coredump joined #gluster
15:14 ndevos hmm, no idea how to specify it in the automount... I guess wireshark will help you in identifying what option does not get applied
15:15 jjakestr8 joined #gluster
15:16 jjakestr8 @semiosis, I'm about to install Gluster on AWS.  I found your PPA for 3.5.  Is there one for 3.6?
15:18 jjakestr8 PS - Hi :)
15:30 nbalacha joined #gluster
15:37 lpabon joined #gluster
15:49 JoeJulian ~ppa | jjakestr8
15:49 glusterbot jjakestr8: The official glusterfs packages for Ubuntu are available here: 3.4: http://goo.gl/M9CXF8 3.5: http://goo.gl/6HBwKh 3.6: http://goo.gl/XyYImN -- See more PPAs for QEMU with GlusterFS support, and GlusterFS QA releases at https://launchpad.net/~gluster -- contact semiosis with feedback
15:49 pkoro joined #gluster
15:49 semiosis i expect to put some more time into the automated package builds this week
15:52 calston joined #gluster
15:52 calston I've setup this Gluster 3.5 geo replication per the docs
15:53 calston it claims to start fine, then sits there like a retard saying "Not Started"
15:53 calston any ideas?
15:53 harish_ joined #gluster
15:53 calston Popen: ssh> bash: /usr/libexec/glusterfs/gsyncd: No such file or directory
15:53 calston hrm, pro stuff
15:55 calston that doesn't exist in the gluster packages
15:55 luis_silva joined #gluster
15:56 calston https://bugzilla.redhat.com/show_bug.cgi?id=1132766
15:56 glusterbot Bug 1132766: unspecified, unspecified, ---, glusterbugs, ASSIGNED , ubuntu ppa: 3.5 missing hooks and files for new geo-replication
15:56 calston ?!?
15:57 calston so TL;DR, the official Gluster package repo contains a "stable" release which is critically fucked for nigh on a year and no one has even so much as placed a warning about it
16:00 soumya joined #gluster
16:04 JoeJulian calston: Good point. Want to draft that? Where do you suggest it be put? Or better yet, can you help solve the problem? I'm sure our overworked ppa maintainer would welcome another volunteer besides himself.
16:05 hagarth joined #gluster
16:05 kshlm joined #gluster
16:06 JoeJulian But getting angry and swearing to the people that volunteer their time to help their fellow sysadmins for nothing more than a thank you is inappropriate.
16:07 calston if only the world trusted one another that I could simply push fixes
16:08 calston but hitting a problem that massive, which is known about for that long, is without question a reason to be angry
16:11 calston if you're going to endanger peoples production setups with dodgey packages, expect anger
16:12 JoeJulian endanger? there's no potential for lost data.
16:12 JoeJulian And I expect professionalism. If you cannot do that, come back later.
16:15 ildefonso joined #gluster
16:31 Pupeno joined #gluster
16:40 lifeofguenter joined #gluster
16:40 eberg joined #gluster
16:43 jjakestr8 Hi Semiosis, was your response directed at me?  If so, Ok.  I installed the 3.5 on AWS
16:43 semiosis just in general
16:44 raging-dwarf in order to run two servers in different LANs i connected my servers via VPN and did a local gluster mount, which is exported via NFS to my clients (which are not able to talk to both servers). would you consider this an acceptable scenario for production usage?
16:46 jiffin joined #gluster
16:47 JoeJulian raging-dwarf: Before gluster added its own native nfs service, there were problems re-sharing a gluster fuse mount via kernel nfs.
16:48 JoeJulian A better solution might be to add those nfs servers to the trusted pool. They will then run the gluster nfs service and your remote clients can use that.
16:49 raging-dwarf @JoeJulian: do you mean the nfs service provided by gluster?
16:49 JoeJulian yes
16:49 raging-dwarf i don't know the term trusted pool
16:49 raging-dwarf ok
16:50 JoeJulian When you peer probe you create a trusted pool.
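(A rough sketch of the suggestion above, with hypothetical hostnames and volume name: probe the extra NFS servers into the pool from an existing member, then have the remote clients mount from one of them; gluster's NFS server speaks NFSv3 over TCP.)
    gluster peer probe nfs-server1
    # on a remote client:
    mount -t nfs -o vers=3,proto=tcp nfs-server1:/myvol /mnt/myvol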
16:50 raging-dwarf the last time i used the internal nfs service my gluster service stopped working
16:50 hamiller left #gluster
16:50 raging-dwarf but i'll dive into the docs
16:50 wushudoin joined #gluster
16:50 raging-dwarf thanks :)
16:50 JoeJulian I've never seen a bug report for something like that.
16:51 JoeJulian Doesn't mean there wasn't one, just that I haven't seen one.
16:51 bala1 joined #gluster
16:51 raging-dwarf i'll play with it and see what happens, i probably did something wrong
16:51 asku joined #gluster
16:52 raging-dwarf but from now on i'll stick to the docs and follow best-practices
16:52 JoeJulian :)
16:52 JoeJulian I like breaking rules. ;)
16:52 raging-dwarf haha, me too it's how you learn faster
16:53 raging-dwarf but at some time you need something that works properly
16:53 raging-dwarf since i'm close to deploying it on production i better stick to what's advised
16:53 JoeJulian good plan
17:01 todakure joined #gluster
17:05 theron joined #gluster
17:11 Rapture joined #gluster
17:16 lalatenduM joined #gluster
17:24 eberg #aws
17:26 hybrid5122 joined #gluster
17:26 harish_ joined #gluster
17:33 jiffin joined #gluster
17:35 gnudna hi guys, in a mirror setup why does the machine that did not crash show files as split-brain auto-repair?
17:36 gnudna sorry but i am fairly new to gluster and the terminology
17:37 sputnik13 joined #gluster
17:37 uebera|| joined #gluster
17:39 lpabon joined #gluster
17:40 rwheeler joined #gluster
17:41 Pupeno joined #gluster
17:44 kovshenin joined #gluster
17:49 JoeJulian gnudna: use fpaste.org to paste up some examples of what you're asking.
17:50 gnudna JoeJulian will do when i get a chance in front of the machines
17:50 gnudna at work at the moment
17:50 gnudna assumed it was obvious what i was saying ;)
17:51 gnudna should have known better on my part
17:51 JoeJulian It's always more obvious in your own head. :D
17:52 gnudna hehe
17:52 gnudna i setup a mirror in the end equivalent to a network raid 1
17:52 gnudna or so i describe it
17:52 JoeJulian a replica volume, yes.
17:53 gnudna hehe that is the word i was looking for
17:53 gnudna anyways when i reboot 1 of the 2 machines when the second one comes back up i always have a split brain issue that in general auto-heals
17:54 gnudna but i would have thought the machine that stays up would be considered the clean copy
17:55 gnudna yet i see split-brain messages when i run gluster volume heal kvm-images info
17:55 JoeJulian "gluster volume heal $vol info" will report files that need or are in the process of being healed. "gluster volume heal $vol info split-brain" will show a log of files that have reported splitbrain since the last time glusterd was started.
17:55 JoeJulian The problem may be that you rebooted one machine, then rebooted the other before the heal was finished?
17:56 gnudna ster volume heal $vol info split-brain did not work with the 3.6.x packages on centos
17:56 gnudna gluster ^
17:56 JoeJulian Then why do you think you have split-brain?
17:56 gnudna JoeJulian that might be possible actually
17:56 gnudna the command i pasted above tells me i do ;)
17:57 gnudna this command gluster volume heal kvm-images info kvm-images being the vol
17:57 bene2 joined #gluster
17:57 JoeJulian hang on... did they change the output of the info command? Let me try creating a split-brain file.
17:57 jjakestr8 Hi, I've setup a new cluster, with 2 nodes. I have it in "distribute" config (i.e. not replica).  If I setup 3 client machines. do they each mount/talk to the head node?
18:01 harish_ joined #gluster
18:03 JoeJulian gnudna: oh, nice. They did change the output.
18:03 JoeJulian gnudna: Ok, check out ,,(split-brain)
18:03 glusterbot gnudna: (#1) To heal split-brain, use splitmount. http://joejulian.name/blog/glusterfs-split-brain-recovery-made-easy/, or (#2) For additional information, see this older article http://joejulian.name/blog/fixing-split-brain-with-glusterfs-33/
18:04 JoeJulian jjakestr8: ,,(mount server)
18:04 glusterbot jjakestr8: (#1) The server specified is only used to retrieve the client volume definition. Once connected, the client connects to all the servers in the volume. See also @rrdns, or (#2) One caveat is that the clients never learn of any other management peers. If the client cannot communicate with the mount server, that client will not learn of any volume changes.
18:04 JoeJulian there is no head node. All glusterfs servers are equal.
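(To soften the mount-server caveat, a fallback volfile server can be listed in fstab; hostnames and volume name are hypothetical, and the option is spelled backup-volfile-servers in recent 3.x releases, backupvolfile-server in older ones.)
    node1:/myvol  /mnt/myvol  glusterfs  defaults,_netdev,backup-volfile-servers=node2  0 0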
18:13 eberg joined #gluster
18:15 bene2 joined #gluster
18:27 theron joined #gluster
18:30 jjakestr8 JoeJulian - Thanks
18:32 sputnik13 joined #gluster
18:32 cfeller joined #gluster
18:32 jmarley joined #gluster
18:44 jjakestr8 @semiosis - Hi, whats the difference between: https://launchpad.net/~semiosis  and https://launchpad.net/~gluster ?  (other than https://launchpad.net/~gluster seems more up to date)
18:50 wushudoin joined #gluster
18:54 JoeJulian The correct one is ~gluster
18:54 jjakestr8 thanks
18:54 jjakestr8 Joe
18:54 jjakestr8 (of course I used the wrong one today)
18:54 JoeJulian I talked him in to changing it when my $dayjob was using ubuntu and I was going to help. Then we switched and I never followed through.
18:55 jjakestr8 that's the best way :)
18:55 JoeJulian Now, I think, he probably wants to stab me but luckily he's as far away as he can get and still be in the US.
18:55 JoeJulian Right semiosis?
18:56 kkeithley_ I think semiosis could go to Key West and you can go to Hawaii or Prudhoe Bay and be even further apart
18:56 jjakestr8 He's so mad at you he won't even talk to you
18:57 jjakestr8 newbie on IRC.  I'm using XChat on windows.  Is there a way to hide all the "this person joined, this person quit..." messages?
18:58 JoeJulian Hawaii, maybe, but I'll need a COLA adjustment. No way am I moving to Alaska.
18:58 JoeJulian yes... and I did that myself in XChat and it was so long ago I don't remember how, I just keep copying my config from machine to machine...
18:58 semiosis LMAO
18:59 * semiosis is deleting the ~semiosis gluster PPAs now
19:01 jjakestr8 http://askubuntu.com/questions/356054/xchat-how-to-hide-join-leave-messages
19:02 _polto_ joined #gluster
19:04 semiosis JoeJulian: i dont want to stab anyone
19:04 semiosis for the record
19:04 JoeJulian :D
19:12 jjakestr8 thanks for the help today guys.  I just (finally) got my Gluster up on AWS, 2 servers and one client in a distributed config.  from my client I did "sudo cp -r /var/log /mnt/gluster"
19:12 jjakestr8 and it all worked :)
19:12 jjakestr8 some files ended up on node1 others on node2
19:14 SOLDIERz__ joined #gluster
19:14 jjakestr8 Next stop - I'm going to install postgres (our companies fork of postgresql) . and see if it will work on glusterfs
19:14 jjakestr8 sorry: our company's fork
19:24 _polto_ joined #gluster
19:25 plarsen joined #gluster
19:32 SOLDIERz__ joined #gluster
19:41 Philambdo joined #gluster
19:58 gildub joined #gluster
20:14 mbukatov joined #gluster
20:17 diegows joined #gluster
20:23 sputnik13 joined #gluster
20:28 T0aD joined #gluster
20:29 sputnik13 joined #gluster
20:43 lifeofguenter joined #gluster
20:58 p0licy am I missing something when using the RHEL6 repo? how do I get /var/lib/glusterd/hooks/1/create/pre installed?
21:05 gnudna left #gluster
21:07 kkeithley_ AFAIK /var/lib/glusterd/hooks/1/create/pre is created the first time you start glusterd
21:08 sputnik13 joined #gluster
21:08 ndevos I thought the RPMs package that dir and put some samba script in there?
21:09 JoeJulian yeah
21:09 JoeJulian $ rpm -q --whatprovides /var/lib/glusterd/hooks/1/create/pre
21:09 JoeJulian glusterfs-server-3.6.2-1.fc21.x86_64
21:10 JoeJulian p0licy: what version did you install?
21:10 p0licy glusterfs-server.x86_64                3.6.2-1.el6                  @glusterfs-x86_64
21:10 JoeJulian should use the same spec file
21:11 JoeJulian looking at the selinux bug?
21:12 p0licy Intstalling it on a AMI instance
21:12 kkeithley_ is that directory a %ghost?
21:12 * kkeithley_ looks for a BZ
21:12 JoeJulian kkeithley_: bug 1198406
21:12 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1198406 unspecified, unspecified, ---, mgrepl, ON_QA , SELinux is preventing /usr/sbin/glusterfsd from 'execute' accesses on the file /var/lib/glusterd/hooks/1/start/post/S30samba-start.sh.
21:15 kkeithley_ and https://bugzilla.redhat.com/show_bug.cgi?id=1054847  is about %ghost directories being created by installing the rpm
21:15 glusterbot Bug 1054847: unspecified, unspecified, ---, kkeithle, ASSIGNED , rpm doesnt create all directories in /var/lib/glusterd
21:16 _br_ joined #gluster
21:17 Rapture joined #gluster
21:18 p0licy so how do I create the files and directories?
21:18 ndevos kkeithley_: huh? %ghost dirs getting created, sounds like a rpm bug
21:19 JoeJulian I didn't check just after installing. I've had it up and running so you're probably right.
21:19 kkeithley_ yeah, ISTR having a conversation with an rpm maintainer who admitted that %ghost is broken in various ways in different versions.
21:20 p0licy so should I use fedora instead of rhel?
21:21 gildub joined #gluster
21:22 wushudoin joined #gluster
21:23 p0licy kkeithley_: so how do i create the files and directories if glusterfsd hasn't
21:23 JoeJulian p0licy: What's the problem you're trying to solve?
21:25 p0licy JoeJulian: I am trying to create a replicate volume and im getting an error
21:26 p0licy it complains about no such file or directory: /var/lib/glusterd/hooks/1/create/pre
21:27 T3 joined #gluster
21:30 JoeJulian Can I see the error please?
21:31 p0licy JoeJulian: http://pastebin.com/3UD2yqUB
21:31 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
21:33 JoeJulian well that's just dumb...
21:33 JoeJulian why should it fail for that! Gah!
21:35 p0licy can I just create the directories manually?
21:35 bala joined #gluster
21:35 JoeJulian I don't see why not.
21:38 p0licy should I just create all of the %ghost dir in the spec file
21:38 edong23 joined #gluster
21:41 JoeJulian That's probably what I would do.
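(A sketch of creating the missing hook directories by hand, as discussed above; the exact set of hook subdirectories varies by release, so compare against the glusterfs-server spec file for the installed version.)
    for d in add-brick create delete remove-brick set start stop; do
        mkdir -p /var/lib/glusterd/hooks/1/$d/pre /var/lib/glusterd/hooks/1/$d/post
    done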
21:42 p0licy now I'm getting a new error: brick2.mount_dir not present
21:43 ndevos please file a bug against glusterd, because missing empty directories should not cause a failure while creating/starting/... volumes
21:43 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
21:43 p0licy I am doing that now
21:44 ndevos thanks!
21:59 hybrid5122 joined #gluster
22:21 T3 joined #gluster
22:26 badone joined #gluster
22:27 rotbeard joined #gluster
22:28 sputnik13 joined #gluster
22:31 sputnik13 joined #gluster
22:40 calum_ joined #gluster
22:41 davidbitton joined #gluster
22:52 wushudoin| joined #gluster
22:57 shaunm joined #gluster
23:42 _zerick_ joined #gluster
23:55 p0licy joined #gluster
