IRC log for #gluster, 2013-03-13


All times shown according to UTC.

Time Nick Message
00:08 sjoeboo_ joined #gluster
00:55 yinyin joined #gluster
01:02 puebele1 joined #gluster
01:10 _pol joined #gluster
01:12 Humble joined #gluster
01:19 lh joined #gluster
01:19 lh joined #gluster
01:24 yinyin joined #gluster
01:38 xymox joined #gluster
01:56 hagarth joined #gluster
02:00 _pol joined #gluster
02:05 glusterbot New news from newglusterbugs: [Bug 920890] Improve sort algorithm in dht_layout_sort_volname. <http://goo.gl/IHu2r>
02:11 sjoeboo_ joined #gluster
02:28 kkeithley1 joined #gluster
02:31 Oneiroi joined #gluster
02:35 jdarcy joined #gluster
02:40 morse joined #gluster
02:42 Humble joined #gluster
02:43 kevein joined #gluster
02:49 jdarcy joined #gluster
02:53 dustint joined #gluster
03:06 vshankar joined #gluster
03:06 ehg joined #gluster
03:21 bharata joined #gluster
03:34 anmol joined #gluster
03:36 glusterbot New news from newglusterbugs: [Bug 920372] Inconsistent ./configure syntax errors due to improperly quoted PKG_CHECK_MODULES parameters <http://goo.gl/ylPvK>
03:44 m0zes_ joined #gluster
03:54 Ryan_Lane joined #gluster
04:04 Humble joined #gluster
04:05 bulde joined #gluster
04:18 sahina joined #gluster
04:19 yinyin joined #gluster
04:24 _pol joined #gluster
04:29 sgowda joined #gluster
04:31 sripathi joined #gluster
04:40 hagarth joined #gluster
04:44 Troy joined #gluster
04:46 Troy left #gluster
04:46 Troy_ joined #gluster
04:46 Troy_ Hi guys!
04:47 Troy_ need little help here
04:47 Troy_ gsyncd initialization failed
04:50 vpshastry joined #gluster
04:57 hagarth joined #gluster
04:59 Troy_ http://pastebin.com/Rb0RyBkz
04:59 glusterbot Please use http://fpaste.org or http://dpaste.org . pb has too many ads. Say @paste in channel for info about paste utils.
05:00 Troy_ Paste #284413
05:01 Troy_ @paste #284413
05:01 mohankumar joined #gluster
05:02 Troy_ @paste
05:02 glusterbot Troy_: For RPM based distros you can yum install fpaste, for debian and ubuntu it's dpaste. Then you can easily pipe command output to [fd] paste and it'll give you an url.
05:04 Troy_ left #gluster
05:04 Troy_ joined #gluster
05:06 Troy_ http://fpaste.org/XUz4/
05:06 glusterbot Title: Viewing Paste #284415 (at fpaste.org)
05:06 Troy_ Hey guys .. any help please
05:06 glusterbot New news from newglusterbugs: [Bug 920916] socket connect is blocking <http://goo.gl/3NRtW>
05:06 yinyin joined #gluster
05:13 shylesh joined #gluster
05:14 rastar joined #gluster
05:18 bala joined #gluster
05:22 kkeithley_blr well, "connection to peer is broken" sounds like the ssh can't connect. Have you got a firewall (iptables) blocking ssh (port 22)?
05:23 kkeithley_blr on either the localhost machine or tmp-bkp?
05:23 _pol joined #gluster
05:26 hateya joined #gluster
05:27 Troy_ geo-replication --  connection to peer is broken
05:27 Troy_ its there
05:27 bharata_ joined #gluster
05:28 Troy_ I am able to ssh remote machine through command line
05:29 Troy_ even with the firewall stopped, I am getting the same error
05:30 yinyin joined #gluster
05:30 kkeithley_blr okay, so that's not it.
05:31 Troy_ firewall stopped on both ends
05:31 Troy_ path to gsyncd is also correct
05:31 kkeithley_blr and tmp-bkp is in dns or /etc/hosts on the gluster server?
05:32 Troy_ I have these test machines in AWS
05:32 Troy_ added entries in /etc/hosts file
05:32 Troy_ <ip address> tmp-bkp
05:33 kkeithley_blr anything in the /var/log/secure on tmp-bkp?
05:33 Troy_ let me check
05:35 Troy_ Accepted publickey for root from 10.xxx.xxx.xxx port 41921 ssh2
05:35 Troy_ pam_unix(sshd:session): session opened for user root by (uid=0)
05:35 Troy_ Received disconnect from 10.xxx.xxx.xxx: 11: disconnected by user
05:35 Troy_ pam_unix(sshd:session): session closed for user root
05:35 Troy_ thats
05:36 Troy_ seems like connections are being established
05:40 kkeithley_blr and nothing interesting in the /var/log/gluster logs on tmp-bkp?
05:41 Troy_ I have mounted volume gv0 at /mnt
05:41 Troy_ and mnt.log is there
05:42 pranithk joined #gluster
05:43 lalatenduM joined #gluster
05:44 kkeithley_blr more interested in logs from /usr/libexec/glusterfs/gsyncd that might indicate whether it even started.
05:45 Troy_ where I can find those logs ?
05:50 satheesh joined #gluster
05:50 kkeithley_blr /var/log/gluster/
05:51 kkeithley_blr that's where they would be, if they exist
05:51 kkeithley_blr on tmp-bkp
05:51 raghu` joined #gluster
05:51 Troy_ only mnt.log file is there
05:52 Troy_ which I shared ..
05:52 Troy_ nothing else
05:53 Troy_ here it is http://fpaste.org/8naE/
05:53 glusterbot Title: Viewing Paste #284416 (at fpaste.org)
05:56 * m0zes ran into some fun tonight. don't ever do multiple renames of a dir when all of the servers aren't up for a distributed volume. tools->tools.new, tools.old->tools was enough to put that folder into a really wonky state on power up of the other server.
05:57 m0zes luckily I had backups of that volume. I think I need more detailed notes for my other admin ;)
06:00 aravindavk joined #gluster
06:06 Troy_ do I need to install gluster-server & geo-replication package on tmp-bkp ?
06:06 Troy_ on order geo-replication to work
06:07 Troy_ in order geo-replication to work
06:08 kkeithley_blr yes. I thought you said that /usr/libexec/glusterfs/gsyncd was there on tmp-bkp
06:09 Troy_ yes, it was there
06:09 Troy_ oh GOD .. my bad ..
06:10 Troy_ stupid me :P
06:10 Troy_ now a different error ... I can see geo-replicated files in /backup/gv0
06:10 Troy_ but log says
06:10 Troy_ [socket.c:1715:socket_connect_finish] 0-glusterfs: connection to  failed (Connection refused)
06:12 Troy_ please excuse me, I am taking too much time
06:12 Troy_ http://fpaste.org/a6PL/
06:12 glusterbot Title: Viewing Paste #284419 (at fpaste.org)
06:12 Troy_ here is the log now
06:18 Troy_ hey kkeithley_blr .. thank you so much for your response ..
06:18 Troy_ really appreciate it ...
06:18 Troy_ it looks like .. I did it totally wrong .. everything is working just fine now
06:19 kkeithley_blr you're welcome. you still have firewall stopped, right?
06:19 Troy_ thank you .. & have a nice day
06:19 kkeithley_blr oh, okay, good news
06:19 Troy_ I turned them on .. and everything is still working :)
06:19 kkeithley_blr good
06:19 kkeithley_blr glad you got it working
06:20 Troy_ my mistake man .. but do appreciate you for guiding me in right direction
06:20 Troy_ have a very good night bro
06:23 Troy_ left #gluster
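
A minimal sketch of the geo-replication setup Troy_ and kkeithley_blr converge on above, assuming the master volume is gv0 and the slave is tmp-bkp:/backup/gv0 (both taken from the conversation); the package names are an assumption for a RHEL-style install, and passwordless root ssh to the slave is already in place:

    # on the slave (tmp-bkp): gsyncd must exist there too
    yum install glusterfs-server glusterfs-geo-replication

    # on the master: start and check the session
    gluster volume geo-replication gv0 tmp-bkp:/backup/gv0 start
    gluster volume geo-replication gv0 tmp-bkp:/backup/gv0 status
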
06:27 twx joined #gluster
06:46 balunasj|mtg joined #gluster
06:46 guigui joined #gluster
06:49 jcapgun joined #gluster
06:49 jcapgun Hello folks
06:50 jcapgun was wondering if anybody could help me with a distributed replicate problem i'm having with GlusterFS at the moment
06:50 jcapgun where it's sooooo slow to read
06:51 jcapgun causing major issues with our app at the moment :(
06:51 vimal joined #gluster
06:56 jcapgun I've got 6 bricks configured
06:57 jcapgun and each gluster server is actually connecting as a client as well
06:57 Nevan joined #gluster
07:01 jcapgun this is the current configuration.
07:01 jcapgun http://dpaste.com/1021405/
07:01 glusterbot Title: dpaste: #1021405: gluster info, by joe (at dpaste.com)
07:03 jcapgun I'm really not too sure what's going on, but even doing a simple "ls" in a subdirectory of the volume is extremely, extremely slow.
07:04 jcapgun JoeJulian - hopefully you'll be online soon to provide some consultation. :)
07:07 vpshastry joined #gluster
07:13 aravindavk joined #gluster
07:14 aravindavk joined #gluster
07:16 test_ joined #gluster
07:21 displaynone joined #gluster
07:25 jtux joined #gluster
07:39 dobber_ joined #gluster
07:40 ngoswami joined #gluster
07:41 mohankumar joined #gluster
07:45 ctria joined #gluster
08:05 displaynone joined #gluster
08:07 andreask joined #gluster
08:11 jcapgun I'm also seeing errors like this in the log: http://dpaste.com/1021445/
08:11 glusterbot Title: dpaste: #1021445: Errors in log 1, by joe (at dpaste.com)
08:13 jcapgun additional errors:  http://dpaste.com/1021447/
08:13 glusterbot Title: dpaste: #1021447: Errors in log 2, by joe (at dpaste.com)
08:14 jcapgun http://dpaste.com/1021448/
08:14 glusterbot Title: dpaste: #1021448: Errors in log 3, by joe (at dpaste.com)
08:21 pranithk jcapgun: Seems like there was a disconnect on the first brick and now self-heals are happening which lead to the performance problem...
08:21 jcapgun ah, ok
08:21 jcapgun and how long does that usually take
08:21 jcapgun ?
08:22 jcapgun i see a bunch of errors currently in the glustershd log file
08:23 jcapgun http://dpaste.com/1021457/
08:23 glusterbot Title: dpaste: #1021457: glustershd.log file, by joe (at dpaste.com)
08:43 vshankar_ joined #gluster
08:43 shishir joined #gluster
08:43 test_ joined #gluster
08:43 lala_ joined #gluster
08:43 pranithk_ joined #gluster
08:44 bulde1 joined #gluster
08:44 shireesh_ joined #gluster
08:44 kkeithley1 joined #gluster
08:45 bala1 joined #gluster
08:45 vpshastry1 joined #gluster
08:45 rastar1 joined #gluster
08:47 hagarth joined #gluster
08:47 anmol joined #gluster
08:56 rastar joined #gluster
08:56 kkeithley1 joined #gluster
08:56 bulde joined #gluster
08:56 vpshastry joined #gluster
08:56 anmol joined #gluster
08:56 lala_ joined #gluster
08:56 hagarth joined #gluster
08:56 spai joined #gluster
08:56 shireesh_ joined #gluster
08:56 vshankar joined #gluster
08:57 bala joined #gluster
08:57 shishir joined #gluster
08:58 shylesh joined #gluster
08:58 ngoswami joined #gluster
08:58 wica_ Hi, is it possible to clean up the .glusterfs dir ?
08:59 wica_ Why I ask this is because I see files of more than 200MB in that dir.
09:00 NuxRo wica_: that directory is very important, do not touch it
09:00 vimal joined #gluster
09:00 wica_ NuxRo: I understand it is very important :)
09:00 pranithk joined #gluster
09:00 wica_ Can I read somewhere what the files in this dir mean?
09:02 rastar joined #gluster
09:04 mooperd joined #gluster
09:04 Humble joined #gluster
09:08 andreask wica: http://www.gluster.org/2012/09/what-is-this-new-glusterfs-directory-in-3-3/
09:08 glusterbot <http://goo.gl/sOR50> (at www.gluster.org)
09:08 lge :)
09:09 sahina joined #gluster
09:09 aravindavk joined #gluster
09:09 wica andreask: Thnx :)
09:17 wica andreask: If I read it correctly, every file in .glusterfs should be a symlink?
09:20 JoeJulian No, every directory is a symlink. Every file is a hardlink.
09:22 shireesh_ joined #gluster
09:22 wica So this is normal?
09:22 wica 12G    .glusterfs/c2
09:22 wica 18G    .glusterfs/d3
09:24 ndevos yes, the files are hard-links, meaning they point to the exact same data as the files under the non-.glusterfs directory
09:24 puebele1 joined #gluster
09:25 ndevos if you modify the contents of a file under the .glusterfs directory , the contents of the file with user-friendly filename will have the same change
09:26 wica ndevos: Thnx, just read up on the diff between hard and softlink.
09:26 ndevos it's like one contents, two filenames (which makes healing easier)
09:26 wica Yep, I understand that now :)
09:26 ndevos :)
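
A quick way to see the hard-link relationship ndevos describes, run directly on a brick (the brick path /export/brick1 and the file name are illustrative):

    stat -c '%h %i %n' /export/brick1/somefile                             # link count and inode number
    find /export/brick1/.glusterfs -samefile /export/brick1/somefile      # the matching gfid-named file

Both paths should report the same inode, i.e. one set of data blocks with two names.
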
09:27 sripathi joined #gluster
09:28 gbrand_ joined #gluster
09:31 eryc_ joined #gluster
09:32 johndesc2 joined #gluster
09:32 hagarth_ joined #gluster
09:33 redbeard joined #gluster
09:35 bdperkin_ joined #gluster
09:42 Shdwdrgn joined #gluster
09:42 jclift joined #gluster
09:42 jag3773 joined #gluster
09:43 bharata_ joined #gluster
09:43 jiffe98 joined #gluster
09:43 verdurin joined #gluster
09:43 satheesh joined #gluster
09:44 joeto joined #gluster
09:47 sgowda joined #gluster
09:52 nueces joined #gluster
10:02 cw joined #gluster
10:04 bulde1 joined #gluster
10:06 aravindavk joined #gluster
10:09 ProT-0-TypE joined #gluster
10:21 samppah @yum repo
10:21 glusterbot samppah: kkeithley's fedorapeople.org yum repository has 32- and 64-bit glusterfs 3.3 packages for RHEL/Fedora/Centos distributions: http://goo.gl/EyoCw
10:27 sripathi joined #gluster
10:35 yinyin joined #gluster
10:39 Nevan1 joined #gluster
10:39 glusterbot New news from newglusterbugs: [Bug 921024] Build process not aware of --prefix directory <http://goo.gl/uUplT>
10:41 Slydder joined #gluster
10:41 Slydder hey all
10:42 jcapgun joined #gluster
10:44 jcapgun Hey folks -
10:45 jcapgun did anybody have a chance to take a peek at my log entries I dpasted earlier? :)
10:49 yinyin_ joined #gluster
10:50 Staples84 joined #gluster
10:51 JoeJulian jcapgun: No errors in there.
10:51 jcapgun hey man!
10:51 jcapgun how's it going?
10:52 JoeJulian It's almost 4am and I'm awake.
10:52 jcapgun wow
10:52 jcapgun sorry :(
10:52 jcapgun that is no good
10:53 JoeJulian @meh
10:53 glusterbot JoeJulian: I'm not happy about it either
10:53 JoeJulian Probably won't be much longer though.
10:54 JoeJulian So "I" is an info level message. You're mostly concerned with "E"
10:54 jcapgun yeah
10:54 jcapgun understood
10:54 jcapgun just was in a panic because the volume is basically useless :(
10:55 JoeJulian I'd make sure I knew how I got in a self-heal situation, just in case something's broken.
10:55 jcapgun to be honest - i'm not even sure
10:55 jcapgun if a server is rebooted for one reason or another, that would not put it into a self heal would it?
10:55 jcapgun or would it
10:55 jcapgun i'm not entirely sure
10:56 JoeJulian Sure. Anything that changes will have to get updated.
10:56 jcapgun ok
10:56 jcapgun one thing I need to ask.  We're using a distributed replicate volume
10:56 jcapgun if one server goes down, the data should still be available correct?
10:57 JoeJulian yes
10:57 jcapgun hmmm
10:57 jcapgun it seems something happens when one brick goes down
10:58 jcapgun last week, i noticed when I did an "ls -la' on the volume, i saw some questions makrs
10:58 jcapgun *marks
10:58 jcapgun some folders were not available and could not do anything with the volume
10:59 jcapgun when i looked at gluster volume status, one server was not online
11:00 jcapgun which is why i'm curious the volume was hosed.
11:01 JoeJulian I would have looked in the client log at that point and see what it said.
11:03 jcapgun ok
11:05 jcapgun so in your experience - how long does a self heal take typically?
11:05 jcapgun i guess it all depends eh
11:06 JoeJulian Depends on how many files were touched during the downtime, and how large the files are.
11:07 jcapgun lots of small files I think.  I didn't think the downtime was very long at all.  Unless it's been down and I didn't know about it.  I mean I do have monitoring set up on this.
11:07 jcapgun definitely weird
11:09 JoeJulian That's what my guess would have been.
11:09 jcapgun ok
11:09 jcapgun so i guess i'll just be patient and wait for this self heal to complete
11:10 JoeJulian logstash ftw
11:10 jcapgun hope it's not days :)
11:10 manik joined #gluster
11:10 jdarcy joined #gluster
11:10 JoeJulian You can adjust the number of simultaneous self-heals
11:11 jcapgun ok
11:11 jcapgun i'll read up on how to do that
11:17 jtux joined #gluster
11:18 jcapgun would be this setting Joe?
11:18 jcapgun cluster.self-heal-window-size
11:21 jdarcy joined #gluster
11:22 lpabon joined #gluster
11:24 JoeJulian cluster.background-self-heal-count
11:26 JoeJulian default is 16
11:27 jcapgun ok
11:27 jcapgun what do you recommend that setting be?
11:28 JoeJulian Whatever works for your performance requirements.
11:28 jcapgun ok
11:28 jcapgun yeah i'm reading an old log
11:28 jcapgun irclog
11:28 jcapgun and you're in it :)
11:29 JoeJulian Hehe
11:29 JoeJulian I need to get a life...
11:29 jcapgun haha
11:30 jcapgun or, you can continue helping people and feel good about yourself
11:30 jcapgun because without you i'm pretty sure some people (like me), would be f'd
11:30 pkoro joined #gluster
11:32 jcapgun so i'm going to do: gluster volume set gv_savedata cluster.background-self-heal-count 30
11:32 jcapgun as an example
11:32 JoeJulian Perhaps, but your initial complaint was about speed...
11:33 aravindavk joined #gluster
11:33 JoeJulian So maybe scale back to 8?
11:33 jcapgun yes
11:33 jcapgun aaaah, i see
11:33 jcapgun well two things.  The volume is unusable right now because it's fixing itself
11:33 jcapgun but
11:34 jcapgun i want the self heal to finish faster
11:34 jcapgun it's been going on all day...
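
A sketch of the tuning being discussed, using the volume name gv_savedata from the conversation and the lower count JoeJulian suggests; whether 8 is right depends on the hardware and workload:

    gluster volume set gv_savedata cluster.background-self-heal-count 8
    gluster volume heal gv_savedata info     # rough view of what still needs healing
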
11:39 jcapgun this doesn't seem very good.
11:40 jcapgun http://dpaste.com/1021615/
11:40 glusterbot Title: dpaste: #1021615: warning, by joe (at dpaste.com)
11:43 duerF joined #gluster
11:44 puebele1 joined #gluster
11:45 jcapgun and a message about another crawl already in progress
11:47 jcapgun ok, I am seeing a bunch of errors in the log now though, so maybe I do need to be concerned...
11:48 jcapgun http://dpaste.com/1021624/
11:48 glusterbot Title: dpaste: #1021624: errors, by joe (at dpaste.com)
11:49 jcapgun i'll be right back.  I've got to reboot my machine...
11:50 sgowda joined #gluster
11:52 pmuller_ joined #gluster
11:52 jcaputo joined #gluster
11:53 pmuller_ hello
11:53 glusterbot pmuller_: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
11:53 jcaputo back
11:54 go2k joined #gluster
11:56 go2k Hey guys, I'm testing gluster's replication and noticed that when I "plug out" the cable from one node the mounted FS is not accessible for around 2 minutes (I do ls and it hangs until some timeout...). How do I decrease that timeout?
11:59 Slydder hey all.
11:59 go2k hi there
11:59 edward1 joined #gluster
12:01 Slydder I am setting up 2 gluster servers in 2 openvz containers. I will be using corosync/pacemaker to switch the active IP in case one of the main gluster server dies so that the second one takes over. however I am not sure how to go about getting them both to use the same brick for the replication. anyone got a suggestion?
12:05 JoeJulian joined #gluster
12:06 JoeJulian ~ping-timeout | go2k
12:06 glusterbot go2k: The reason for the long (42 second) ping-timeout is because re-establishing fd's and locks can be a very expensive operation. Allowing a longer time to reestablish connections is logical, unless you have servers that frequently die.
12:08 ndevos Slydder: "use the same brick for the replication" would not replicate, you need two different bricks for that
12:09 stickyboy Hmm, updated a CentOS client from 6.3 -> 6.4 and now I can't ls my fuse mount.    `ls /home`   turns into a zombie.
12:09 saurabh joined #gluster
12:09 Slydder ndevos: I know that. this is the server end. the clients will have a local brick for replication. I only need the servers to use the same brick but only one server will be active at a time.
12:10 stickyboy I saw some AVC denials for sshd and fuse, but I can't even ls the mount with selinux disabled.
12:11 saurabh joined #gluster
12:12 ndevos Slydder: sharing a filesystem on the same blockdevice is something gfs (global filesystem) and ocfs do, glusterfs is not commonly used with shared blockdevices
12:13 Slydder well they won't be sharing it. only 1 server will be active at a time. if both servers were actually active then it would be sharing.
12:15 saurabh joined #gluster
12:17 sripathi joined #gluster
12:17 andreask joined #gluster
12:20 rastar joined #gluster
12:23 stickyboy Something is seriously wrong with CentOS 6.4... I wish I hadn't updated. :\
12:27 jcaputo so @JoeJulian, this error, is it anything to be concerned about?
12:27 go2k JoeJulian: thanks a lot :)
12:28 JoeJulian jcaputo: I don't think so
12:29 jcaputo ok
12:29 jcaputo i'll let this thing do it's thing all night
12:29 jcaputo and i'll check back in the morning
12:29 jcaputo at this time, the volume is still unusable, and one node is going crazy with it's self healing
12:32 kevein_ joined #gluster
12:32 Slydder ok. got it working
12:32 go2k JoeJulian: that timeout is annoying because if I have 2 bricks replicated over internet and one of them is down then I want to automatically switch to the second node and allow writing straight away
12:32 go2k but maybe these 42 seconds are worth it...
12:32 JoeJulian Your bigger problem if you're replicating over the internet is probably going to be latency.
12:32 joeto joined #gluster
12:33 go2k yes, but in my case big guys from the top said that's "acceptable"
12:33 go2k and I'm just the guy who execute orders lol
12:33 go2k but I am not using geo-replication because it's one way
12:34 JoeJulian network.ping-timeout is the setting. We always recommend not adjusting it.
12:35 go2k ok, thanks, will try to play with it now
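
For reference, the option JoeJulian names is set like any other volume option, though as glusterbot notes the 42 second default is deliberate and lowering it is generally discouraged (volume name is illustrative):

    gluster volume set myvol network.ping-timeout 42
    gluster volume info myvol      # the reconfigured options section shows the current value
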
12:37 jdarcy joined #gluster
12:38 gbrand_ joined #gluster
12:39 go2k JoeJulian: if I created my replica manually using commands, do I set that parameter also in "gluster" command line or is there any specific configuration file somewhere
12:43 puebele1 joined #gluster
12:44 go2k ok found it
12:45 dustint joined #gluster
12:49 bennyturns joined #gluster
12:51 jcaputo Thanks again Joe - your help again is really appreciated
12:54 lalatenduM joined #gluster
13:00 rob__ joined #gluster
13:01 saurabh joined #gluster
13:05 robos joined #gluster
13:08 shylesh joined #gluster
13:09 Skunnyk Hi all
13:12 Skunnyk on gluster 3.3, if I need to replace a server (crashed server), if I follow this documentation : http://gluster.org/community/documentation//index.php/Gluster_3.2:_Brick_Restoration_-_Replace_Crashed_Server , when my "new" brick is online, gluster will start to re-replicate files from the other node, but will I have big performance issues as each file accessed by a client will be "force" replicated ?
13:12 glusterbot <http://goo.gl/IqaGN> (at gluster.org)
13:12 yinyin joined #gluster
13:13 kkeithley1 joined #gluster
13:13 Skunnyk or replication will be in background ?
13:14 jtux joined #gluster
13:15 Skunnyk because my application reads lots of files on gluster, so I think I'll have latency/perf problems, like explained on http://joejulian.name/blog/replacing-a-glusterfs-server-best-practice/ (This idea sucked.) :þ
13:16 glusterbot <http://goo.gl/pwTHN> (at joejulian.name)
13:20 rcheleguini joined #gluster
13:23 jclift JoeJulian: Skunnyk ^^^
13:24 Nevan joined #gluster
13:30 lalatenduM joined #gluster
13:33 dustint joined #gluster
13:39 sahina joined #gluster
13:40 rwheeler joined #gluster
13:46 NeatBasis joined #gluster
13:46 aliguori joined #gluster
13:51 dustint joined #gluster
13:52 ctria joined #gluster
13:52 dustint joined #gluster
13:53 Skunnyk :)
13:54 Philip__ joined #gluster
14:00 Skunnyk ok, seems to be what I thought, I removed a server, added a new 'blank' one with the same uuid, and now it takes a looong time to do some ls / find / cat etc because it replicates all files onto the new bricks, so on a production environment this would break the application… or am I missing something :)
14:03 semiosis Skunnyk: replication happens in the background but it is aggressive by default, which will impact performance.  you can adjust how aggressive the background healing is with volume options.
14:04 semiosis Skunnyk: some people (myself included) get better performance setting the heal algorithm to full (default is diff)
14:05 semiosis Skunnyk: also try setting the number of parallel background heals to something low... cluster.background-self-heal-count: 2 <-- what i use
14:05 stickyboy semiosis: Hmm, diff is more intensive than full?
14:05 stickyboy I'll remember that when I get my gluster deployed...
14:05 semiosis stickyboy: it depends on your use case and hardware
14:05 stickyboy semiosis: Gotcha.
14:06 semiosis stickyboy: for me files dont get modified, just created & read, so diffing them isn't worthwhile.
14:06 semiosis stickyboy: if you had vm images for example, large files with lots of random io, then maybe diff would be better
14:07 Skunnyk hum, ok
14:07 Skunnyk I have small files
14:08 Skunnyk :)
14:08 Skunnyk so i'll try
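
A sketch of semiosis' suggestions, assuming a replicated volume named myvol; "full" copies whole files instead of diffing, which he finds faster for write-once workloads:

    gluster volume set myvol cluster.data-self-heal-algorithm full
    gluster volume set myvol cluster.background-self-heal-count 2
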
14:08 Slydder I am trying to get gluster to bind to a specific IP (have 2 ips one of which is a failover IP which should be used for gluster) I have set "option transport.socket.bind-address" to the ip it should bind to but now I am getting rdma errors. any ideas?
14:08 semiosis Slydder: dont do that :)
14:09 semiosis Slydder: editing volfiles is strongly discouraged
14:09 semiosis Slydder: use ,,(hostnames) and map to the IP you want to use
14:09 glusterbot Slydder: Hostnames can be used instead of IPs for server (peer) addresses. To update an existing peer's address from IP to hostname, just probe it by name from any other peer. When creating a new pool, probe all other servers by name from the first, then probe the first by name from just one of the others.
14:09 Slydder I have no volumes atm. just a blank install.
14:09 Humble joined #gluster
14:10 Slydder I have hostnames in the /etc/hosts file. and they work. the problem is that when gluster starts it binds to the wrong IP and those when I do a peer probe it get the entry for the wrong address.
14:11 stickyboy semiosis: regarding heal algorithm, thanks!  I'll remember that.
14:11 semiosis Slydder: add a host route
14:11 semiosis stickyboy: yw
14:11 Slydder what?
14:12 Slydder do you mean a routing table entry? if so there is not need for that because all peers can find the gluster server just fine.
14:12 semiosis Slydder: glusterfs binds to all IPs
14:12 semiosis Slydder: is the problem that when you probe from server A to B, then B gets the wrong IP for A?
14:13 semiosis Slydder: or do i misunderstand?
14:16 Slydder the server with multiple IP's gets the second IP when corosync passes the resource to the node. which means up until that point the main/only IP for the server is the one that should NOT be used for gluster. once gluster starts it binds to all IP's and when it sends out it will go out over the first IP instead of the second (both are in the same subnet). which is why I want to bind gluster to only the failover IP.
14:17 semiosis Slydder: use ,,(hostnames)
14:17 glusterbot Slydder: Hostnames can be used instead of IPs for server (peer) addresses. To update an existing peer's address from IP to hostname, just probe it by name from any other peer. When creating a new pool, probe all other servers by name from the first, then probe the first by name from just one of the others.
14:17 semiosis Slydder: then you can map the hostname to whichever IP you want
14:17 Slydder I only use hostnames. never IP's
14:17 semiosis that should solve the problem
14:18 semiosis you can't restrict gluster to binding to a specific IP, sorry
14:19 Slydder which means I have to probe from the client nodes instead of the main node. ok.
14:19 Slydder a bit of a pain but hey.
14:20 semiosis probe from client?  that's highly unusual
14:20 semiosis ,,(glossary)
14:20 glusterbot A "server" hosts "bricks" (ie. server1:/foo) which belong to a "volume"  which is accessed from a "client"  . The "master" geosynchronizes a "volume" to a "slave" (ie. remote1:/data/foo).
14:23 wushudoin joined #gluster
14:23 Slydder semiosis: it is actually 2 openvz containers acting as the server for a single brick. if one container dies the other comes online and takes over the brick. gfs1 and gfs2 have their own IP's but also get a second IP from corosync which is reachable as gfs.
14:24 semiosis Slydder: why on earth would you want to do that?
14:24 semiosis :)
14:24 semiosis Slydder: sounds very complicated & does it provide any benefits over glusterfs replication?
15:25 Slydder I am running raid 1 on all client nodes. using gluster replication to "gfs" which is a raid 10 system.
15:26 Skunnyk semiosis, cluster.background-self-heal-count doesn't seem to be in the documentation oO
14:26 Slydder so even if gluster does decide to depart I still have local copies until I can get gluster back up on the server. with corosync I get failover for "gfs" IP and glusterfs-server across 2 containers.
14:27 Slydder so if gluster does die for some reason not having to do with the brick then the other container starts gluster and keeps going until I can take a look at the install that died.
14:28 JoeJulian @mount host
14:29 JoeJulian @mount server
14:29 glusterbot JoeJulian: (#1) The server specified is only used to retrieve the client volume definition. Once connected, the client connects to all the servers in the volume. See also @rrnds, or (#2) Learn more about the role played by the server specified on the mount command here: http://goo.gl/0EB1u
14:30 ndevos Slydder: I fail to see the advantage in sharing the brick and exporting that over glusterfs - what functionality does glusterfs offer you that you want to use?
15:30 ndevos plain nfs sounds way easier if your storage is shared already...
14:31 JoeJulian I'm still trying to figure out his whole raid thingamajiggy. Is he sharing files that he's doing raid with loopback devices?
14:31 Slydder I am replicating to multiple client nodes with a HA gluster server is all.
14:32 JoeJulian GlusterFS replication = HA. Adding other HA management apparently just adds confusion
14:32 Slydder just that the HA gluster server uses a raid 10 brick is all.
14:33 ndevos Slydder: but whats the thing with multiple servers that manage a brick (in some fail-over mode)?
14:33 JoeJulian So you only have 1 server?
15:33 Skunnyk well, so in case of hardware failure, if I replace the server, I just need to wait some hours before reusing my gluster (waiting for re-replication) ? :/
14:34 Skunnyk I have poor performance during "recovery", even if I don't launch a self heal on replicate
14:34 Slydder correct 1 server is active. the second sits and waits in case server 1 dies. if it does then corosync does a failover of the IP and gluster service. and everything is running again.
14:34 ndevos Slydder: and what purpose/advantage does gluster give you?
14:35 * JoeJulian goes back to sleep
14:35 * ndevos needs a coffee
14:35 Slydder ndevos: hmmm. not sure. maybe local copies of data on large shares instead of slow nfs access which can swamp a segment.
14:36 Slydder am currently replicating ca. 3 TB of data across 12 nodes (excluding the server).
14:37 JoeJulian replicating 3TB across (and here's the confusion) 12 of something that isn't servers.
14:37 ndevos Slydder: glusterfs is intended for a scale-out environment, it is supposed to run on multiple servers to provide that feature... running one glusterfs server does not fit in the picture
14:38 JoeJulian replication, in the context of glusterfs, is something that's done between servers.
14:39 jbrooks joined #gluster
14:39 tqrst is "df -i" supposed to be accurate for mounted gluster volumes? IUsed seems very high.
14:39 Slydder correct. however, it is easier to explain my setup this way because the gfs server is holding multiple bricks that are being accessed by certain client nodes (servers if you wish).
14:41 JoeJulian It's not a wish thing. There's processes that determine a server. There's a mounted glusterfs volume that determines a client. They can be both.
14:41 ndevos Slydder: your "gfs server" is running glusterd and all?
14:41 Slydder correct. that is the order in which the volume is created.
14:43 Slydder the /var/lib/glusterd directory is a bind mount which is shared across the 2 containers. thus both gluster servers are using the same ID and vol configs.
14:43 Slydder which is why only 1 is active at a time.
14:43 Rocky_ joined #gluster
14:44 JoeJulian Slydder: fpaste.org the output of "gluster volume info {vol}" for one of your volumes.
14:44 Slydder and 8 of the client nodes are also HA corosync nodes.
14:47 bugs_ joined #gluster
14:49 JoeJulian ?
14:50 semiosis Skunnyk: since glusterfs 3.3.0 you dont need to trigger a self-heal, there is a self-heal daemon which does it proactively.  set the background self heal count to something small like 2 to reduce the performance impact.
14:52 Skunnyk semiosis, hum ok, so it's only glustershd which checks if replication is ok when I do a "ls -l" on a folder/file etc
14:52 semiosis Skunnyk: sorry i dont understand what you just said
14:52 Skunnyk sorry
14:53 Skunnyk glusterfshd (health daemon) do backgrount self heal
14:53 Skunnyk background*
14:53 semiosis yes
14:53 JoeJulian It does *and* any files your client touches gets checked *as well*.
14:53 Skunnyk ok, so from client too
14:54 Skunnyk so if replace a dead server with a new server
14:54 Skunnyk I can't do a "background" replication on new server
14:54 JoeJulian Then just do "gluster volume heal $vol full"
14:54 Skunnyk hum
14:55 Skunnyk so here, glusterfshd will replicate all needed files
14:55 Skunnyk but when a client accesses the data too ?
14:55 Slydder JoeJulian: http://pastie.itadmins.net/Jd56Grv1
14:55 glusterbot Title: Untitled - IT Admins Pastebin (at pastie.itadmins.net)
14:56 semiosis Skunnyk: it checks the files and queues for background healing when it finds a file that is not in sync
14:56 JoeJulian Slydder: And this volume would be mounted on host test1 I presume?
14:56 Slydder correct
14:57 Skunnyk yes, but if I try to access a file with the client , it will be instantly replicated ?
14:57 Skunnyk not queued ?
14:57 Skunnyk right ?
14:57 semiosis Skunnyk: queued also
14:57 JoeJulian On the host, test1, if you ping gfs it pings the floating ip?
14:57 semiosis Skunnyk: if it needs to be healed
14:57 Slydder so. I just shut down gfs2 and gfs1 fired up and the volume is available
14:57 Slydder JoeJulian: correct
14:58 JoeJulian Slydder: Then your original question seems moot. The client (test1) retrieves that volume definition and connects to the ip addresses resolved from those hostnames.
14:59 Slydder yes. because I did the peer probe from the client node.
14:59 JoeJulian Aha!
14:59 Skunnyk semiosis, ok, so if I run volume heal $vol full after the setup of a new server , all needed operation will be queued
14:59 Slydder using the gfs hostname
14:59 JoeJulian So you followed the directions. :P
15:00 JoeJulian An interesting application. One server with 8 bricks, and 8 servers with 1 brick. Creating 8 replica 2 volumes of 2 bricks each.
15:00 Slydder the problem is that I have to disable the original IP on the client node so that the peer probe goes over the correct IP to gfs, otherwise it doesn't use the floating IP.
15:01 Slydder JoeJulian: correct
15:01 Slydder all client nodes are raid 1 and the main server is raid 10
15:01 JoeJulian Can you use different subnets for the original ip and the floating ip?
15:02 zetheroo joined #gluster
15:02 daMaestro joined #gluster
15:02 semiosis Skunnyk: yes
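
Once the replacement server is back in the pool, the full crawl JoeJulian mentions can be kicked off and watched roughly like this (volume name is illustrative):

    gluster volume heal myvol full     # queue a crawl of everything
    gluster volume heal myvol info     # list entries still pending heal
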
15:02 Slydder not really. was looking into doing that but it would require a complete restructuring of the subnets, vpns and so on.
15:02 JoeJulian If they're on the same subnet, the ip stack will choose whichever route it finds first, unless you weight your routes.
15:03 JoeJulian The other option, of course, is to weight your routes.
15:03 Slydder which I could theoretically do with corosync on failover
15:03 Slydder nice idea.
15:03 zetheroo if I do 'gluster peer probe server2" from server1 it's successful, but if I do "gluster peer probe server1" from server2 it's unsuccessful ....
15:03 zetheroo Probe returned with unknown errno 107
15:03 Slydder zetheroo: check your hosts file
15:03 Slydder /etc/hosts
15:04 zetheroo both servers have identical /etc/hosts file
15:05 JoeJulian Check /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
15:05 lpabon joined #gluster
15:05 JoeJulian Also make sure it's not iptables or selinux
15:06 balunasj joined #gluster
15:06 zetheroo ok wait ... I should say that both server are running the same OS and that they were both setup the same way ...
15:06 _pol joined #gluster
15:06 JoeJulian Were they ,,(cloned servers)
15:06 glusterbot Check that your peers have different UUIDs ('gluster peer status' on both). The uuid is saved in /var/lib/glusterfs/glusterd.info - that file should not exist before starting glusterd the first time. It's a common issue when servers are cloned. You can delete the /var/lib/glusterfs/peers/<uuid> file and /var/lib/glusterfs/glusterd.info, restart glusterd and peer-probe again.
15:06 zetheroo no not cloned ... just setup the same
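
A short checklist for the errno 107 probe failure, based on the suggestions above (the log path is the one JoeJulian names; the glusterd.info location shown is the usual one on 3.3 and may differ per packaging):

    gluster peer status                                        # UUIDs must differ between servers
    cat /var/lib/glusterd/glusterd.info                        # per-server UUID
    tail /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
    iptables -L -n && getenforce                               # rule out firewall / selinux
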
15:07 tjstansell anyone have an idea what the size of the active glusterfs dev team is? i'm just curious how many folks there are working on bugs, fixes, etc as their daily routine (as opposed to community folks who do this on the side)
15:07 JoeJulian Most are around 5' 10"
15:07 johnmark lol
15:07 tjstansell haha
15:07 tjstansell nice
15:07 johnmark tjstansell: we have about 30 devs total
15:08 johnmark some of them drop in the channel
15:08 johnmark usually jdarcy, kkeithley_blr, hagarth_, and avati
15:08 tjstansell yeah, those are familiar names :)
15:08 JoeJulian pranithk's been making regular appearances too
15:09 tjstansell are these devs dedicated to glusterfs? or are they part of a generic pool that works on various projects?
15:09 semiosis y4m4 even made a cameo recently
15:09 Staples84 joined #gluster
15:09 Slydder so guys. thanks for letting me bother you with my probs. happy all is working as it should be. now have to start the documentation for this setup. lol.
15:10 Slydder later all.
15:10 zetheroo ok it was /etc/resolv.conf that was the issue ... needed to add 'search domain.local' :)
15:10 tjstansell as "unknown" as the process is for getting backports submitted and the various frustrations around all that, i've been pretty happy to get a fix for my bug as quickly as it did.  and now just waiting for a review for the backport, which is great.
15:11 tjstansell certainly not something you'd see very often outside of the open source world.
15:12 JoeJulian Unless you're Boeing...
15:12 JasonG I have two identical centos 6.3 clients connected to the same storage1.example.com:/v0...if i delete a file from client1 the file goes away if i delete a file from client2 the file goes away briefly and then comes back
15:12 JoeJulian And you're deleting from the client mountpoint, not the brick, right?
15:13 JasonG right
15:13 JasonG not touching the bricks directly at all
15:13 JasonG only to do an ls to see if the files are present or not
15:13 Skunnyk well, I've launch a  volume heal $vol full :)
15:15 Skunnyk ok it just creates empty files on the new server when I access the folder from the client
15:18 hybrid512 joined #gluster
15:18 JasonG JoeJulian: that is not a normal thing correct?
15:18 zetheroo I have 2 x 3TB disks - brand new and completely untouched ... can I make them a single volume? ... or brick? or are they each a brick? ... hmm ....
15:18 Skunnyk hum, but i have lots of errors in glustershd.log, like :
15:18 Skunnyk E [afr-self-heald.c:685:_link_inode_update_loc] 0-nv-cluster-replicate-0: inode link failed on the inode (00000000-0000-0000-0000-000000000000)
15:18 Skunnyk o
15:18 Skunnyk oO
15:19 JoeJulian JasonG: Not normal. Check the client log on the misbehaving client for errors.
15:19 JoeJulian Skunnyk: Which would be accurate. That's an invalid gfid.
15:22 Skunnyk hum ok
15:22 Skunnyk 0-nv-cluster-replicate-0: open of <gfid:56d99fa9-78bb-438a-9006-842a992aba78> failed on child nv-cluster-client-1 (No such file or directory)
15:22 Skunnyk o/
15:22 Skunnyk oh and JoeJulian ; thank you for your blog, very useful :)
15:22 JasonG JoeJulian: found the culprit fat fingered the hostname for the DNS in my /etc/hosts
15:23 JoeJulian Skunnyk: Thank you.
15:23 JoeJulian JasonG: Interesting, glad you found it.
15:24 Humble_afk joined #gluster
15:24 zetheroo trying to follow the gluster documentation  ... but not finding anything which would tell me how to get 2 HDD's to be a single volume ...
15:24 JasonG so i'm guessing that it has to be able to resolve all bricks to be able to cleanly delete a file. good to know for the future
15:24 bdperkin joined #gluster
15:24 zetheroo is that something that has to be configured aside from GlusterFS altogether?
15:24 zetheroo like with LVM or something .. ?
15:25 Nagilum what happens when there are already files on a mountpoint that is added as a brick?
15:25 JoeJulian That's one way.
15:25 semiosis zetheroo: you have many options.  you can make each disk a brick, simply by formatting (xfs recommended) and mounting, then giving the path in your volume create command
15:25 semiosis zetheroo: or you can use lvm/mdadm/raid/whatever to combine the disk block devices & then do the rest
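
A minimal sketch of turning one raw disk into a brick as semiosis describes; the device and mount point are assumptions, and the xfs inode-size flag is the commonly recommended setting for gluster bricks rather than anything mandated here:

    mkfs.xfs -i size=512 /dev/sdb
    mkdir -p /export/brick1
    mount /dev/sdb /export/brick1
    echo '/dev/sdb /export/brick1 xfs defaults 0 0' >> /etc/fstab
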
15:26 JoeJulian Nagilum: If it's the left-side of a replica pair, it'll be integrated into the volume.
15:26 zetheroo semiosis: but if I make each disk a brick, can the two bricks be made to be seen by the host OS as a single disk?
15:26 JasonG semiosis: is there any performance degradation by using gluster on LVM?
15:27 JoeJulian Only if you're keeping snapshots around.
15:27 semiosis zetheroo: glusterfs client mount does that
15:27 Nagilum JoeJulian: Distributed-Replicate
15:28 zetheroo semiosis: so glusterfs client can make it so that the two bricks are treated by the host OS as a single disk ...
15:28 semiosis zetheroo: however if you're only using one server and want to combine two disks into a network accessible shared storage, you could just use lvm + nfs.  glusterfs lets you combine many servers together
15:28 zetheroo I am trying to follow this ... http://www.gluster.org/community/documentation/index.php/QuickStart
15:28 glusterbot <http://goo.gl/OEzZn> (at www.gluster.org)
15:29 zetheroo well the picture is that we have 3 servers which each have 2 x 3TB HDD's for this GlusterFS setup ...
15:29 JoeJulian Nagilum: Still true. Left side of replica pairs get added. I'm not sure what will happen if their are conflicting files though.
15:29 zetheroo all three servers are KVM hosts and we want the KVM images to be running from the GlusterFS environment
15:30 Nagilum JoeJulian: k, thx
15:30 JoeJulian their? I need caffeine
15:30 zetheroo and we want an identical copy of all VM's on all three servers - so a replica going on there
15:31 semiosis zetheroo: use quorum
15:31 mooperd joined #gluster
15:32 zetheroo so I was thinking that we would want to have the 2 x 3TB HDD's on each server to act as a single 6TB HDD to the host OS (Ubuntu Server 12.04), and have all the running VM images on each servers pair of HDD's ...
15:32 zetheroo quorum?
15:32 semiosis zetheroo: sure you can do that
15:33 semiosis zetheroo: quorum is an option you can enable so that clients will only write if they can write to a majority of replica bricks.  this prevents split brain
15:34 zetheroo can this be enabled at any point in or after setup? ... or does it have to be taken into account at a particular stage?
15:35 zetheroo thing is that all of these 3 KVM servers are currently running VM's so I have to be careful :P
15:35 rwheeler_ joined #gluster
15:36 semiosis you can set options any time, or never
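
The quorum behaviour semiosis describes is a regular volume option; a hedged sketch, assuming a replicated volume named myvol (cluster.quorum-type is the 3.3 client-quorum setting, and "auto" requires a majority of the replica bricks to be reachable before writes are allowed):

    gluster volume set myvol cluster.quorum-type auto
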
15:36 Staples84 joined #gluster
15:36 zetheroo semiosis: do you know of any online How-Tos which deal with something similar to what I want to do here ... ?
15:36 zetheroo ok good
15:36 semiosis zetheroo: you should write the how-to!
15:37 zetheroo haha ... if I can get this to work I probably will ;)
15:37 semiosis no i dont know of any
15:37 semiosis awesome
15:37 zetheroo why do I get the feeling that I am one of very few to attempt this scenario!?
15:38 tjstansell zetheroo: i don't think that's true.  it's just that nobody goes to the trouble to document things after they do it for the rest of us :)
15:38 zetheroo so when I start actually setting this up do I first setup 2 of the servers to replicate and then add the 3rd server to the setup?
15:39 semiosis i'd start with all three
15:39 zetheroo does there have to be a master and slave ... or do they all copy from each other?
15:39 semiosis gluster volume create foo replica 3 brick brick brick
15:39 semiosis there is no master, glusterfs is completely distributed/decentralized
15:39 * semiosis loves saying that
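
Applied to zetheroo's layout (3 servers, 2 disks each, every disk formatted and mounted as a brick as sketched above), a hedged sketch of the create step; hostnames, brick paths and the volume name are illustrative, and with "replica 3" the bricks are grouped into replica sets of three in the order given:

    gluster peer probe server2
    gluster peer probe server3
    gluster volume create vmvol replica 3 \
        server1:/export/brick1 server2:/export/brick1 server3:/export/brick1 \
        server1:/export/brick2 server2:/export/brick2 server3:/export/brick2
    gluster volume start vmvol
    mount -t glusterfs server1:/vmvol /mnt/vmvol      # client mount point on each host
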
15:40 zetheroo ok
15:40 zetheroo so how does it know what to replicate? is it just going by the most recent data ?
15:40 wica zetheroo: The client, sends it to every brick
15:40 semiosis if you really want to know, read jdarcy's article about ,,(extended attributes)
15:40 glusterbot (#1) To read the extended attributes on the server: getfattr -m .  -d -e hex {filename}, or (#2) For more information on how GlusterFS uses extended attributes, see this article: http://goo.gl/Bf9Er
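
To see the attributes the linked article discusses, glusterbot's getfattr command can be run against a file on a brick (brick path illustrative); the trusted.gfid and trusted.afr.* keys are what the replicate translator uses to track pending changes:

    getfattr -m . -d -e hex /export/brick1/path/to/file
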
15:41 wica zetheroo: So the client is "in control" of the "replication"
15:41 zetheroo not sure I understand about the client ... since these are all going to be both client and server... no!?
15:41 wica in our case. The client will send the file to 3 bricks
15:42 wica internally, glusterfs checks if the replication is healthy
15:42 zetheroo so you actually set which server is the client and which is the server!?
15:42 wica Nop
15:42 tjstansell client/server are processes... not hosts
15:42 semiosis zetheroo: ,,(processes)
15:42 glusterbot zetheroo: the GlusterFS core uses three process names: glusterd (management daemon, one per server); glusterfsd (brick export daemon, one per brick); glusterfs (FUSE client, one per client mount point; also NFS daemon, one per server). There are also two auxiliary processes: gsyncd (for geo-replication) and glustershd (for automatic self-heal). See http://goo.gl/hJBvL for more information.
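
On any box you can see which of those roles it is currently playing just by listing the processes:

    ps -ef | grep gluster      # glusterd, one glusterfsd per brick, glusterfs for each mount / nfs / shd
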
15:43 zetheroo ok, I need to explain precisely what I mean to ask ... :)
15:44 wica Ahh a 42 :)
15:44 wica You need to know the question, to understand the answer :)
15:45 dblack joined #gluster
15:46 zetheroo I am running 2 servers ... server1 and server2 ... server1 has VMa running on its glusterfs bricks, and we need the image file of VMa to be replicated to the glusterfs bricks on server2. Now server2 also has a VM running on its glusterfs bricks called VMb ... and we need the image file of VMb to be replicated to the glusterfs bricks on server1 ....
15:46 zetheroo how is this accomplished? :D
15:48 lh joined #gluster
15:48 lh joined #gluster
15:49 andreask zetheroo: use a replicated volume ... or do I misunderstand your question?
15:51 zetheroo ok, how do you tell a brick to replicate itself to another brick? (or is this the wrong question!?) and in this way one has to be the "master" no!? or is it that the data is written to both bricks simultaneously?
15:51 semiosis no master
15:52 semiosis normally you start with empty bricks and add all data to the volume through a client mount point
15:52 semiosis it is possible to start with preloaded bricks in a pure replicate volume, but care must be taken.  it's easiest if one brick is preloaded and the other(s) is/are empty.
15:53 semiosis clients write to all replicas in sync
15:53 semiosis remember, client is a mount point (and process) not a type of machine
15:53 zetheroo if data is written to all replicated volumes simultaneously isn't that going to be heavy on the LAN?
15:53 semiosis you tell me
15:54 cw joined #gluster
15:54 semiosis my lan (ec2) handles it fine
15:54 zetheroo would that affect the performance of running VM's if all the data being written and read has to be done at the same time on numerous hosts?!
15:55 sahina joined #gluster
15:55 semiosis fast, cheap, reliable -- pick two
15:57 zetheroo can glusterfs be configured to use its own designated network links from one server to another - without going through a router or switch?
15:57 semiosis you can use ,,(hostnames) in glusterfs and map them to specific IPs on a private network
15:57 glusterbot Hostnames can be used instead of IPs for server (peer) addresses. To update an existing peer's address from IP to hostname, just probe it by name from any other peer. When creating a new pool, probe all other servers by name from the first, then probe the first by name from just one of the others.
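
A sketch of what that mapping could look like in /etc/hosts on each box, so peer probes and client mounts ride the dedicated interfaces (addresses and names are purely illustrative):

    10.10.10.1  gluster1
    10.10.10.2  gluster2
    10.10.10.3  gluster3

Probe the peers and create the volume using those names, and the traffic follows whatever those names resolve to.
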
15:59 vpshastry joined #gluster
16:01 portante joined #gluster
16:04 plarsen joined #gluster
16:10 zetheroo each server has two network interfaces that are unused ...
16:15 awheeler_ Anybody know why gluster won't create volumes with dots '.' in the name?  I'm using swauth for authentication and it wants to use account .auth, and when I issue the command to create the .auth volume, it just shows me the help.
16:18 Mo___ joined #gluster
16:21 awheeler_ It doesn't even say that it's an invalid volname.
16:22 zetheroo left #gluster
16:25 awheeler_ So, it looks like it's just not valid in cli-cmd-parser.c
16:34 bstansell joined #gluster
16:41 rgustafs joined #gluster
16:41 bennyturns joined #gluster
16:46 hagarth joined #gluster
16:48 jdarcy joined #gluster
16:49 awheeler_ what happened to gluster volume rename?
17:15 lpabon joined #gluster
17:24 manik joined #gluster
17:26 Humble_afk joined #gluster
17:27 Ryan_Lane joined #gluster
17:34 cyberbootje joined #gluster
17:35 transitdk joined #gluster
17:35 transitdk Hello all; a bunch of my files are now 0-length as seen through the mount. What can I do to get the data back if anything?
17:46 dblack joined #gluster
17:50 GabrieleV joined #gluster
17:50 _pol joined #gluster
17:57 rob__ joined #gluster
17:59 Humble_afk joined #gluster
17:59 tqrst JoeJulian: has "gluster espresso mocha size grande milk breve --nofoam" been fixed yet? The issue is still open on github.
17:59 tqrst clearly the devs do not take bug reports seriously enough
18:00 tqrst it has been opened for 10 months!
18:00 cw joined #gluster
18:00 tqrst (https://github.com/gluster/glusterfs/issues/8)
18:00 glusterbot Title: 3.3.0qa43 does it wrong... · Issue #8 · gluster/glusterfs · GitHub (at github.com)
18:11 glusterbot New news from newglusterbugs: [Bug 921215] Cannot create volumes with a . in the name <http://goo.gl/adxIy>
18:21 neofob left #gluster
18:26 mohankumar joined #gluster
18:28 hagarth JoeJulian: I do keep looking into the backport wiki page.
18:28 hagarth tqrst: what do you get as the output now?
18:31 tjstansell joined #gluster
18:35 mooperd joined #gluster
18:36 disarone joined #gluster
18:37 aliguori joined #gluster
18:40 zaitcev joined #gluster
19:00 nocko transitdk: You should see if the files are intact on the brick(s).
19:02 nocko If the file looks good on one of the bricks, follow JoeJulian's advice here: http://www.joejulian.name/blog/fixing-split-brain-with-glusterfs-33/
19:02 glusterbot <http://goo.gl/FzjC6> (at www.joejulian.name)
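
Before following the linked article, a quick way to compare the copies and decide which brick holds the good data (brick paths are illustrative; run on each replica server):

    ls -l /export/brick1/path/to/file
    md5sum /export/brick1/path/to/file
    getfattr -m . -d -e hex /export/brick1/path/to/file   # non-zero trusted.afr.* values indicate pending heals
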
19:38 jdarcy joined #gluster
19:46 robo joined #gluster
19:47 mooperd joined #gluster
19:49 Gilbs joined #gluster
19:52 Gilbs How do I upgrade to 3.3.1 on ubuntu if I installed the package glusterfs_3.3.0-1_amd64.deb, now that there are separate server/client packages in apt-get? (with the new ppa builds)
19:53 social__ joined #gluster
19:53 social__ Anyone seen something like  XATTROP <gfid:eea18e41-7a07-4f06-87c1-f7be3ec5dcc6> (eea18e41-7a07-4f06-87c1-f7be3ec5dcc6) ==> -1 (No such file or directory)
19:54 social__ starting with  [server3_1-fops.c:1848:server_xattrop_cbk]
19:55 social__ I get this on one node and on the other I can see E [posix.c:3197:do_xattrop] getxattr failed on /mnt/gluster/Staging/.glusterfs/ee/a1/eea18e41-7a07-4f06-87c1-f7be3ec5dcc6 while doing xattrop: Key:trusted.afr.Staging-client-2 (No such file or directory)
19:55 social__ this means replicas are broken?
19:58 NcA^_ joined #gluster
20:04 cw joined #gluster
20:20 rwheeler joined #gluster
20:45 hateya_ joined #gluster
20:56 mooperd joined #gluster
21:06 jcaputo joined #gluster
21:13 neofob joined #gluster
21:47 Troy joined #gluster
21:49 Staples84 joined #gluster
21:50 duerF joined #gluster
21:54 Humble joined #gluster
22:08 Troy File "/usr/libexec/glusterfs/python/syncdaemon/syncdutils.py", line 210, in twrap
22:09 Troy any thoughts guys
22:11 semiosis make a symlink
22:11 Troy how and where?
22:12 semiosis hmm maybe you should explain your problem first
22:13 Troy well ... I am trying to configure geo-replication
22:14 Troy gluster volume geo-replication gv0 backup:.............. config .. remote_gsyncd ...
22:14 Troy dont work .. says command failed
22:14 Troy I see that python error in the log
22:15 Troy it says status faulty
22:15 Troy but after some time .. status is OK
22:15 Humble joined #gluster
22:15 Troy I check gluster volume geo-rep ... status .. it says status OK
22:15 semiosis debian?
22:15 Troy redhat
22:16 semiosis hmm idk
22:16 Troy redhat 6.
22:16 Troy I try to stop it but error
22:16 semiosis thought maybe you were seeing bug 895656 but now i'm not sure about that
22:16 Troy I try to start it ... it says already started :(
22:16 glusterbot Bug http://goo.gl/ZNs3J unspecified, unspecified, 3.4.0, csaba, ON_QA , geo-replication problem (debian) [resource:194:logerr] Popen: ssh> bash: /usr/local/libexec/glusterfs/gsyncd: No such file or directory
22:17 semiosis check your logs, maybe the glusterd log (/var/log/glusterfs/etc-glusterfs-glusterd.log) or possibly others
22:17 semiosis i havent dug deep into geo-rep yet, maybe someone else is around who can help
22:18 Troy thanks semiosis
22:19 semiosis yw
22:19 Troy I am now trying to update redhat .. there were few updates available
22:19 Troy lets see if it helps ....
22:24 JoeJulian I'm confused. "status OK". What are you trying to fix?
22:26 hattenator joined #gluster
22:27 Troy I am having this strange error
22:27 Troy I try to configure geo-replication
22:27 Troy geo-replication config command says .. failed
22:27 JoeJulian I read what you typed already. It says failed, but later says OK
22:27 Troy and I then see this python error in log
22:27 Troy yes
22:28 Troy let me past log
22:28 JoeJulian Once it's OK, is it working?
22:29 Troy http://fpaste.org/zMDf/
22:29 glusterbot Title: Viewing Paste #284620 (at fpaste.org)
22:29 Troy this is what it says
22:29 Troy I can see updates in backup folder
22:29 Troy but now status updat in log file
22:31 Troy now = no
22:31 Troy typo
22:32 Troy Traceback (most recent call last): --- in log file .. this is where I tried to configure geo-repli ..
22:34 JoeJulian Hmm, the thread's getting an EBADF from somewhere.
22:35 JoeJulian Are there any logs client-side?
22:37 Troy http://fpaste.org/3fER/
22:37 glusterbot Title: Viewing Paste #284623 (at fpaste.org)
22:37 Troy here is what I can see at client side .. geo-replication-slave  .. log file
22:41 mooperd joined #gluster
22:42 Humble joined #gluster
22:43 Troy etc-glusterfs-glusterd.vol.log says [2013-03-13 15:09:59.677265] I [glusterd-handler.c:1885:glusterd_handle_getwd] 0-glusterd: Received getwd req
22:44 JoeJulian Do you have on of those around [2013-03-13 15:09:45.781866] ?
22:44 JoeJulian s/on/one/
22:44 Troy yes
22:44 glusterbot What JoeJulian meant to say was: Do you have one of those around [2013-03-13 15:09:45.781866] ?
22:44 Troy same time
22:47 mooperd joined #gluster
22:48 JoeJulian Anything just above that?
22:49 Troy the same
22:49 Troy [2013-03-13 15:03:42.399279] I [glusterd-handler.c:1885:glusterd_handle_getwd] 0-glusterd: Received getwd req
22:49 Troy [2013-03-13 15:03:14.044282] I [glusterd-handler.c:1885:glusterd_handle_getwd] 0-glusterd: Received getwd req
22:49 Troy nothing between [2013-03-13 15:03:42.399279] & [2013-03-13 15:09:59.677265]
22:50 JoeJulian See if this works: gluster system:: getwd
22:52 Troy it says /var/lib/glusterd
22:54 JoeJulian Seems like it's getting an EBADF reading something under that directory. Maybe do a find /var/lib/glusterd and see if anything throws an error?
22:55 Troy cd /var/lib/glusterd
22:55 Troy touch ping-pong.txt
22:56 Troy at root
22:56 Troy find -name ping-pong.txt
22:56 Troy result
22:56 Troy = found it
22:56 Troy no errors
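
For what it's worth, a name-only find won't surface read errors; a closer match to what JoeJulian seems to be after is to actually read every file under the working directory and see whether anything throws an error:

    find /var/lib/glusterd -type f -exec cat {} + > /dev/null
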
22:56 jdarcy joined #gluster
23:08 _benoit_ joined #gluster
23:19 Troy no errors, it listed the complete directories
23:42 Humble joined #gluster
