IRC log for #gluster, 2013-08-06


All times are shown in UTC.

Time Nick Message
00:27 asias joined #gluster
00:37 bala joined #gluster
00:40 msciciel_ joined #gluster
00:40 yinyin joined #gluster
00:51 msciciel joined #gluster
01:29 asias joined #gluster
01:45 yinyin joined #gluster
01:50 Shdwdrgn joined #gluster
02:02 Guest77900 joined #gluster
02:42 ninkotech__ joined #gluster
02:46 lalatenduM joined #gluster
02:47 jebba joined #gluster
02:47 semiosis joined #gluster
02:54 badone joined #gluster
02:58 kshlm joined #gluster
03:01 aravindavk joined #gluster
03:01 chirino joined #gluster
03:03 vpshastry joined #gluster
03:03 vpshastry left #gluster
03:08 bharata joined #gluster
03:15 Recruiter joined #gluster
03:18 shubhendu joined #gluster
03:25 bulde joined #gluster
03:29 shylesh joined #gluster
03:35 itisravi joined #gluster
03:40 Peanut_ joined #gluster
03:44 bulde joined #gluster
03:53 ndarshan joined #gluster
03:56 ndarshan joined #gluster
04:05 ndarshan joined #gluster
04:06 sgowda joined #gluster
04:13 rjoseph joined #gluster
04:19 ppai joined #gluster
04:22 The_Ugster blist
04:22 * The_Ugster facepalms
04:22 yinyin joined #gluster
04:24 georgeh|workstat joined #gluster
04:27 mohankumar joined #gluster
04:27 CheRi joined #gluster
04:29 ngoswami joined #gluster
04:36 kanagaraj joined #gluster
04:45 trifonov joined #gluster
04:48 kshlm joined #gluster
04:48 kshlm joined #gluster
04:49 vpshastry joined #gluster
04:50 vpshastry left #gluster
04:52 hagarth joined #gluster
05:00 raghu joined #gluster
05:01 raghu joined #gluster
05:01 RameshN joined #gluster
05:08 deepakcs joined #gluster
05:20 bala joined #gluster
05:26 lalatenduM joined #gluster
05:27 nightwalk joined #gluster
05:27 lalatenduM joined #gluster
05:27 bulde joined #gluster
05:27 yinyin joined #gluster
05:33 rjoseph joined #gluster
05:35 shruti joined #gluster
05:38 vijaykumar joined #gluster
05:39 glusterbot New news from resolvedglusterbugs: [Bug 927146] AFR changelog vs data ordering is not durable <http://goo.gl/jfrtO>
05:41 Kyreeth joined #gluster
05:48 Kyreeth Is it a known issue that the download.gluster.org servers do not have the glusterfs-libs and glusterfs-cli RPMs for CentOS?
05:51 samppah Kyreeth: i tried to upgrade last night and that's what it seems like
05:52 thomaslee joined #gluster
05:57 psharma joined #gluster
05:59 Kyreeth It also looks like there's a problem with the rpm source SPEC file on CentOS 6: rpmbuild --rebuild fails with error: line 234: Unknown tag:     %filter_provides_in /usr/lib64/glusterfs/3.4.0/
06:00 Kyreeth Per https://fedoraproject.org/wiki/Packaging:AutoProvidesAndRequiresFiltering, it _looks_ like the %filter_provides_in macro has been replaced.
06:00 glusterbot <http://goo.gl/g1yquP> (at fedoraproject.org)
06:00 andrewklau joined #gluster
06:00 Kyreeth Under CentOS 6.4, at least.
06:01 andrewklau I'm trying to install gluster 3.4 on Centos 6.4, getting some package errors. Any ideas?:
06:01 andrewklau Error: Package: glusterfs-3.4.0-7.el6.x86_64 (glusterfs-epel)
06:01 andrewklau Requires: libgfxdr.so.0()(64bit)
06:01 andrewklau Error: Package: glusterfs-3.4.0-7.el6.x86_64 (glusterfs-epel)
06:01 andrewklau Requires: libglusterfs.so.0()(64bit)
06:01 andrewklau Error: Package: glusterfs-3.4.0-7.el6.x86_64 (glusterfs-epel)
06:01 andrewklau Requires: libgfrpc.so.0()(64bit)
06:02 Kyreeth Andrew: Yes, I'm fighting with the same thing. It looks like a new pair of RPMs was added earlier today, but has not been mirrored to the distro site.
06:03 Kyreeth If you're on something other than CentOS/RHEL 6, you could try building from the source rpm.
06:03 andrewklau Kyreeth: Damn bad timing :(
06:03 aravindavk andrewklau, http://download.gluster.org/pub/gluster/glusterfs/3.4/3.4.0/CentOS/epel-6/x86_64/
06:03 glusterbot <http://goo.gl/u4SjcT> (at download.gluster.org)
06:03 Kyreeth aravindavk: glusterfs-libs and glusterfs-cli are missing.
06:04 aravindavk Kyreeth, ok
06:04 vimal joined #gluster
06:05 andrewklau Does anyone have any idea when it'll be updated?
06:09 Kyreeth Ah! The problem with the spec file is with the if/then/else test on CentOS, by the looks of things.
06:11 Kyreeth Andrew: If you're on RHEL6, it'll likely build cleanly from the source RPM.
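
A rough sketch of the source-RPM rebuild being suggested here; the .src.rpm filename is an assumption based on the package version in the errors above:

    # fetch the .src.rpm from download.gluster.org, then:
    rpmbuild --rebuild glusterfs-3.4.0-7.el6.src.rpm
    # rebuilt packages land under ~/rpmbuild/RPMS/x86_64/
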
06:12 kshlm kkeithley maintains them. the glusterfs-{libs,cli} packages were introduced in the latest build and probably his scripts missed uploading the new packages.
06:14 andrewklau Kyreeth: Centos 6.4 - I guess I'll be playing the waiting game.
06:20 andrewklau kshlm: do we know when they'll be updated? I need to make plans on how much time I've got left
06:25 andreask joined #gluster
06:25 Recruiter joined #gluster
06:26 samppah andreask:
06:26 samppah oops
06:27 samppah andrewklau: there seems to be old packages available at http://download.gluster.org/pub/gluster/glusterfs/LATEST/old/ if you need to get going quickly
06:27 glusterbot <http://goo.gl/uoLuKq> (at download.gluster.org)
06:27 andrewklau samppah: thanks for the link
06:51 hybrid512 joined #gluster
07:00 rastar joined #gluster
07:08 ekuric joined #gluster
07:09 guigui1 joined #gluster
07:10 eseyman joined #gluster
07:11 ndevos andrewklau, Kyreeth: you can also download the epel packages from http://koji.fedoraproject.org/koji/buildinfo?buildID=454327
07:11 glusterbot <http://goo.gl/1pffK4> (at koji.fedoraproject.org)
07:11 ndevos or, 'yum --enablerepo=epel-testing install glusterfs-server'
07:11 trifonov joined #gluster
07:12 andrewklau ndevos: I went and set it up using 3.4.0-3 instead, thanks.
07:13 ndevos Kyreeth: and the %filter_provides_in macro comes from the redhat-rpm-config package, which should be installed on any build-system
07:14 vpshastry joined #gluster
07:15 vpshastry left #gluster
07:16 thomasle_ joined #gluster
07:17 Kyreeth OK, line 232 of the spec file should read "%if ( 0%{?rhel} && 0%{?rhel} < 6 )"
07:19 Kyreeth That fixes building from source on this Centos 6.4 box.
07:20 Kyreeth Though it doesn't fix the file conflict.
07:20 vpshastry1 joined #gluster
07:22 Kyreeth ndevos: %filter_provides_in was replaced by %__provides_exclude_from in RHEL6
07:22 Kyreeth ndevos: if I'm reading https://fedoraproject.org/wiki/Packaging:AutoProvidesAndRequiresFiltering correctly.
07:22 glusterbot <http://goo.gl/cMDPaZ> (at fedoraproject.org)
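
A sketch of the version guard Kyreeth's fix implies, assuming the spec means to use the old redhat-rpm-config macros only where the rpm 4.9+ style is unavailable (the actual glusterfs.spec contents are not shown in this log):

    %if ( 0%{?rhel} && 0%{?rhel} < 6 )
    %filter_provides_in %{_libdir}/glusterfs/%{version}/
    %filter_setup
    %else
    %global __provides_exclude_from ^%{_libdir}/glusterfs/%{version}/.*$
    %endif
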
07:25 maco_ joined #gluster
07:27 Recruiter joined #gluster
07:28 Kyreeth ndevos: Ah, I see your point re redhat-rpm-config. I wonder if the source RPM should depend on it, then?
07:36 ndevos Kyreeth: no, redhat-rpm-config is one of the rpms that is assumed to be installed on systems that build rpms
07:36 ndevos like gcc, make and some others
07:37 Kyreeth Fair enough. The source RPM does list requires for gcc, make, etc. And it looks like my file conflict was coming from an old version of one of the packages.
07:38 ndevos hmm, a source rpm is not required to buildreq gcc etc... oh well
07:45 Kyreeth Both my built packages and the glusterfs-server package currently on download.gluster.org seem to not install the /etc/init.d/glusterfsd init file, causing the following error during package installation: error reading information on service glusterfsd: No such file or directory
07:48 andrewklau left #gluster
07:50 ndevos well, there is no /etc/init.d/glusterfsd as far as I know (there was one at one time, but that was many releases ago)
07:52 vpshastry1 left #gluster
07:52 Kyreeth It's included in the source RPM, and mentioned in the spec file along with /etc/init.d/glusterd, which does wind up getting installed.
07:55 ndevos seems that this topic was never discussed to a conclusion... the glusterfsd service would only stop (not start) glusterfsd processes. Whether that is needed or not depends on the point of view :-/
07:55 SynchroM joined #gluster
07:56 Kyreeth OK. The package install scripts do try to activate it, though, resulting in an error:
07:56 Kyreeth + /sbin/chkconfig --add glusterfsd
07:56 Kyreeth error reading information on service glusterfsd: No such file or directory
07:58 dobber_ joined #gluster
07:58 trifonov joined #gluster
07:59 puebele1 joined #gluster
08:00 andrewklau joined #gluster
08:01 andrewklau Let's say I want to eventually have a distributed replicate volume, but I only have two bricks to deploy now. What is my best option?
08:03 ndevos Kyreeth: i am not sure if someone filed a bug for that already, if you can not find one, maybe you can file it?
08:04 ndevos andrewklau: pick one of replicate or distribute now, and when you have two more bricks you can add the bricks and change the layout (one command)
08:04 andrewklau ndevos: Thanks a lot
08:05 ndevos you're welcome :)
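
A sketch of the two-step growth ndevos describes, with hypothetical volume and host names: start replicated, then extend to distributed-replicate when the extra bricks arrive:

    gluster volume create myvol replica 2 server1:/brick server2:/brick
    gluster volume start myvol
    # later, with two more bricks (one command changes the layout):
    gluster volume add-brick myvol server3:/brick server4:/brick
    gluster volume rebalance myvol start
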
08:14 eseyman I'm creating /exports/audio on my glusterfs server and mounting it on 4 clients. When 1 client creates a file, the 3 others can see it but it doesn't appear in /exports/audio on the server.
08:14 eseyman is this normal or do I have to mount the volume on the server as well ?
08:14 eseyman glusterfs 3.2.7, btw
08:19 trifonov joined #gluster
08:19 puebele joined #gluster
08:28 ujjain joined #gluster
08:36 piotrektt joined #gluster
08:36 atrius joined #gluster
08:42 bharata-rao joined #gluster
08:42 ndevos 7~/win 3
08:42 ndevos glusterbot: meh
08:42 glusterbot ndevos: I'm not happy about it either
08:44 andrewk joined #gluster
08:48 bulde joined #gluster
08:48 shylesh joined #gluster
08:53 andrewk_ joined #gluster
08:55 rwheeler joined #gluster
09:03 shylesh joined #gluster
09:04 vimal joined #gluster
09:15 thommy_ka joined #gluster
09:17 meghanam joined #gluster
09:18 meghanam joined #gluster
09:24 andrewke_ joined #gluster
09:24 andrewkember joined #gluster
09:26 rastar left #gluster
09:27 rastar joined #gluster
09:54 satheesh joined #gluster
09:55 dusmant joined #gluster
09:59 sgowda joined #gluster
10:00 shylesh joined #gluster
10:01 andreask joined #gluster
10:02 RameshN joined #gluster
10:06 andrewklau left #gluster
10:15 aravindavk joined #gluster
10:27 kkeithley1 joined #gluster
10:28 andrewkember Hello folks. Has anybody else noticed that yesterday's Gluster release won't even install on RHEL 6?
10:30 _br_ joined #gluster
10:30 andrewkember (http://pastebin.com/vz8wYeCv)
10:31 glusterbot Please use http://fpaste.org or http://dpaste.org . pb has too many ads. Say @paste in channel for info about paste utils.
10:33 andrewkember (Sorry - http://fpaste.org/30325/)
10:33 glusterbot Title: #30325 Fedora Project Pastebin (at fpaste.org)
10:36 ndevos andrewkember: that is missing the glusterfs-libs package, what repo are you using?
10:37 maco_ joined #gluster
10:37 edward2 joined #gluster
10:37 andrewkember ndevos: Thanks for taking a look. That uses http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/glusterfs-epel.repo as per the Gluster docs
10:38 glusterbot <http://goo.gl/5beCt> (at download.gluster.org)
10:39 andrewkember ndevos: glusterfs-libs refuses to install with the same error
10:39 ndevos andrewkember: I suggest you download the glusterfs-libs (x86_64) from the build directly and install that too: http://koji.fedoraproject.org/koji/buildinfo?buildID=454327
10:39 glusterbot <http://goo.gl/1pffK4> (at koji.fedoraproject.org)
10:39 andrewkember ndevos: Thanks - I'll try that right now
10:40 ndevos kkeithley1: I'm not sure how you sync rpms to http://download.gluster.org/pub/gluster/glusterfs/3.4/3.4.0/EPEL.repo/epel-6/x86_64/ , but -cli and -libs are missing :-/
10:40 glusterbot <http://goo.gl/ssLmF9> (at download.gluster.org)
10:41 kkeithley1 I'm fixing that now
10:42 ndevos thanks!
10:42 thomas___ joined #gluster
10:43 andrewkember ndevos: Many thanks - Gluster client now installed correctly.
10:43 andrewkember kkeithley_: Many thanks
10:50 kkeithley_ repo on download.gluster.org now has the -cli and -libs RPMs
10:51 ababu joined #gluster
10:53 tziOm joined #gluster
10:53 vimal joined #gluster
10:56 andrewkember kkeithley_: Crumbs - that was fast. Thanks!
11:08 mooperd joined #gluster
11:09 kkeithley_ Yeah, I have a script. But I have to remember to update the script. Note to self, don't do things late at night when you're tired.
11:11 kkeithley_ and I'll be putting up 3.4.0-8 shortly, to fix another thing ndevos pointed out.
11:17 rastar joined #gluster
11:17 CheRi joined #gluster
11:22 hagarth joined #gluster
11:29 chirino joined #gluster
11:33 manik joined #gluster
11:33 manik joined #gluster
11:38 eseyman joined #gluster
11:49 andrewkember kkeithley_: If you don't mind me asking, will my glusterfs-libs installed as per ndevos comment with rpm -i [rpmfile] be updated by yum as needed with the new release? (I have the glusterfs latest repo configured)
11:50 vimal joined #gluster
11:52 maco_ joined #gluster
11:52 B21956 joined #gluster
11:53 rcheleguini joined #gluster
11:56 kkeithley_ yes
11:56 kkeithley_ andrewkember: ^^^
11:57 andrewkember kkeithley_: Many thanks - I appreciate your time and work on this.
11:57 kkeithley_ yw
11:59 itisravi_ joined #gluster
12:01 kkeithley_ rpm -i [rpmfile] is fine. I generally use `yum localinstall [rpmfile]` instead. Then yum won't whine about the rpm database being modified outside of yum
12:03 maco_ Hi - gluster newbie here: What is the current state of support for gluster on Mac OS X?  I see a few references to OSX but no dmg or docs for getting started
12:04 elyograg joined #gluster
12:07 kkeithley_ Current source doesn't build on OS X.
12:12 shruti joined #gluster
12:17 manik joined #gluster
12:17 maco_ kkeithley_: Ok, are there any workable options for OS X?   http://www.gluster.org/community/documentation/index.php/Using_Gluster_with_OSX suggests a previous beta needs a few workarounds but functions
12:17 glusterbot <http://goo.gl/gT2aoE> (at www.gluster.org)
12:21 kkeithley_ I can't imagine how old that beta must be — it hasn't ever worked on OS X during my time working on it, currently 2+ years. There are a couple people who tinker with getting it to work on OS X again, but there's been no progress that I'm aware of.
12:23 dusmant joined #gluster
12:26 maco_ Thanks - Guess I need to look at other options which is a pity since gluster was looking like a neat solution!
12:26 shubhendu joined #gluster
12:28 kanagaraj joined #gluster
12:30 stickyboy !repos
12:31 stickyboy Err.
12:31 samppah @repo
12:31 glusterbot samppah: I do not know about 'repo', but I do know about these similar topics: 'git repo', 'ppa repo', 'repos', 'repository', 'yum repo', 'yum3.3 repo', 'yum33 repo'
12:33 stickyboy @yum33 repo
12:33 glusterbot stickyboy: kkeithley's fedorapeople.org yum repository has 32- and 64-bit glusterfs 3.3 packages for RHEL/Fedora/Centos distributions: http://goo.gl/EyoCw
12:34 stickyboy Annnnnd kkeithley's is retired, I know that much. :D
12:34 stickyboy @yum repo
12:34 glusterbot stickyboy: The official community glusterfs packages for RHEL (including CentOS, SL, etc), Fedora 17 and earlier, and Fedora 18 arm/armhfp are available at http://goo.gl/s077x. The official community glusterfs packages for Fedora 18 and later are in the Fedora yum updates repository.
12:34 kkeithley_ @forget yum33
12:34 glusterbot kkeithley_: Error: There is no such factoid.
12:34 kkeithley_ forget yum 33 repo
12:34 edward1 joined #gluster
12:35 kkeithley_ @forget yum33 repo
12:35 glusterbot kkeithley_: The operation succeeded.
12:35 kkeithley_ @yum3.3 repo
12:35 glusterbot kkeithley_: kkeithley's fedorapeople.org yum repository has 32- and 64-bit glusterfs 3.3 packages for RHEL/Fedora/Centos distributions: http://goo.gl/EyoCw
12:35 kkeithley_ @forget yum3.3 repo
12:35 glusterbot kkeithley_: The operation succeeded.
12:37 rwheeler joined #gluster
12:38 stickyboy @yum repo
12:38 glusterbot stickyboy: The official community glusterfs packages for RHEL (including CentOS, SL, etc), Fedora 17 and earlier, and Fedora 18 arm/armhfp are available at http://goo.gl/s077x. The official community glusterfs packages for Fedora 18 and later are in the Fedora yum updates repository.
12:38 stickyboy http://goo.gl/s077x is likely incorrect.
12:38 glusterbot Title: Index of /pub/gluster/glusterfs/repos/YUM (at goo.gl)
12:39 stickyboy As it's not the latest 3.3.x or the latest 3.4.x :D
12:39 sgowda joined #gluster
12:39 kkeithley_ looks correct to me
12:39 samppah @latest
12:39 glusterbot samppah: The latest version is available at http://goo.gl/zO0Fa . There is a .repo file for yum or see @ppa for ubuntu.
12:39 stickyboy http://goo.gl/s077x points to a repo using 3.3.1.
12:39 glusterbot Title: Index of /pub/gluster/glusterfs/repos/YUM (at goo.gl)
12:39 stickyboy Surely that's not correct.
12:40 stickyboy It's kinda mind boggling to navigate those directories, though. :D
12:42 piotrektt joined #gluster
12:46 recidive joined #gluster
12:46 kkeithley_ the glusterfs-3.3 is borked. the -3.4 was okay. It seems like I keep untangling these. Maybe someone is going behind my back and tangling them up again?
12:46 arusso joined #gluster
12:47 stickyboy joined #gluster
12:48 awheeler joined #gluster
12:56 jclift_ joined #gluster
13:00 bennyturns joined #gluster
13:01 stickyboy kkeithley_: Are you messing around with the directory structure? :)
13:09 dusmant joined #gluster
13:14 awheeler joined #gluster
13:16 awheele__ joined #gluster
13:17 awheeler joined #gluster
13:18 awheele__ joined #gluster
13:21 shubhendu joined #gluster
13:24 andreask joined #gluster
13:24 CheRi joined #gluster
13:26 kanagaraj joined #gluster
13:30 zaitcev joined #gluster
13:32 piotrektt joined #gluster
13:38 aliguori joined #gluster
13:39 shylesh joined #gluster
13:46 awheeler joined #gluster
13:46 manik joined #gluster
13:47 jebba joined #gluster
13:49 piotrektt joined #gluster
13:54 Norky joined #gluster
14:01 bugs_ joined #gluster
14:13 kaptk2 joined #gluster
14:14 tzero joined #gluster
14:16 kuroneko4891 joined #gluster
14:17 premera joined #gluster
14:19 piotrektt joined #gluster
14:24 piotrektt joined #gluster
14:25 chirino joined #gluster
14:27 trifonov joined #gluster
14:28 hagarth joined #gluster
14:50 _pol joined #gluster
14:56 bala joined #gluster
15:00 lpabon joined #gluster
15:07 elyograg i've got an install on centos 6.  two storage servers with bricks, and two servers with a shared IP that provide NFS and Samba access to gluster for the rest of the network.  those two are gluster peers, but have no bricks.
15:08 bulde joined #gluster
15:09 georgeh|workstat joined #gluster
15:09 elyograg it's running 3.3.1 from kkeithley's repo.  if everything's in order, is an in-place upgrade to 3.4 with no downtime something I can be fairly well assured will go correctly? if so, what should I check to be sure there's no issues before starting?
15:11 zetheroo1 joined #gluster
15:11 eseyman joined #gluster
15:12 zetheroo1 how does one get gluster to mount on boot? Placing the mount string into fstab doesn't work ...
15:12 neofob joined #gluster
15:13 chirino joined #gluster
15:14 kkeithley_ elyograg: I think it should. Other people have done it with no problems.
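
A minimal sketch of that in-place, one-server-at-a-time upgrade, assuming healthy replicas and a volume named myvol (the exact package list is an assumption):

    # on each server in turn:
    gluster volume heal myvol info   # confirm nothing is pending first
    service glusterd stop            # brick (glusterfsd) processes may need stopping separately
    yum update glusterfs\*
    service glusterd start
    # let self-heal finish before moving to the next server
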
15:20 andrewkember zetheroo1: It works for me (RHEL6). Does your mount work from fstab if you mount it manually (sudo mount /mnt/[glustermount])?
15:20 zetheroo1 it mounts after boot if I push it manually
15:21 zetheroo1 I am now trying an init script solution: http://www.unix-heaven.org/glusterfs-fails-to-mount-after-reboot
15:21 andrewkember zetheroo1: Okay. Are you using NFS, or the gluster client?
15:21 glusterbot <http://goo.gl/3hhfya> (at www.unix-heaven.org)
15:21 sprachgenerator joined #gluster
15:22 zetheroo1 afaik I am using the gluster client
15:23 zetheroo1 never setup anything to do with NFS on the hosts so I don't think I am using that
15:24 ndevos zetheroo1: have you set the _netdev option (for fedora/rhel like distributions)?
15:24 andrewkember You can tell by looking at your mount line in /etc/fstab. if you have glusterfs as your filesystem, it's the gluster client.
15:24 zetheroo1 that option (as per the link I posted) does not seem to work in Ubuntu and Debian
15:25 zetheroo1 our hosts are running Ubuntu 12.04.2
15:26 andrewkember If Ubuntu uses NFS v4, that could provide a reason, if not a solution: https://help.ubuntu.com/community/NFSv4Howto
15:26 glusterbot Title: NFSv4Howto - Community Ubuntu Documentation (at help.ubuntu.com)
15:26 andrewkember Sorry - more specific link: https://help.ubuntu.com/community/NFSv4Howto#NFSv4_Client
15:26 glusterbot <http://goo.gl/nUoabU> (at help.ubuntu.com)
15:27 zetheroo1 currently I am using this command to mount the gluster:
15:27 zetheroo1 mount -t glusterfs neptune:/gluster /mnt/gluster
15:28 ndevos _netdev is indeed not available for debian and the likes, I dont know what their equivalent is (but I suspect semiosis knows)
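
For reference, the fstab line under discussion would look roughly like this; _netdev is honoured on RHEL-style distributions, while on Ubuntu 12.04 the nobootwait option (or a late mount from an init script, as in the link above) is the usual workaround:

    neptune:/gluster  /mnt/gluster  glusterfs  defaults,_netdev,nobootwait  0  0
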
15:30 zetheroo1 left #gluster
15:33 dewey joined #gluster
15:35 ultrabizweb joined #gluster
15:36 vimal joined #gluster
15:42 dewey Can anyone point me to a web resource discussing someone's successful deployment of GlusterFS as an HDFS replacement?
15:42 bala joined #gluster
15:51 lalatenduM joined #gluster
15:53 ntt_ joined #gluster
15:53 ntt_ hi
15:53 glusterbot ntt_: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
15:53 awheeler joined #gluster
15:53 Amanda joined #gluster
15:55 ntt_ I have a question about networking in gluster. I have a glusterfs volume replicated over 2 servers: 10.0.0.1 and 10.0.0.2. I would like all replication traffic to go over this network. My servers have another NIC with ips 192.168.0.1 and 192.168.0.2. How can i reach my glusterfs from 192.168.0.* if i use 10.0.0.* as the gluster network?
15:56 ntt_ is there a canonical approach to this problem?
15:58 manik joined #gluster
16:05 aknapp joined #gluster
16:06 aknapp Can anyone recommend a good site for tuning Gluster parameters?
16:06 Hoggins joined #gluster
16:08 Hoggins hi folks ! I have a strange issue after having upgraded my nodes and my clients from 3.3 to 3.4. Now, whenever I take down one of my nodes (replica 2), the mounts hang on the clients... Did I miss something ?
16:10 redragon_ so has anyone used glusterfs with RHCS?
16:11 semiosis ntt_: when using glusterfs native fuse clients the replication is done by the clients themselves
16:11 semiosis ntt_: if you use nfs clients then you can route repl traffic between the servers over specific interfaces using ,,(split horizon) dns
16:11 glusterbot semiosis: Error: No factoid matches that key.
16:11 semiosis glusterbot: meh
16:11 glusterbot semiosis: I'm not happy about it either
16:12 jthorne joined #gluster
16:17 semiosis Hoggins: check the client log file.  feel free to ,,(paste) it and give us the link
16:17 glusterbot Hoggins: For RPM based distros you can yum install fpaste, for debian and ubuntu it's pastebinit. Then you can easily pipe command output to [f] paste [binit] and it'll give you a URL.
16:17 ntt_ semiosis: sorry, i don't understand. I want to access my glusterfs from another subnet (different from the 10.0.0.* used in the trusted pool). Moreover, if i write a file from client 1 (the "external" subnet) to my glusterfs, i would like the replication traffic to use 10.0.0.* (the subnet used to create the trusted pool) rather than 192.168.0.*. Maybe this is automatic behavior.
16:17 semiosis aknapp: if you want to know what options are available for tuning, see command 'gluster volume set help' and also the ,,(undocumented options)
16:17 glusterbot aknapp: Undocumented options for 3.4: http://goo.gl/Lkekw
16:18 semiosis aknapp: but if you want a site that tells you what values to set, i dont know of one
16:19 manik joined #gluster
16:20 semiosis ntt_: i gave you the best answer i know of.  repeating your question won't change that, sorry.
16:20 semiosis ntt_: fuse clients do replication themselves... they send traffic to all replicas at the same time
16:21 semiosis bbiab
16:22 Technicool joined #gluster
16:22 bulde joined #gluster
16:22 ntt_ semiosis: but how can i connect to 10.0.0.1 from 192.168.0.1? Using /etc/hosts?
16:22 aknapp semiosis: thanks for the heads up.
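
One concrete form of the split-horizon DNS semiosis mentioned is /etc/hosts entries that resolve the same brick hostnames differently on each side, so server-to-server traffic stays on the back-end network; this assumes the bricks were defined by hostname rather than by IP (names below are hypothetical):

    # /etc/hosts on the servers -- replication/self-heal over the back-end
    10.0.0.1     gluster1
    10.0.0.2     gluster2

    # /etc/hosts on the clients -- access over the front-end
    192.168.0.1  gluster1
    192.168.0.2  gluster2
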
16:26 trifonov joined #gluster
16:27 redragon_ might I make a suggestion that you add a configuration file for the startup script of glusterd so that you can define mount points to check for before starting glusterd
16:41 Mo_ joined #gluster
16:41 _pol joined #gluster
16:42 Hoggins thanks semiosis : here is the excerpt of what is happening on the client. The very last few lines are not relevant : http://ur1.ca/exbsp
16:42 glusterbot Title: #30409 Fedora Project Pastebin (at ur1.ca)
16:44 chirino joined #gluster
16:52 Hoggins semiosis : I note a surprising detail in the log I pasted. It says "Using Program GlusterFS 3.3, Num (1298437), Version (330)" but the installed version is 3.4.0
16:52 Hoggins could this be related ?
16:53 semiosis maybe
16:53 semiosis have you unmounted/remounted the client since upgrade?
16:53 manik joined #gluster
16:54 Hoggins yes I did. Also performed a reboot
16:57 harish joined #gluster
16:59 Hoggins what I don't understand is why it does not "switch" to the other node when the first one goes down... did something change in 3.4 about this ?
17:04 awheele__ joined #gluster
17:04 recidive joined #gluster
17:05 SpacewalkN00b joined #gluster
17:07 Hoggins oh, and it does this whatever the node I bring down
17:09 semiosis Hoggins: ,,(pasteinfo)
17:09 glusterbot Hoggins: Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
17:09 bulde joined #gluster
17:10 lpabon_ joined #gluster
17:10 Hoggins Here is the gluster volume info for this one : http://ur1.ca/excd1
17:10 glusterbot Title: #30414 Fedora Project Pastebin (at ur1.ca)
17:11 Hoggins also, volume heal info is nominal on both nodes
17:12 lpabon joined #gluster
17:14 tjikkun_work joined #gluster
17:14 thomasle_ joined #gluster
17:15 JoeJulian Does it do that when you bring down the client node?
17:16 _pol joined #gluster
17:16 Hoggins JoeJulian, I'm not sure I understand : is it the client that mounts the gluster filesystem you are talking about ?
17:17 JoeJulian I'm being facetious. You said, "it does this whatever the node I bring down". :P
17:17 Hoggins he he, indeed
17:17 Hoggins I have two server nodes, so it happens on both of these ones
17:24 JoeJulian I'm kind-of fastidious about the use of the word "node" since it's an ambiguous term meaning ANY endpoint. Even applying context, it can mean server, client, slave... whatever.
17:24 JoeJulian Hoggins: When you say it "freezes" is that freeze for the default 42 seconds?
17:27 manik joined #gluster
17:28 Hoggins JoeJulian, you're right. I should say "servers", because that's what I bring down to perform tests...
17:29 elyograg joined #gluster
17:32 Hoggins and yes, it hangs for those 42 seconds... actually, I'm surprised, as I'm convinced that bringing down one of the servers was absolutely transparent before the upgrade... could I be completely wrong ?
17:34 bulde joined #gluster
17:36 JoeJulian It is transparent if you shut down through the shutdown process which uses SIGTERM. If you interrupt the network connection or SIGKILL, the client won't know that the server is administratively down and will continue trying to maintain the connection for 42 seconds.
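
The 42 seconds referred to here is the network.ping-timeout volume option; it can be lowered if faster failover matters more than reconnection churn, though very low values are generally discouraged (volume name is hypothetical):

    gluster volume set myvol network.ping-timeout 10
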
17:41 Hoggins okay, that makes sense... but why do I have the feeling that simply shutting down my server was transparent before ? Also, as I'm using systemd, shutting down the server should send SIGTERM to all its services, including glusterd... that's odd.
17:42 JoeJulian Yes, it should. Perhaps the network is getting stopped first? That sounds like a systemd service bug.
17:43 Kyreeth joined #gluster
17:43 Hoggins yes, I should probably look into this... I'm encrypting connections between servers, and between servers and clients, with IPSec. So according to what you say, I suspect that the link is broken before the service is properly shut down
17:44 JoeJulian That sounds likely.
17:48 Hoggins okay, it's confirmed : if I properly shut down the services, and then shut down the server, everything is fine
17:49 Hoggins now I just have to tweak the shutdown process to make it cleaner
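
One way to make that cleaner under systemd: units stop in the reverse of their start order, so declaring that glusterd starts after the IPSec service forces it to stop before the tunnel disappears. A sketch, with an assumed drop-in path and service name:

    # /etc/systemd/system/glusterd.service.d/order.conf
    [Unit]
    After=network.target ipsec.service

    # then: systemctl daemon-reload
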
17:49 Hoggins once again, thanks a lot for this precious help !
17:50 JoeJulian You're welcome. :D
17:50 Hoggins bye !
17:51 hagarth JoeJulian:  that was a good bit of troubleshooting!
17:52 awheeler joined #gluster
17:52 JoeJulian Thanks hagarth. Hoggins did all the hard work though. I just told him how it's supposed to work.
17:52 JoeJulian (which is my evil plan. Get people to solve their own problems!)
17:55 aknapp_ joined #gluster
17:56 hagarth JoeJulian: it is not a very easy thing to let people know where exactly to look for :)
18:01 trifonov joined #gluster
18:02 bulde joined #gluster
18:25 andreask joined #gluster
18:36 bulde joined #gluster
18:37 gkleiman joined #gluster
18:45 lpabon_ joined #gluster
18:59 SynchroM_ joined #gluster
19:02 nightwalk joined #gluster
19:13 daMaestro joined #gluster
19:20 _pol_ joined #gluster
19:27 Recruiter joined #gluster
19:33 awheeler joined #gluster
19:34 awheele__ joined #gluster
19:43 lpabon joined #gluster
19:46 lpabon joined #gluster
19:55 andreask joined #gluster
20:04 awheeler joined #gluster
20:07 spligak Is the read cache only primed on first read? Then that file is held in memory where? I would imagine server-side so all of the verification can still occur against all replicas?
20:11 awheele__ joined #gluster
20:15 spligak I guess also important to know - when do performance tuning settings take effect? On volume restart? Or on volume re-mount?
20:18 mattf joined #gluster
20:21 semiosis config changes (made with gluster volume set command) take effect immediately
20:21 Guest77900 joined #gluster
20:22 semiosis spligak: ^
20:22 spligak semiosis, good to know. though I will say when I unmounted a volume and re-mounted it with new volume settings which exceeded a local memory limitation (for cache), it failed to mount.
20:22 spligak does that make sense?
20:23 semiosis i understand what you said
20:23 statix_ joined #gluster
20:23 spligak it's not super important - it appeared to be a physical memory limitation I had overshot
20:24 semiosis are you running a benchmark with each change in config?  or just blindly changing things?
20:25 semiosis lots of people ask about tuning but very few (in my experience) actually run tests to see what benefits they get.
20:29 spligak semiosis, I run multiple, non-trivial sysbench and/or iozone benchmarks after every single change to generate enough data to make some sort of determination. The time I spend with a single benchmark varies. My knowledge of local file system tuning is healthy - trying to close the gap in my knowledge of distributed file systems.
20:30 spligak That's why I'm interested to know when the cache is primed. I wasn't seeing any change in performance, so I needed to verify my assumptions.
20:31 spligak The re-mount error caught me off guard, too. So I needed to back off that as well.
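
An example of the apply-and-verify step semiosis describes; settings take effect immediately, with no restart or remount (volume name and value are hypothetical):

    gluster volume set myvol performance.cache-size 256MB   # keep below available client memory
    gluster volume info myvol                               # the option shows up right away
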
20:33 semiosis cool!  please share your results with us
20:37 spligak semiosis, I hope to. I've done a few entry-level presentations on local fs tuning as it applies to mysql variants and custom workflows: http://www.slideshare.net/jasonajohnson/filesystems-24515657 -- might be interesting. a little lacking without me actually presenting it, however :)
20:37 aknapp joined #gluster
20:37 glusterbot <http://goo.gl/Gg10wf> (at www.slideshare.net)
20:38 spligak I'd love to do a similar write-up and presentation on gluster and ceph.
20:56 andreask on gluster 3.3.0 does a rebalance (without force) not move data to a brick that is already filled beyond the "cluster.min-free-disk"?
21:05 badone joined #gluster
21:11 MugginsM joined #gluster
21:20 JoeJulian andreask: Correct. Without force rebalance will not move a file from a less-full brick to a more-full one.
21:21 andreask JoeJulian: I see here a nearly full brick but a rebalance after adding local bricks only moves a few percent of files
21:22 JoeJulian Did you check the log?
21:30 chirino joined #gluster
21:37 andreask I will try this with force, when I have access to the systems again
22:00 tjstansell1 joined #gluster
22:03 tjstansell1 hey folks. i rebuilt a server in a 2 node replica set, preserving the previous uuid of the host ... after peer probe, it's stuck in the "Sent and Received peer request (Connected)" state.  glusterd didn't want to start up any of the bricks, so it must not have completed the peer probe well.  does anyone know how to get this into the "Peer in Cluster" state?
22:04 JoeJulian tjstansell1: check the glusterd logs. Probably something about mismatched volume definitions?
22:05 tjstansell1 when the system first came up, it didn't want to pull the volume config, so i ran gluster volume sync <peer> all to pull it...
22:05 JoeJulian well that should have worked. Try restarting both glusterd?
22:10 tjstansell1 doesn't seem to change anything.
22:12 tjstansell1 and i don't see anything in the logs that really stands out...
22:16 JoeJulian wanna share?
22:18 aknapp joined #gluster
22:18 tjstansell1 well, i ended up stopping glusterd on both nodes, editing the /var/lib/glusterd/peers/<uuid> and changing state=5 to state=3
22:18 tjstansell1 did that on both nodes and started gluster and now it's happy.
22:18 tjstansell1 started brick processes automatically and volume status says things are happy
22:19 tjstansell1 it's like it got stuck part way in the peer exchange and couldn't figure out how to start over.
22:19 tjstansell1 i'm willing to share logs if you want to see if you can figure out what might have happened...
22:20 chirino joined #gluster
22:20 JoeJulian Yes and no... I do have work to do...
22:22 tjstansell1 once i get things happy i'll be kickstarting the host again to see if i can replicate it and figure out how to address it ...
22:23 tjstansell1 all of this was set up and working with our 3.3.2 world and this is the first time i've done the whole kickstart after upgrading to 3.4.0 so maybe something is just "different"
22:34 fidevo joined #gluster
22:40 SynchroM joined #gluster
22:43 semiosis tjstansell1: so you "forced" the peers into a state... does the gluster command work now?  i.e. are the peers truly in a good state?
22:43 tjstansell1 they seem to be fine.
22:43 tjstansell1 heal commands work...
22:44 _pol joined #gluster
22:44 tjstansell1 volume info/volume status works.
22:44 semiosis when i run into that problem i just delete the file(s) from /var/lib/glusterd/peers on relevant hosts, restart glusterds, and try again
22:44 semiosis how about volume stop/start/set commands?
22:44 semiosis can you, for example, change the log level on a volume?  that's an innocuous test
22:44 tjstansell1 i'll try volume set ... since i don't want to stop any of it ...
22:44 semiosis clearly :)
22:46 tjstansell1 yeah, volume set worked from "rebuilt" peer and volume reset worked from other peer.
22:46 tjstansell1 so they seem fine
22:47 semiosis cool
22:47 tjstansell1 but you're saying preferred is to just nuke the /var/lib/glusterd/peers/<uuid> file ...
22:47 elyograg left #gluster
22:47 tjstansell1 and restart glusterd
22:47 tjstansell1 or stop/nuke/start
22:47 semiosis whenever possible, i prefer to delete a file that can be regenerated, instead of modifying it in place
22:48 semiosis but that's just me
22:48 tjstansell1 sure. but i don't understand what can be regenerated and what can't :)
22:48 tjstansell1 now i know
22:48 * semiosis followed the directions on the shampoo bottle, eventually ran out of shampoo
22:49 tjstansell1 until your cap gets plugged up :)
22:50 semiosis pretty much everything in /var/lib/glusterd can be regenerated -- as long as there are other servers in good state from which to fetch the info
22:50 semiosis the only thing that can't be regenerated is the glusterd.info (local server's UUID) file
22:50 chirino joined #gluster
22:51 semiosis that must be what the rest of the pool thinks it will be
22:51 semiosis as long as that is correct for the servers hostname then all the rest (vols, peers, ...) can be fetched from another peer
22:51 semiosis ,,(replace)
22:51 glusterbot Useful links for replacing a failed server... if replacement server has different hostname: http://goo.gl/nIS6z ... or if replacement server has same hostname:
22:51 glusterbot http://goo.gl/rem8L
22:51 semiosis 2nd link is sorta related ^
22:52 tjstansell1 right. and that second one is what i used to base our kickstart scripts off of to get a node back into the cluster automatically
22:53 tjstansell1 which worked fine on 3.3.2... never had an issue.
22:54 glusterbot New news from resolvedglusterbugs: [Bug 810079] Handle failures of open fd migration from old graph to new graph <http://goo.gl/Fl7fh> || [Bug 767276] object-storage: using https for swift-plugin <http://goo.gl/azgheQ> || [Bug 768906] object-storage: swift-plugin rpms should copy configuration files under /etc/swift <http://goo.gl/C3RWd> || [Bug 768941] object-storage:parameter updation inside /etc/swift/swift.conf <http://goo.gl/vf9od> || [
22:59 semiosis tjstansell1: i have encountered strange problems following that procedure and when that happens i just start over
22:59 semiosis never have figured out why it doesnt work ~10% of the time
23:00 tjstansell1 in my script i do the  peer probe and then restart glusterd.  i was wondering if in this particular case it restarted glusterd before the probe completed fully, so it was in a half-way state.
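
Putting semiosis's points together, a sketch of the delete-and-regenerate recovery for a stuck peer, run on the affected server (<uuid> and <good-peer> are placeholders):

    service glusterd stop
    # keep /var/lib/glusterd/glusterd.info -- its UUID= line cannot be regenerated
    rm -f /var/lib/glusterd/peers/<uuid>
    service glusterd start
    gluster peer probe <good-peer>
    gluster volume sync <good-peer> all   # refetch volume definitions if needed
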
23:01 edoceo joined #gluster
23:01 edoceo I'm using 3.2.5 and seeing a lot of this in my logs:
23:01 edoceo 0-socket.management: reading from socket failed. Error (Transport endpoint is not connected), peer (127.0.0.1:1010)
23:02 edoceo Comes from /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
23:02 edoceo Could it just be a CLI disconnecting?
23:03 JoeJulian Is the cli disconnecting a lot?
23:04 JoeJulian I mean it should be pretty easy to figure out of that happens when you use the cli...
23:08 edoceo It does seem to happen every time I run a `gluster....` command - like checking rebalance status
23:12 _pol joined #gluster
23:22 chirino joined #gluster
23:24 awheeler joined #gluster
23:38 The_Ugster joined #gluster
23:38 jebba joined #gluster
23:49 _pol joined #gluster
23:50 chirino joined #gluster
23:50 Guest77900 joined #gluster
