IRC log for #gluster, 2014-10-14

All times shown according to UTC.

Time Nick Message
00:18 bala joined #gluster
00:21 calisto joined #gluster
00:28 firemanxbr joined #gluster
00:34 msmith_ joined #gluster
00:37 plarsen joined #gluster
00:43 kshlm joined #gluster
00:52 justinmburrous joined #gluster
00:54 David_H__ joined #gluster
01:29 overclk joined #gluster
01:30 sputnik13 joined #gluster
01:39 haomaiwa_ joined #gluster
01:41 msmith_ joined #gluster
01:56 haomai___ joined #gluster
01:58 haomaiwa_ joined #gluster
01:59 bala joined #gluster
02:01 haomai___ joined #gluster
02:05 haomaiwa_ joined #gluster
02:16 haomaiwang joined #gluster
02:16 glusterbot New news from newglusterbugs: [Bug 1149943] duplicate librsync code should likely be linked removed and linked as a library <https://bugzilla.redhat.com/show_bug.cgi?id=1149943>
02:17 haomaiw__ joined #gluster
02:35 sputnik13 joined #gluster
02:40 David_H_Smith joined #gluster
02:44 David_H__ joined #gluster
02:48 fyxim_ joined #gluster
02:48 stigchri1tian joined #gluster
02:49 SteveCoo1ing joined #gluster
02:49 vincent_1dk joined #gluster
02:49 marcoceppi_ joined #gluster
02:50 purpleid1a joined #gluster
02:50 delhage_ joined #gluster
02:50 delhage_ joined #gluster
02:51 ndevos_ joined #gluster
02:51 crashmag_ joined #gluster
02:51 semiosis_ joined #gluster
02:52 RobertLaptop_ joined #gluster
02:53 portante_ joined #gluster
02:53 johndescs_ joined #gluster
02:54 semiosis joined #gluster
02:54 Gib_adm joined #gluster
02:56 nishanth joined #gluster
02:57 sadbox joined #gluster
03:00 schrodinger joined #gluster
03:01 justinmburrous joined #gluster
03:02 asku joined #gluster
03:02 ninjabox joined #gluster
03:06 DV joined #gluster
03:06 ctria joined #gluster
03:08 sputnik13 joined #gluster
03:09 ultrabizweb joined #gluster
03:12 David_H_Smith joined #gluster
03:13 johnnytran joined #gluster
03:25 overclk joined #gluster
03:32 bharata-rao joined #gluster
03:32 Diddi joined #gluster
03:45 Telsin joined #gluster
03:56 haomaiwa_ joined #gluster
04:02 haomai___ joined #gluster
04:09 spandit joined #gluster
04:12 nbalachandran joined #gluster
04:13 David_H_Smith joined #gluster
04:13 sputnik13 joined #gluster
04:14 David_H_Smith joined #gluster
04:15 kdhananjay joined #gluster
04:17 nbalachandran joined #gluster
04:28 calisto joined #gluster
04:34 harish_ joined #gluster
04:52 ACiDGRiM joined #gluster
04:52 ramteid joined #gluster
05:06 justinmburrous joined #gluster
05:15 David_H_Smith joined #gluster
05:16 David_H_Smith joined #gluster
05:25 David_H_Smith joined #gluster
05:26 David_H_Smith joined #gluster
05:29 XpineX joined #gluster
05:40 fattaneh joined #gluster
05:40 georgeh_ joined #gluster
05:46 ramteid joined #gluster
05:46 SteveCoo1ing joined #gluster
05:46 guntha_ joined #gluster
05:46 social joined #gluster
05:46 msvbhat joined #gluster
05:46 tty00 joined #gluster
05:46 frb joined #gluster
05:46 edwardm61 joined #gluster
05:46 TheFlyingCorpse joined #gluster
05:46 kodapa joined #gluster
05:46 foster joined #gluster
05:46 T0aD joined #gluster
05:46 partner joined #gluster
05:46 atrius joined #gluster
05:46 mikedep333 joined #gluster
05:46 ron-slc joined #gluster
05:46 cyber_si joined #gluster
05:46 ndk_ joined #gluster
05:46 mdavidson joined #gluster
05:46 jiqiren joined #gluster
05:46 bala joined #gluster
05:47 SOLDIERz joined #gluster
05:47 SOLDIERz Hello everyone got anyone experiences with this Puppet-Module for GlusterFS https://github.com/purpleidea/puppet-gluster/ ?
05:47 glusterbot Title: purpleidea/puppet-gluster · GitHub (at github.com)
05:48 SOLDIERz If yes, how did you implement it? The examples given in this repo are not working for me
05:51 justinmburrous joined #gluster
05:53 nshaikh joined #gluster
06:02 kumar joined #gluster
06:03 haomaiwa_ joined #gluster
06:09 haomai___ joined #gluster
06:09 mikedep333 joined #gluster
06:18 sputnik13 joined #gluster
06:24 kevein joined #gluster
06:32 Fen2 joined #gluster
06:32 Fen2 Hi all :)
06:48 haomaiwang joined #gluster
06:49 rolfb joined #gluster
06:51 fattaneh joined #gluster
06:54 desmond joined #gluster
06:56 ctria joined #gluster
07:03 fattaneh left #gluster
07:04 haomaiw__ joined #gluster
07:08 Fen2 joined #gluster
07:15 firemanxbr joined #gluster
07:26 calum_ joined #gluster
07:39 ninkotech joined #gluster
07:39 ninkotech_ joined #gluster
07:41 anands joined #gluster
07:46 dguettes joined #gluster
07:47 firemanxbr joined #gluster
07:58 justinmburrous joined #gluster
08:00 Slydder joined #gluster
08:02 Slydder why does a fuse mount just slow to a crawl under load? very annoying. can't use a local nfs mount due to deadlock issues and fuse is just slow as hell.
08:16 XpineX_ joined #gluster
08:21 justinmburrous joined #gluster
08:29 jvandewege joined #gluster
08:33 kevein joined #gluster
08:45 jvandewege joined #gluster
08:49 Fen2 do you use xfs ?
08:52 justinmburrous joined #gluster
08:57 Slydder on the remote server
08:58 Slydder 1 brick x 2 nodes across 1GB line. the problem is the fuse write locally. once it hits the wire it goes fast enough.
08:59 Slydder was also reading through BZ regarding direct-io-mode of fuse and am still in the dark as to the current status of direct-io-mode when mounting using fuse. in older versions you could control this setting but not any more it would seem.
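
A minimal sketch of controlling direct-io-mode from the mount helper, assuming the option is still accepted by mount.glusterfs in this release; server, volume and mount point names are placeholders, not names from this log:

    # 'server1', 'myvol' and '/mnt/myvol' are placeholders
    mount -t glusterfs -o direct-io-mode=enable server1:/myvol /mnt/myvol
    # or the equivalent fstab entry:
    # server1:/myvol  /mnt/myvol  glusterfs  defaults,direct-io-mode=enable  0 0
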
09:23 davidhadas_ joined #gluster
09:42 msciciel joined #gluster
09:42 LebedevRI joined #gluster
09:43 Pupeno joined #gluster
09:53 justinmburrous joined #gluster
09:54 gildub joined #gluster
10:01 bala1 joined #gluster
10:03 nbalachandran joined #gluster
10:21 vincent_vdk joined #gluster
10:29 edward1 joined #gluster
10:44 dguettes joined #gluster
10:46 jvandewege joined #gluster
10:53 justinmburrous joined #gluster
11:06 Slydder (ganesha)
11:07 Slydder glusterbot: help
11:07 glusterbot Slydder: (help [<plugin>] [<command>]) -- This command gives a useful description of what <command> does. <plugin> is only necessary if the command is in more than one plugin. You may also want to use the 'list' command to list all available plugins and commands.
11:07 Slydder glusterbot: list
11:07 glusterbot Slydder: Admin, Alias, Anonymous, Bugzilla, Channel, ChannelStats, Conditional, Config, Dict, Factoids, Google, Herald, Karma, Later, MessageParser, Misc, Network, NickCapture, Note, Owner, Plugin, PluginDownloader, RSS, Reply, Seen, Services, String, Topic, Trigger, URL, User, Utilities, and Web
11:11 virusuy joined #gluster
11:11 kkeithley1 joined #gluster
11:29 hybrid512 joined #gluster
11:30 mojibake joined #gluster
11:30 hybrid512 joined #gluster
11:30 mojibake joined #gluster
11:43 Slashman joined #gluster
11:46 plarsen joined #gluster
11:54 justinmburrous joined #gluster
11:54 pkoro joined #gluster
11:55 ira joined #gluster
11:57 B21956 joined #gluster
12:01 Fen1 joined #gluster
12:05 jdarcy joined #gluster
12:05 B21956 joined #gluster
12:06 B21956 joined #gluster
12:09 B21956 joined #gluster
12:14 calisto joined #gluster
12:16 B21956 joined #gluster
12:21 B21956 joined #gluster
12:23 B21956 joined #gluster
12:23 edv joined #gluster
12:30 _Bryan_ joined #gluster
12:33 B21956 joined #gluster
12:40 liquidat joined #gluster
12:48 glusterbot New news from newglusterbugs: [Bug 1127140] memory leak <https://bugzilla.redhat.com/show_bug.cgi?id=1127140> || [Bug 1133073] High memory usage by glusterfs processes <https://bugzilla.redhat.com/show_bug.cgi?id=1133073> || [Bug 1140818] symlink changes to directory, that reappears on removal <https://bugzilla.redhat.com/show_bug.cgi?id=1140818>
12:55 justinmburrous joined #gluster
13:02 coredump joined #gluster
13:06 bennyturns joined #gluster
13:13 glusterbot New news from resolvedglusterbugs: [Bug 977543] RDMA Start/Stop Volume not Reliable <https://bugzilla.redhat.com/show_bug.cgi?id=977543>
13:15 clutchk joined #gluster
13:18 julim joined #gluster
13:18 theron joined #gluster
13:19 glusterbot New news from newglusterbugs: [Bug 919286] Efficiency of system calls by posix translator needs review <https://bugzilla.redhat.com/show_bug.cgi?id=919286>
13:36 nshaikh joined #gluster
13:42 jobewan joined #gluster
13:44 glusterbot New news from resolvedglusterbugs: [Bug 977548] RDMA Mounting Volumes not Functioning for PPA Build <https://bugzilla.redhat.com/show_bug.cgi?id=977548> || [Bug 982757] RMDA Volumes Silently Revert to TCP <https://bugzilla.redhat.com/show_bug.cgi?id=982757> || [Bug 985424] Gluster 3.4.0 RDMA stops working with more then a small handful of nodes <https://bugzilla.redhat.com/show_bug.cgi?id=985424>
13:44 theron joined #gluster
13:47 msmith_ joined #gluster
13:49 calum_ joined #gluster
13:56 justinmburrous joined #gluster
13:57 Pupeno joined #gluster
14:02 tdasilva joined #gluster
14:15 glusterbot New news from resolvedglusterbugs: [Bug 978030] Qemu libgfapi support broken for GlusterBD integration <https://bugzilla.redhat.com/show_bug.cgi?id=978030> || [Bug 1030098] Development files not packaged for debian/ubuntu <https://bugzilla.redhat.com/show_bug.cgi?id=1030098>
14:19 glusterbot New news from newglusterbugs: [Bug 1044648] Documentation bug for glfs_set_volfile_server <https://bugzilla.redhat.com/show_bug.cgi?id=1044648> || [Bug 1152617] Documentation bug for glfs_set_volfile_server <https://bugzilla.redhat.com/show_bug.cgi?id=1152617>
14:19 jbautista- joined #gluster
14:29 calisto joined #gluster
14:30 _Bryan_ joined #gluster
14:31 pkoro_ joined #gluster
14:38 davemc joined #gluster
14:45 glusterbot New news from resolvedglusterbugs: [Bug 1068781] glfs_read fails for large read <https://bugzilla.redhat.com/show_bug.cgi?id=1068781>
14:46 firemanxbr joined #gluster
14:47 sac`away joined #gluster
14:49 prasanth|brb joined #gluster
14:49 nshaikh joined #gluster
14:54 hawksfan joined #gluster
14:54 fubada joined #gluster
15:00 Fen1 glusterbot: are you a bot ?
15:01 lpabon joined #gluster
15:01 hawksfan i have a distributed volume (replicas=1) with one brick that has crashed (raid failure)
15:02 hawksfan i know i'm going to need to re-create the raid config and file system - what do i need to do from the gluster side?
15:03 Fen1 hawksfan: recreate the volume and maybe choose a distributed-replicated
15:03 hawksfan is there anything i need to do prior to the raid/mkfs tasks
15:04 hawksfan do i really need to recreate the volume?
15:04 Fen1 are you data erased ?
15:05 Fen1 *your
15:05 hawksfan just on that one brick - i'm not worried about the data loss
15:06 Fen1 So if you have only one brick and datas on it are erased, maybe yes
15:06 hawksfan this is 1 of 9 bricks - the other 8 are fine
15:06 Fen1 hum ok you have 9 bricks
15:07 Fen1 i don't think so because i read nothing about that
15:08 hawksfan ls on mounted volume appears fine
15:08 Fen1 df -h too ?
15:09 hawksfan from what i understood when creating the volume to begin with, if a brick in a distributed volume goes offline, just that data is lost
15:10 hawksfan df -h shows size as 151T (the full size including my failed brick)
15:10 Fen1 i think too, other brick are in other raid disk ?
15:11 hawksfan yes - 9 servers with 1 brick each
15:11 Fen1 cause you did 1 disk = 1 brick ? or 1 disk = n brick ?
15:14 Fen1 if there is just one partition by disk (1 disk = 1 brick), your other brick are fine and you can recreate your raid disk
15:15 hawksfan 9 physical servers, each with 1 RAID5 volume, one filesystem, one brick
15:15 hawksfan the data RAID is separate from my OS, thankfully
15:16 hawksfan do i need to remove-brick before recreating raid disk?
15:16 firemanxbr joined #gluster
15:17 Fen1 if i were you, yes. But i'm not a glusterfs expert
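
One way to handle a dead brick in a pure distribute volume, sketched under the assumption that the data on that brick is accepted as lost; the volume name, hostname and brick path are placeholders:

    # placeholders: 'myvol', 'server9', '/bricks/brick1'
    gluster volume remove-brick myvol server9:/bricks/brick1 force   # drop the dead brick; its data is already gone
    # ... rebuild the RAID array and mkfs the brick filesystem on server9 ...
    gluster volume add-brick myvol server9:/bricks/brick1            # bring the rebuilt brick back in
    gluster volume rebalance myvol start                             # optionally redistribute files onto it
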
15:17 firemanxbr joined #gluster
15:18 and` hi, is anyone aware of issues when upgrading glusterfs after the recent RHEL 6.6 upgrade? http://fpaste.org/141798/99910141/raw/
15:19 and` seems a package from the RHS suite was introduced into the rhel-6-server repository?
15:19 and` which breaks when the machine had installed glusterfs from the community repository
15:24 doo joined #gluster
15:32 ctria joined #gluster
15:38 kkeithley_ and`: you should not mix RHS and community glusterfs on the same machine. When you're using community gluster I recommend adding an exclude statement to /etc/yum.repos.d/redhat.repos to preclude getting the RHS client-side bits.
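
A sketch of the exclude kkeithley_ describes, assuming a subscription-manager style repo file; the repo id shown is an assumption and should be matched against whatever `yum repolist` actually reports:

    # in /etc/yum.repos.d/redhat.repo, inside the RHEL 6 server repo stanza (repo id is an assumption):
    [rhel-6-server-rpms]
    # ... existing lines ...
    exclude=glusterfs*
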
15:43 xavih joined #gluster
15:44 pkoro_ joined #gluster
15:44 theron joined #gluster
15:45 and` kkeithley_, but weren't RHS packages living on their own repository with their own subscription?
15:46 and` rhn-channel reports: rhel-x86_64-server-6 rhel-x86_64-server-optional-6 rhn-tools-rhel-x86_64-server-6
15:47 and` kkeithley_, before the RHEL 6.6 update we weren't able to install RHS packages at all, that's why I feel something is not behaving properly
15:57 justinmburrous joined #gluster
15:59 kshlm joined #gluster
16:05 dberry joined #gluster
16:05 dberry joined #gluster
16:06 lmickh joined #gluster
16:10 rotbeard joined #gluster
16:10 georgeh joined #gluster
16:13 fattaneh joined #gluster
16:20 sputnik13 joined #gluster
16:33 and` kkeithley_, what makes it even harder is the fact rhel-$arch-server-6 is taken from RHN, so no good way to exclude those packages : /
16:33 dtrainor joined #gluster
16:34 firemanxbr joined #gluster
16:42 bene joined #gluster
16:51 jobewan joined #gluster
16:58 soumya joined #gluster
16:58 justinmburrous joined #gluster
16:59 fattaneh joined #gluster
17:12 JoeJulian and`: The RHS *server* packages were, the *client* was included in RHEL since any client can connect to a storage server without paying additional license (as I understand it as a non-RH customer)
17:13 JoeJulian Can't you exclude packages through your customer management interface at the Red Hat web site?
17:14 and` JoeJulian, is that done by removing the relevant packages from the one RHN should be managing?
17:15 JoeJulian I've never used it, so I'm not really sure. That was my understanding though.
17:15 JoeJulian I just figured there was some checkbox you could uncheck.
17:17 and` JoeJulian, I'll give a look, thanks! did you spot the client's inclusion from the yum logs or do you have any other source of information to point me to?
17:18 JoeJulian mojibake: "In regards to issue I had yesterday that you chime in on regarding Apache calling an OOM and it ended up killing gluster client." (I don't answer offline in most instances) The kernel is what kills processes when the kernel runs out of memory, not apache. Just happened to pick gluster to kill, probably because it was using the most ram.
17:18 JoeJulian and`: Just experience.
17:19 JoeJulian and`: I hang out here all the time. When the client started getting included in RHEL, it was noticed pretty quickly. :D
17:20 and` JoeJulian, I'm clueless on why I spotted it today, this looked to me as first happening on RHEL 6.6
17:20 and` but I might be surely wrong, the system in question didn't send me anything related to yum breakages
17:21 Slydder joined #gluster
17:22 TPU joined #gluster
17:22 TPU hi, is there any way to target the self heal daemon with cgroups, so that i can control self heal io without affecting normal operation ?
17:23 mojibake JoeJulian: JoeJulian++
17:23 glusterbot mojibake: JoeJulian's karma is now 14
17:23 lalatenduM joined #gluster
17:24 JoeJulian mojibake: bug 1127140
17:24 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1127140 unspecified, unspecified, 3.4.6, kkeithle, ASSIGNED , memory leak
17:25 mojibake JoeJulian: After raising the instance size and continue testing, it looks like t2.small web server is just about the limit that some light load testing will handle. Raising the instance size looks like Memory will hover around 1gb for gluster client and httpd processes.
17:25 cfeller JoeJulian I'm having the same problem - the RHS client packages were on their own channel.
17:26 cfeller The workaround should be simple enough, use the yum-plugin-priorities package and then add a weight to the glusterfs.repo file in /etc/yum.repos.d.  That will give the community packages a higher weight.
17:27 cfeller nevertheless, I think the changes in RHEL 6.6 blindsided a few people.
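
A sketch of cfeller's workaround; note that with yum-plugin-priorities a *lower* number wins, so the community repo gets the smaller value (the repo file name is the one cfeller mentions, the value itself is an assumption):

    yum install yum-plugin-priorities
    # then in /etc/yum.repos.d/glusterfs.repo, add to each enabled section:
    priority=50
    # leave the RHEL/RHN channels at the plugin default (99) so the community glusterfs packages are preferred
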
17:27 charta joined #gluster
17:29 theron joined #gluster
17:29 Gorian so... I have a gluster server running 3.4 and one running 3.5, and they refuse to cluster together... any work around?
17:29 semiosis upgreyedd
17:30 Gorian right. So, manually build gluster for ubuntu. Would it keep all my settings etc?
17:30 semiosis what do you mean manually build?  you can use the ,,(ppa) packages
17:31 glusterbot The official glusterfs packages for Ubuntu are available here: 3.4 stable: http://goo.gl/M9CXF8 -- 3.5 stable: http://goo.gl/6HBwKh -- QEMU with GlusterFS support: http://goo.gl/e8IHnQ (3.4) & http://goo.gl/tIJziO (3.5)
17:31 semiosis settings should be fine, although i'd make a backup just in case
17:31 Gorian oh, so they do have 3.5 packages? just not in the repo huh? odd
17:31 Gorian ugh. The whole point of adding it to the cluster is to back it all up on the second node so I can reinstall the first...
17:32 fattaneh1 joined #gluster
17:33 JoeJulian cfeller: I totally agree with the blindsided statement. Unfortunately, RHEL is downstream and that complaint would need to be taken up with your vendor.
17:35 calisto joined #gluster
17:35 and` JoeJulian, FYI http://supercolony.gluster.org/pipermail/gluster-users/2014-October/019095.html
17:36 glusterbot Title: [Gluster-users] RHEL6.6 provides Gluster 3.6.0-28.2 ? (at supercolony.gluster.org)
17:37 JoeJulian I saw it way back when I upgraded to CentOS 6.6. Since that's all yum, though, like cfeller said I just used package priorities (always have actually) so I didn't actually notice until someone mentioned it here.
17:39 Gorian so, updated gluster on ubuntu (thanks)
17:39 Gorian but now, I try to add it to the cluster, and it says that it has the other server as a peer, but it's disconnected - so I can't add the second server as a brick to the volume
17:39 Gorian but the second node is clearly online
17:42 Gorian http://i.imgur.com/7PN4AXy.png
17:43 JoeJulian Gorian: Restart both glusterd and see if that helps.
17:43 Gorian mmmk
17:44 lpabon joined #gluster
17:44 Gorian nope, didn't
17:45 JoeJulian So what's the peer status state between the two?
17:45 Gorian same as before
17:45 semiosis has one of the servers changed IP?
17:45 Gorian one says disconnected, one says connected
17:46 semiosis the one which says connected
17:46 Gorian no. I removed server 2 (as ubuntu) from the cluster
17:46 Gorian reimaged it with centos, installed glusterfs, added it back
17:46 Gorian is there something I forgot to do when removing it that it might be caching something stopping it from connecting because it changed?
17:48 Gorian ah
17:48 Gorian if I try to add node 2 from node 1, I get
17:48 Gorian "peer probe: failed: Probe returned with unknown errno 107"
17:49 Gorian heh. Iptables issue
17:49 Gorian disabled iptables and it worked
17:49 semiosis nice
17:49 Gorian yeah >.< haha, thanks
17:52 JoeJulian @ports
17:52 glusterbot JoeJulian: glusterd's management port is 24007/tcp (also 24008/tcp if you use rdma). Bricks (glusterfsd) use 49152 & up since 3.4.0 (24009 & up previously). (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111 and 2049 since 3.4.
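
Rather than disabling iptables outright, the ports glusterbot lists can be opened on both peers; a sketch for EL6-style iptables, with the brick port range sized to taste:

    # widen 49152:49160 to cover however many bricks each server runs
    iptables -I INPUT -p tcp --dport 24007:24008 -j ACCEPT     # glusterd management (and rdma)
    iptables -I INPUT -p tcp --dport 49152:49160 -j ACCEPT     # brick (glusterfsd) ports, 3.4+
    iptables -I INPUT -p tcp --dport 38465:38468 -j ACCEPT     # gluster-nfs and NLM
    iptables -I INPUT -p tcp --dport 111 -j ACCEPT             # rpcbind/portmapper
    iptables -I INPUT -p udp --dport 111 -j ACCEPT
    iptables -I INPUT -p tcp --dport 2049 -j ACCEPT            # nfs
    service iptables save                                      # persist the rules on EL6
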
17:55 doo joined #gluster
17:55 side_con1rol joined #gluster
17:56 theron joined #gluster
17:59 justinmburrous joined #gluster
18:01 side_control joined #gluster
18:08 doo joined #gluster
18:12 nshaikh joined #gluster
18:14 doo joined #gluster
18:17 Slydder hey all
18:18 hawksfan joined #gluster
18:19 Slydder am currently building ganesha (which has gfs support). I installed gluster using the debian packages and am in need of glusterfs/api/glfs-handles.h which does not seem to be in any of the packages and there doesn't seem to be a -dev package. any ideas?
18:28 semiosis Slydder: i'll add it to the -common package
18:29 semiosis but for now you could just copy that file into /usr/include/glusterfs/api
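
A sketch of the stop-gap semiosis suggests, assuming a glusterfs source tree checked out at the same version as the installed packages (the source path is an assumption):

    # run from the top of a glusterfs source tree matching the installed 3.5.x packages
    sudo cp api/src/glfs-handles.h /usr/include/glusterfs/api/
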
18:29 mojibake Is the Gluster NFS exported and running by default?
18:29 mojibake Meaning if glusterd is running do I need to do anything else?
18:29 semiosis yes, see also ,,(nfs)
18:29 glusterbot To mount via nfs, most distros require the options, tcp,vers=3 -- Also an rpc port mapper (like rpcbind in EL distributions) should be running on the server, and the kernel nfs server (nfsd) should be disabled
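
The factoid above as a concrete mount command; server and volume names are placeholders:

    # placeholders: 'server1' and 'myvol'
    mount -t nfs -o vers=3,tcp server1:/myvol /mnt/myvol
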
18:30 mojibake glusterbot++
18:30 glusterbot mojibake: glusterbot's karma is now 8
18:30 Slydder semiosis: that's what i did. if all goes good I can use localhost nfs mounts and get faster access than using fuse.
18:31 mrobbert_ joined #gluster
18:32 semiosis Slydder: very interesting.  keep me posted please!
18:32 Slydder will do
18:32 plarsen joined #gluster
18:32 Slydder debian package is building now.
18:32 JoeJulian Did they ever fix the deadlock issue with mounting nfs locally?
18:32 Slydder nope
18:32 semiosis JoeJulian: ,,(localhost nfs)
18:32 glusterbot JoeJulian: http://lwn.net/Articles/595652/
18:33 mrobbert_ left #gluster
18:33 Slydder that's why I'm doing userland. at least this way the server doesn't deadlock.
18:38 semiosis Slydder: userland nfs client?
18:38 Slydder no. nfs server
18:38 semiosis but gluster-nfs is already userland
18:38 charta joined #gluster
18:38 semiosis will be most interesting to see how that works!
18:41 johnmark joined #gluster
18:42 davemc joined #gluster
18:46 zerick joined #gluster
18:48 doo joined #gluster
18:55 mrEriksson joined #gluster
19:01 gildub joined #gluster
19:16 firemanxbr joined #gluster
19:22 B21956 joined #gluster
19:38 Pupeno_ joined #gluster
19:42 DV joined #gluster
19:47 side_control joined #gluster
19:52 calisto joined #gluster
20:15 B21956 joined #gluster
20:16 semiosis @puppet
20:16 glusterbot semiosis: https://github.com/purpleidea/puppet-gluster
20:18 theron joined #gluster
20:19 semiosis purpleidea: does it work without a puppetmaster?
20:20 purpleidea semiosis: want to send your question in the form of a patch to my faq?
20:20 purpleidea semiosis: short answer, yes
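
A masterless run is just `puppet apply` against a local copy of the module; a sketch in which the Forge module name and manifest path are assumptions rather than anything confirmed in this log:

    # module name and paths are assumptions; a git clone of purpleidea/puppet-gluster into the modulepath also works
    puppet module install purpleidea-gluster
    puppet apply --modulepath=/etc/puppet/modules /etc/puppet/manifests/gluster.pp
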
20:20 semiosis :)
20:20 semiosis sure
20:21 theron joined #gluster
20:23 semiosis https://github.com/purpleidea/puppet-gluster/pull/20
20:24 glusterbot Title: Update to FAQ by semiosis · Pull Request #20 · purpleidea/puppet-gluster · GitHub (at github.com)
20:24 purpleidea semiosis: sweet thanks... will reply shortly (with longer answer...) btw do you like the idea for Q&A ?
20:25 semiosis indeed
20:25 semiosis github even did all the forking &c for me
20:25 semiosis pretty easy
20:26 dtrainor joined #gluster
20:26 purpleidea "&c?" ?
20:26 semiosis et cetera
20:26 purpleidea ah
20:30 dtrainor joined #gluster
20:31 dtrainor joined #gluster
20:32 purpleidea semiosis: almost done
20:33 dtrainor joined #gluster
20:33 Slydder semiosis: cool man. disabled nfs in gluster started ganesha then did a loopback nfs mount and bingo the volume is there. tomorrow will be testing for deadlock.
20:34 semiosis wow, nice!
20:35 semiosis ganesha can do pnfs too right?
20:35 Slydder of course you have to setup ganesha specifically for each volume you want to export but it's not that hard.
20:35 Slydder correct. vers=4.1
20:35 theron_ joined #gluster
20:35 semiosis sweet
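
The sequence Slydder describes, sketched with placeholder names; the NFS-Ganesha export itself has to be configured separately (backed by the Gluster FSAL), and the export path used here is an assumption:

    gluster volume set myvol nfs.disable on                 # stop gluster's built-in NFS server for the volume
    # ... configure and start nfs-ganesha with an export backed by the Gluster FSAL for 'myvol' ...
    mount -t nfs -o vers=4.1 localhost:/myvol /mnt/myvol    # loopback NFSv4.1 mount; export path is an assumption
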
20:35 Gorian odd. Any reason that "gluster peer status" would be using an IP instead of the hostname when It can resolve the hostname?
20:35 semiosis Gorian: ,,(hostnames)
20:35 glusterbot Gorian: Hostnames can be used instead of IPs for server (peer) addresses. To update an existing peer's address from IP to hostname, just probe it by name from any other peer. When creating a new pool, probe all other servers by name from the first, then probe the first by name from just one of the others.
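
The factoid above as concrete commands; hostnames are placeholders:

    # from server1, probe every other peer by name:
    gluster peer probe server2.example.com
    # then, from any one of the others, probe server1 by name so its own entry gets a hostname too:
    gluster peer probe server1.example.com
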
20:36 Gorian I like glusterbot :)
20:36 Gorian thanks by the way
20:36 * glusterbot likes you too
20:39 Gorian <3
20:40 purpleidea semiosis: patch sent for review :)
20:42 mrobbert__ joined #gluster
20:43 davemc joined #gluster
20:44 mrobbert__ left #gluster
20:45 semiosis purpleidea: s/loose/lose/
20:46 purpleidea semiosis: nice catch, rebased, pushed
21:00 Slydder semiosis: just checked for pnfs support for gluster and it's not in yet. ceph, gpfs, vfs all have pnfs support. will see what can be done though. at least I have assurance that the deadlock will not happen due to ganesha.
21:00 semiosis how do you get that assurance?
21:01 justinmburrous joined #gluster
21:02 Slydder am talking to one of the devs. they have tested extensively for it.
21:02 semiosis great
21:03 Slydder nice to know that have confidence in the project. however, I will celebrate once I test against my 20+ GB transfer.
21:03 Slydder ;)
21:03 Slydder s/that/they/
21:03 glusterbot What Slydder meant to say was: nice to know they have confidence in the project. however, I will celebrate once I test against my 20+ GB transfer.
21:07 semiosis Slydder: why nfs in the first place?  for a write heavy workload like your rsync i'd think fuse would be the better choice
21:07 semiosis do you have a faster server-server network than your client-server connection?
21:07 Slydder fuse works great the first time around. however, when overwriting existing data fuse hangs.
21:07 semiosis wow
21:08 semiosis what version of glusterfs?
21:08 semiosis that sounds like a bug
21:08 Slydder it's a known problem with fuse that they can't seem to get rid of.
21:08 Slydder 3.5.2
21:08 JoeJulian Has that been tested with --inplace?
21:08 Slydder the problem is when you rsync to a fuse mount with a few thousand updates the problem just gets worse.
21:08 Slydder jepp
21:08 JoeJulian because it sounds like it has to do with the renaming + copying.
21:09 Xanacas joined #gluster
21:09 Slydder nope
21:09 semiosis what distro?
21:09 Slydder have tried with and without inplace
21:09 Slydder debian
21:09 Slydder wheezy
21:09 semiosis wow
21:09 semiosis just wow
21:09 Slydder nfs doesn't have this problem but it does have the deadlock problem when loopback mounting.
21:10 Slydder the fact that nfs works as it should (except for the deadlock) shows that gluster is not the problem.
21:10 JoeJulian No
21:11 JoeJulian The fact that FSCache works shows that avoiding the network sync avoids the problem.
21:11 JoeJulian probably
21:12 semiosis can you get an strace of the glusterfs fuse client when it hangs?
21:12 semiosis that might be helpful
21:12 semiosis also, is there an open bug in glusterfs about this fuse client problem?
21:12 semiosis you said it was a known issue... got a link?
21:13 Slydder god. gotta go searching again. grrrr.
21:14 Slydder I'll see if I can't find the report again. I haven't filed anything regarding this issue though.
21:15 Slydder I could do a strace tomorrow I guess. shouldn't be too difficult. it hangs about 15 seconds into the transfer.
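
A sketch of the strace semiosis asked for, attached to the glusterfs fuse client process for the affected mount (the PID has to be looked up first; the mount point is a placeholder):

    ps ax | grep '[g]lusterfs.*/mnt/myvol'              # find the client PID for the mount; '/mnt/myvol' is a placeholder
    strace -f -ttt -o /tmp/fuse-hang.strace -p <pid>    # attach before starting the rsync, stop once it hangs
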
21:15 nated left #gluster
21:18 nothau joined #gluster
21:25 _nothau joined #gluster
21:26 jbrooks joined #gluster
21:27 Gorian so, it says that nfs is off, and I can't seem to enable it?
21:27 Gorian I tried "gluster volume volnam set nfs.enable false" but it didn't change anything...
21:27 nothau joined #gluster
21:27 Gorian err.. nfs.disable false
21:30 dtrainor joined #gluster
21:31 Jamoflaw joined #gluster
21:32 Slydder Gorian: nfs.disable off
21:32 JoeJulian Gorian: Which "it"
21:32 _nothau joined #gluster
21:32 JoeJulian also, gluster volume reset nfs.disable
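
The two suggestions above spelled out with a placeholder volume name, plus a status check:

    gluster volume set myvol nfs.disable off     # explicitly re-enable gluster-nfs for the volume
    gluster volume reset myvol nfs.disable       # or: drop the option back to its default (nfs enabled)
    gluster volume status myvol nfs              # confirm the NFS server process is actually running
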
21:33 kkeithley_ left #gluster
21:33 Slydder or what JoeJulian said.
21:33 Jamoflaw JoeJulian: is the reset command valid for any of the options out of interest?
21:33 Gorian weird
21:34 Gorian I just restarted glusterd on both hosts and when they came back up nfs was enabled :p
21:34 Slydder nfs is enabled per default
21:34 JoeJulian Jamoflaw: yes
21:35 Gorian that's what I thought, which is why i thought it odd it wasn't on
21:35 JoeJulian Jamoflaw: Though, I think there's a bug wrt log levels where if you change them and reset, they don't revert.
21:35 Slydder the only times I have seen that nfs doesn't start is when you bind to an address other than localhost or both nodes are not started.
21:35 dtrainor joined #gluster
21:36 JoeJulian Could be a bug where if nfs is disabled on all volumes it doesn't start when you change that setting.
21:36 Slydder if you bind to another address then you have to start the nfs server by hand. which is a pain in the butt.
21:36 JoeJulian I can see that as a possibility.
21:36 Jamoflaw Kk, not played much with the tuning options yet! Got a lot of testing to do first
21:37 Jamoflaw does anyone here use geo replication at scale? Ie in production with large vols
21:57 longshot902 joined #gluster
22:09 firemanxbr joined #gluster
22:13 plarsen joined #gluster
22:18 coredump joined #gluster
22:34 rotbeard joined #gluster
22:50 rjoseph joined #gluster
23:03 justinmburrous joined #gluster
23:13 Pupeno joined #gluster
23:15 davemc joined #gluster
23:26 calisto joined #gluster
