
IRC log for #gluster, 2014-09-08


All times shown according to UTC.

Time Nick Message
00:04 gildub joined #gluster
00:24 saltsa joined #gluster
00:24 diegows joined #gluster
00:43 daxatlas joined #gluster
00:53 plarsen joined #gluster
01:21 green_man joined #gluster
01:26 sputnik13 joined #gluster
02:28 sputnik13 joined #gluster
02:34 gildub joined #gluster
03:05 nishanth joined #gluster
03:11 bharata-rao joined #gluster
03:15 necrogami joined #gluster
03:22 dmyers joined #gluster
03:27 hagarth joined #gluster
03:30 recidive joined #gluster
03:34 daxatlas joined #gluster
03:39 kanagaraj joined #gluster
03:41 necrogami joined #gluster
03:41 necrogami joined #gluster
03:48 hchiramm_ joined #gluster
03:49 itisravi joined #gluster
04:02 harish joined #gluster
04:03 dmyers joined #gluster
04:09 kumar joined #gluster
04:17 daxatlas joined #gluster
04:26 RameshN joined #gluster
04:26 saurabh joined #gluster
04:36 anoopcs joined #gluster
04:40 spandit joined #gluster
04:44 rafi1 joined #gluster
04:46 shubhendu_ joined #gluster
04:49 nbalachandran joined #gluster
04:52 ramteid joined #gluster
04:52 atinmu joined #gluster
04:52 bala joined #gluster
05:03 meghanam joined #gluster
05:03 meghanam_ joined #gluster
05:04 Guest7984 joined #gluster
05:13 hagarth joined #gluster
05:21 Philambdo joined #gluster
05:23 kdhananjay joined #gluster
05:32 glusterbot New news from newglusterbugs: [Bug 1139103] DHT + Snapshot :- If snapshot is taken when Directory is created only on hashed sub-vol; On restoring that snapshot Directory is not listed on mount point and lookup on parent is not healing <https://bugzilla.redhat.com/show_bug.cgi?id=1139103>
05:32 dmyers gothos: you on?
05:37 ppai joined #gluster
05:40 rafi2 joined #gluster
05:47 deepakcs joined #gluster
05:48 hchiramm joined #gluster
05:48 lalatenduM joined #gluster
05:52 gothos dmyers: yep
05:59 ctria joined #gluster
06:00 LebedevRI joined #gluster
06:06 soumya joined #gluster
06:08 aravindavk joined #gluster
06:09 nshaikh joined #gluster
06:15 jiffin joined #gluster
06:20 rgustafs joined #gluster
06:23 jtux joined #gluster
06:23 atalur joined #gluster
06:23 spandit joined #gluster
06:38 RaSTar joined #gluster
06:39 qdk joined #gluster
06:52 Lilian joined #gluster
06:56 mbukatov joined #gluster
07:12 hybrid512 joined #gluster
07:33 fsimonce joined #gluster
07:35 Fen1 joined #gluster
07:36 hchiramm JustinClift,ping
07:36 glusterbot hchiramm: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
07:40 ricky-ti1 joined #gluster
07:46 deepakcs joined #gluster
07:59 Fenn joined #gluster
07:59 Norky joined #gluster
08:01 Fenn Hi, is anyone here ?
08:01 side_control nope
08:01 liquidat joined #gluster
08:02 Fenn I have some questions about glusterfs...
08:02 side_control shoot, dont ask to ask, just ask ;)
08:02 Fenn thx
08:06 Fenn I don't understand the concept of "geo-replication". I read the administration guide but it's a little bit confusing; can you explain it briefly ?
08:07 Fenn The replication, Is it between server or cluster ?
08:09 side_control server
08:09 side_control it's just like a "cluster", but you have to define that it is geo-replicated so it handles the high latency
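[Example: geo-replication is set up per volume pair (a master volume replicating asynchronously to a slave volume) rather than per server. A rough sketch of the usual session setup, where mastervol, slavehost and slavevol are placeholders; the exact flags should be checked against the admin guide for the release in use:]
    gluster volume geo-replication mastervol slavehost::slavevol create push-pem
    gluster volume geo-replication mastervol slavehost::slavevol start
    gluster volume geo-replication mastervol slavehost::slavevol status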
08:09 dockbram joined #gluster
08:10 Fenn ok thx, and is this feature also in the software Ceph ? :S
08:11 Fenn or this is unique for Glusterfs ?
08:11 side_control i havent even looked at ceph yet
08:11 Fenn ok thx for all ;)
08:12 Pupeno joined #gluster
08:15 Fenn And i would like to know what kind of hardware i must buy to use glusterfs with a lot of ProxMox VM ?
08:23 ekuric joined #gluster
08:23 meghanam joined #gluster
08:23 Kalonji joined #gluster
08:23 meghanam_ joined #gluster
08:24 JoeJulian joined #gluster
08:28 ProT-0-TypE joined #gluster
08:29 Fenn You don't know  ? :/
08:36 andreask joined #gluster
08:36 giannello joined #gluster
08:38 jvandewege Got a question from a colleague about renaming gluster volumes. Looks like the manpage says that is possible but when entering the command it says it can't do that. Has gluster volume rename been renamed to something else or is it not possible anymore??
08:40 Fenn left #gluster
08:49 lkoranda joined #gluster
08:51 Slashman joined #gluster
09:01 Mick joined #gluster
09:06 nishanth joined #gluster
09:07 glusterbot New news from resolvedglusterbugs: [Bug 764655] NetBSD port <https://bugzilla.redhat.com/show_bug.cgi?id=764655>
09:08 FenTry joined #gluster
09:10 Kalonji hello everyone, has nobody had a problem with iowait with Gluster? My load average goes up to 70 because of iowait !!!
09:11 foobar Kalonji: yuh... sounds familiar...
09:11 foobar got a self-heal-daemon running ?
09:11 Kalonji nop ! :(
09:12 Kalonji i have 2 gluster servers ( one is primary and the second is the standby )
09:12 Kalonji client is connected by nfs to the primary
09:13 Kalonji but it's the standby that has all the iowait
09:14 Kalonji and the load average grows up to 70 !!!
09:14 Kalonji :s
09:14 foobar in gluster both are always active ... at least for writes
09:15 Kalonji yes i know but i use a vip on one node :)
09:16 Kalonji it's very strange, friday between 9 am and 4 pm there wasn't any problem
09:16 Kalonji and at 4 pm the load grows up
09:16 hagarth jvandewege: the man page is incorrect
09:16 Kalonji and the only solution is to stop the service gluster
09:18 Kalonji no one has ever had this kind of problem ???
09:19 jvandewege Kalonji: you said stop the service gluster, and then what, rename the volume config files on disk?
09:21 ricky-ti1 joined #gluster
09:21 foobar Kalonji: i've only seen it when self-healing is running... which will trigger loads > 50-80 on my boxes
09:21 Kalonji nop, only stop the service, then i'm waiting for the load to go down
09:21 foobar for hours...
09:23 Kalonji and when the self-healing is running, is it normal to wait 1 minute to list a directory ?
09:23 Kalonji :(
09:24 foobar I've waited minutes...
09:24 foobar at that time, the app we were running tended to do dir-listings before reads... to clear caches... which caused the problem to get much worse
09:26 Kalonji ok and you found a solution ?
09:36 foobar Kalonji: for the time being... i've disabled self-healing...
09:36 foobar long term solution... migrate to ceph maybe... i'm not happy yet
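[Example: "disabled self-healing" here most likely means turning the self-heal daemon off on the volume, roughly like this; "myvol" is a placeholder, and the related client-side heal options are listed in gluster volume set help:]
    gluster volume set myvol cluster.self-heal-daemon off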
09:38 Kalonji hum :(
09:44 nshaikh joined #gluster
09:45 ndevos jvandewege: indeed, renaming a volume is not possible with the commandline, you need to unmount all clients, 'gluster volume stop $VOLUME', backup /var/lib/glusterd, rename all the files/dirs related to the volume under /var/lib/glusterd/, and replace all occurrences of "volume $OLDNAME" in the files
09:46 ndevos jvandewege: after that, you need to make sure that its done on all storage servers, and you should be able to start the glusterd service again
09:47 ndevos jvandewege: of course, test on an environment that you can break, so that you can try out the procedure before doing it with a real volume :)
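[Example: an untested sketch of the manual rename procedure ndevos outlines above, to be run on every storage server with all clients unmounted; OLD and NEW are placeholder volume names and paths may differ by distribution:]
    gluster volume stop OLD
    service glusterd stop
    cp -a /var/lib/glusterd /var/lib/glusterd.bak
    mv /var/lib/glusterd/vols/OLD /var/lib/glusterd/vols/NEW
    # rename any files under the volume directory that embed the old name, then
    # replace "volume OLD" with "volume NEW" inside them
    grep -rl "volume OLD" /var/lib/glusterd/vols/NEW | xargs sed -i 's/volume OLD/volume NEW/g'
    service glusterd start
    gluster volume start NEW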
09:50 Kalonji Hey ndevos you dont have problem of iowait with gluster ? :)
09:51 ndevos Kalonji: hehe, not really :)
09:52 Kalonji and you use the self-healing daemon ?
09:53 ndevos Kalonji: yes, and running smallfile creation tests over nfs
09:53 Kalonji hum
09:53 ndevos Kalonji: where is the iowait high, on the client-side, or server-side?
09:54 Kalonji can you send me your conf ? please ? i use gluster with small files too
09:54 ndevos Kalonji: oh, I guess you can get high iowait if you do directory crawls?
09:54 ndevos Kalonji: I did not tune anything, only kept the defaults
09:55 Kalonji -_-
09:55 ndevos well, and its the 3.7-nightly build :D
09:56 jvandewege ndevos: thanks, of course that is done first on a test env :-)
09:56 ndevos jvandewege: not everyone is that smart ;)
09:57 Kalonji time find . | wc
09:57 Kalonji 2521 2521 162956
09:57 Kalonji real 0m25.531s
09:57 Kalonji :s
10:03 glusterbot New news from newglusterbugs: [Bug 1139170] DHT :- rm -rf is not removing stale link file and because of that unable to create file having same name as stale link file <https://bugzilla.redhat.com/show_bug.cgi?id=1139170>
10:13 jvandewege ndevos: hmm looked at it and it doesn't seem trivial. Is it possible to stop/remove the volume, then create a new volume with the old data but using the correct naming convention? (resyncing 10TB of data takes some time so if it's possible to bypass that then that would be great)
10:16 ndevos jvandewege: I think you could do that, you would need to stop the volume, remove the trusted.gfid and trusted.glusterfs.volume-id xattrs from the directories that function as bricks and create the new volume with the old bricks
10:17 ndevos jvandewege: I've never tried that, but I can not think of anything why it would fail
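[Example: a sketch of the brick-reuse approach described above, assuming a simple replica 2 volume; the setfattr lines are run on each server for each brick directory before the new volume is created. Names and paths are placeholders and this is untested:]
    gluster volume stop OLD
    gluster volume delete OLD
    setfattr -x trusted.glusterfs.volume-id /export/brick1    # on every server, for every brick
    setfattr -x trusted.gfid /export/brick1
    gluster volume create NEW replica 2 server1:/export/brick1 server2:/export/brick1
    gluster volume start NEW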
10:26 hagarth joined #gluster
10:26 ekuric joined #gluster
10:27 edward1 joined #gluster
10:28 raghu` joined #gluster
10:30 harish joined #gluster
10:32 asku joined #gluster
10:32 karnan joined #gluster
10:35 meghanam joined #gluster
10:45 LebedevRI joined #gluster
10:45 vimal joined #gluster
10:46 hagarth joined #gluster
10:48 hybrid512 joined #gluster
10:48 foster joined #gluster
10:55 kanagaraj joined #gluster
10:57 ndarshan joined #gluster
10:58 gildub joined #gluster
11:01 LebedevRI joined #gluster
11:23 ppai joined #gluster
11:23 chirino joined #gluster
11:26 lalatenduM joined #gluster
11:33 SpComb ubuntu 14.04 with glusterfs mounting a remote volume, and then a service (dnsmasq) configured to use that mount - the service fails to start up at boot due to "TFTP directory .. inaccessible: No such file or directory", but works fine with a `service dnsmasq restart` after boot
11:33 SpComb the /etc/init.d/dnsmasq sysvinit script has "Required-Start: $network $remote_fs $syslog"
11:34 SpComb does sysvinit Required-Start $remote_fs not do the right thing with the Ubuntu 14.04 mountall/glusterfs/whatever upstart services?
11:35 diegows joined #gluster
11:39 dusmant joined #gluster
11:44 rgustafs joined #gluster
11:49 soumya joined #gluster
11:51 R0ok_ joined #gluster
11:54 meghanam joined #gluster
11:55 qdk joined #gluster
12:03 ghenry joined #gluster
12:03 skippy i think mountall happens before the network is initialized?  Try adding "_netdev" to your Gluster volume mount options in fstab
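[Example: the fstab entry skippy suggests would look roughly like this; server, volume and mount point are placeholders:]
    server1:/myvol  /mnt/myvol  glusterfs  defaults,_netdev  0  0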
12:10 plarsen joined #gluster
12:14 Gabou how are the gluster servers synchronized between themselves? is it under TLS or something?
12:14 SpComb skippy: I use _netdev, yes, and it mounts fine, that's not the issue - but it tries to start dnsmasq before it has been mounted
12:14 SpComb same thing with apache, but apache just warns that the docroot doesn't exist and it's okay with it later, but dnsmasq fails to start up if the tftp-root is missing
12:15 FenTry Hi, What are hardware requirements (CPU/RAM) for a Glusterfs server used by a ProxMox server ? (approximately)
12:24 hagarth joined #gluster
12:25 B21956 joined #gluster
12:27 giannello joined #gluster
12:28 elico joined #gluster
12:33 glusterbot New news from newglusterbugs: [Bug 1101111] [RFE] Add regression tests for the component geo-replication <https://bugzilla.redhat.com/show_bug.cgi?id=1101111> || [Bug 1131502] Fuse mounting of a tcp,rdma volume with rdma as transport type always mounts as tcp without any fail <https://bugzilla.redhat.com/show_bug.cgi?id=1131502>
12:45 skippy I recall seeing that a recommended practice is to build a Gluster volume with multiple bricks per host so that you could later move those bricks to different servers when throughput becomes an issue.
12:45 skippy How does one move a brick from one server to another, though?
12:51 bennyturns joined #gluster
12:54 MickaTri joined #gluster
12:54 ekuric joined #gluster
12:56 LHinson joined #gluster
13:00 ninthBit joined #gluster
13:02 ninthBit How would i manage the .cmd_log_history file?  It seems this file does not have the same log management where it rolls over on size or time.  Is there a way to limit the maximum size of this file?  It becomes an issue if using gluster commands in  a cron job to periodically provide reports and status of the gluster install.
13:04 Guest48639 joined #gluster
13:05 giannello joined #gluster
13:08 julim joined #gluster
13:08 skippy anyone successfully using WORM?  I'm not having any luck with it: https://gist.github.com/skpy/29cf2a4fe334cd2a142b
13:08 glusterbot Title: gluster-worm.md (at gist.github.com)
13:09 julim joined #gluster
13:10 hagarth ninthBit: a workaround would be to move the .cmd_log_history file and restart glusterd
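[Example: the workaround would look roughly like this, assuming the file lives in the glusterfs log directory; the exact location can vary by version and distribution:]
    mv /var/log/glusterfs/.cmd_log_history /var/log/glusterfs/.cmd_log_history.old
    service glusterd restart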
13:10 hagarth ninthBit: what commands do you use for monitoring? volume status and something else?
13:10 necrogami joined #gluster
13:15 recidive joined #gluster
13:17 ninthBit @hagarth: the commands used are volume status, volume rebalance status, peer status, volume heal info, volume heal info split-brain, volume heal info heal-failed, volume status clients, volume status detail
13:18 B21956 joined #gluster
13:25 MickaTri Hi, just a little question, does Gluster integrate a failover feature ? (2 nodes, and 1 crashes)
13:26 jmarley joined #gluster
13:28 tom[] joined #gluster
13:29 MickaTri Or do i need to set up a Virtual IP to transfer connections ?
13:29 skippy MickaTri: if you're using the FUSE file system to mount, you do not need a VIP
13:29 tom[] my replicated shares are not mounting at boot. but they mount ok manually. i'm not sure how to debug this as there is so much stuff in the logs
13:30 skippy FUSE clients talk to all servers in the volume, MickaTri
13:30 MickaTri ok thx a lot !
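[Example: a FUSE client only uses the named server to fetch the volume definition, then talks to all bricks directly; a backup volfile server can be given for mount-time failover, roughly like this (names are placeholders, and the option spelling should be checked against the mount.glusterfs man page for the installed version):]
    mount -t glusterfs -o backupvolfile-server=server2 server1:/myvol /mnt/myvol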
13:34 glusterbot New news from newglusterbugs: [Bug 1127653] Memory leaks of xdata on some fops of protocol/server <https://bugzilla.redhat.com/show_bug.cgi?id=1127653> || [Bug 1139244] vdsm invoked oom-killer during rebalance and Killed process 4305, UID 0, (glusterfs nfs process) <https://bugzilla.redhat.com/show_bug.cgi?id=1139244> || [Bug 1139245] vdsm invoked oom-killer during rebalance and Killed process 4305, UID 0, (glusterfs nfs process) <https://bugzil
13:34 MickaTri I want to buy 2 server and set up Gluster but i really don't know what kind of hardware do i need ? Any idea of proportion CPU/RAM/Stockage ?
13:34 ira joined #gluster
13:36 kkeithley Buy the biggest and fastest hardware you can afford might be one bit of advice. There are people running glusterfs on Raspberry Pis, which shows that it works down to very low performing hardware, but that may not be the right thing for you. ;-)
13:36 jobewan joined #gluster
13:37 kkeithley and what's "Stockage"?
13:38 deeville joined #gluster
13:42 ramon_dl joined #gluster
13:44 MickaTri Storage sry ;)
13:45 _Bryan_ joined #gluster
13:46 MickaTri OK thx, your FAQ is funny : "Don’t I need redundant networking, super fast SSD’s, technology from Alpha Centauri delivered by men in black, etc…? -> a very simple cluster can be deployed with two basic servers (2 CPU’s, 4GB of RAM each, 1 Gigabit network)"
13:48 daxatlas joined #gluster
13:50 skippy anyone successfully using WORM?  I'm not having any luck with it: https://gist.github.com/skpy/29cf2a4fe334cd2a142b
13:50 glusterbot Title: gluster-worm.md (at gist.github.com)
13:50 msvbhat ndevos: ping. About the geo-rep patches backport to glusterfs-3.5.x
13:50 coredump joined #gluster
13:50 ndevos msvbhat: pong, what about them?
13:51 msvbhat ndevos: Not sure if they will be beackported to 3.5.3 yet?
13:51 ndevos msvbhat: I do not think it has been done yet, but they surely would be acceptable
13:52 msvbhat ndevos: There are a lot of patches that need to be backported for that. And people seem to be a little busy with the 3.6 release.
13:53 msvbhat ndevos: I will talk to Aravinda and see how much effort it is to send all of them to the release-3.5 branch
13:53 ndevos msvbhat: yes, I understand, it is difficult to put priorities on backports...
13:54 ndevos msvbhat: much appreciated!
13:55 lmickh joined #gluster
13:55 ndevos msvbhat: https://bugzilla.redhat.com/showdependencytree.cgi?id=1125231&maxdepth=1&hide_resolved=1 is the current list of patches scheduled for 3.5.3, could you add geo-rep bugs to the blocker too?
13:55 glusterbot Title: Dependency tree for Bug 1125231 (at bugzilla.redhat.com)
13:55 ndevos msvbhat: well, put 'glusterfs-3.5.3' in the 'blocks' field of the (cloned) geo-rep bugs
14:00 justyns joined #gluster
14:01 jiku joined #gluster
14:01 justyns joined #gluster
14:02 justyns joined #gluster
14:02 chirino joined #gluster
14:05 vimal joined #gluster
14:09 chirino joined #gluster
14:10 foster joined #gluster
14:14 xleo joined #gluster
14:17 mariusp joined #gluster
14:22 chirino joined #gluster
14:22 wushudoin| joined #gluster
14:24 hagarth joined #gluster
14:25 Guest87821 joined #gluster
14:27 tdasilva joined #gluster
14:32 sputnik13 joined #gluster
14:35 aravindavk joined #gluster
14:42 msvbhat ndevos: Sorry. Was bit occupied.
14:43 ndevos msvbhat: no problem :)
14:44 msvbhat ndevos: Yeah. I will talk to Aravinda once. Will add them to the list if backport is plausible
14:44 julim joined #gluster
14:44 ndevos msvbhat: okay, sounds good to me
14:45 julim joined #gluster
14:46 julim joined #gluster
14:47 julim joined #gluster
14:48 julim joined #gluster
14:50 julim joined #gluster
14:50 julim joined #gluster
14:52 julim joined #gluster
14:53 julim joined #gluster
14:54 julim joined #gluster
14:55 julim joined #gluster
14:56 julim joined #gluster
14:57 julim joined #gluster
14:58 julim joined #gluster
14:58 xleo joined #gluster
14:59 julim joined #gluster
15:00 julim joined #gluster
15:01 sputnik13 joined #gluster
15:01 bene2 joined #gluster
15:01 LHinson1 joined #gluster
15:28 RameshN joined #gluster
15:30 soumya joined #gluster
15:36 Bardack joined #gluster
15:39 sputnik13 joined #gluster
15:45 theron joined #gluster
15:54 semiosis ndevos: jvandewege: ,,(path or prefix)
15:54 glusterbot ndevos: http://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/
15:55 zerick joined #gluster
15:56 skippy how does one properly move a brick from one server to another?  Say we want to spread bricks across servers for throughput benefits.
15:57 semiosis replace-brick?
15:58 skippy does that copy the data from original brick to new brick?
15:59 semiosis i think so.  try it on a test volume
16:00 skippy i seem to recall reading (in here?) that one strategy for volume creation is to build a vol with multiple bricks per server; so that when throughput demands it you can move bricks to more servers, versus adding bricks and rebalancing.
16:00 semiosis yep, i usually recommend that
16:01 skippy then you'd `gluster volume replace-brick` to move the data to the new server?
16:01 skippy is that faster than rebalancing?  It's still shuffling (potentially a lot of) data.
16:02 semiosis that's the idea
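[Example: moving one brick to a new server with replace-brick would look roughly like this with the 3.5-era CLI; volume, servers and brick paths are placeholders:]
    gluster volume replace-brick myvol server1:/bricks/b2/brick server3:/bricks/b2/brick start
    gluster volume replace-brick myvol server1:/bricks/b2/brick server3:/bricks/b2/brick status
    gluster volume replace-brick myvol server1:/bricks/b2/brick server3:/bricks/b2/brick commit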
16:03 zerick joined #gluster
16:08 daMaestro joined #gluster
16:09 PeterA joined #gluster
16:27 zerick joined #gluster
16:31 Lilian joined #gluster
16:31 hagarth joined #gluster
16:33 skippy semiosis: thanks. trying to test replace-brick.  `gluster volume replace-brick g0 p1.dev:/bricks/brick1/brick p1.dev:/bricks/brick3/brick start` just sits there until I hit CTRL-C
16:34 skippy trying `status` also just sits there, no output, no return of console.  Must CTRL+C
16:35 skippy from a different peer, start and status both report "volume replace-brick: failed: Another transaction is in progress. Please try again after sometime."
16:38 semiosis well, idk
16:39 semiosis check the glusterd log
16:39 semiosis on the server where you started the replace
16:39 skippy interestingly, *all* gluster commands on the server from which I started this are now non-responsive.  `gluster peer status` just sits there.
16:39 ghenry joined #gluster
16:39 ghenry joined #gluster
16:39 skippy which is the glusterd log, actually?
16:40 justyns joined #gluster
16:40 semiosis something like /var/log/glusterfs/etc-glusterfs-glusterd.log
16:40 justyns joined #gluster
16:40 skippy thanks
16:41 semiosis yw
16:41 zerick joined #gluster
16:46 xleo joined #gluster
16:47 skippy https://gist.github.com/skpy/0428812c78833fd7a252  not really sure what I'm seeing in there.  looks like the replace tried to start, but then brick1 was re-attached?
16:47 glusterbot Title: replace-brick.md (at gist.github.com)
16:52 zerick joined #gluster
16:53 LHinson joined #gluster
16:53 semiosis skippy: hmm, maybe it didnt like having source & destination on the same host
16:54 mariusp joined #gluster
16:54 skippy odd
16:54 semiosis in any case, that's not really a test of what you were aiming at
16:54 skippy it's a contrived test, sure.
16:59 LHinson1 joined #gluster
17:04 mariusp joined #gluster
17:05 diegows joined #gluster
17:13 Bardack joined #gluster
17:14 skippy I created a third server, p3.  From p1 I ran `gluster peer probe p3`. p1 shows two peers: p2 and p3.  p2 shows one peer: p1.  p3 shows two peers: p1 and p2, but p2 is marked "Peer Rejected".
17:16 skippy from p2 i ran `gluster peer probe p3.dev`, and now both p2 and p3 show "State: Accepted peer request (Connected)"
17:17 skippy ah - I stopped and started glusterd on p3, now everyone shows "peer in cluster" for each other member.
17:17 skippy Is this par for the course?
17:19 Gabou skippy: you mean, p1 is with p2 and p3, p2 is with p1 and p3 and p3 is with p1 and p2?
17:19 Gabou sure
17:19 skippy sorry, I meant "is it par for the course to have to peer from each host?"
17:20 skippy the "peer rejected" and "accepted peer request" states seemed odd for adding a new server to the cluster
17:40 bennyturns joined #gluster
17:42 ProT-0-TypE joined #gluster
17:43 clyons joined #gluster
17:43 ProT-0-TypE joined #gluster
17:44 semiosis skippy: yep, par for the course
17:44 semiosis i would have sent you to ,,(peer rejected) but you figured it out on your own
17:44 glusterbot I do not know about 'peer rejected', but I do know about these similar topics: 'peer-rejected'
17:44 semiosis ,,(peer-rejected)
17:44 glusterbot http://www.gluster.org/community/documentation/index.php/Resolving_Peer_Rejected
17:47 semiosis skippy: oh, peer from each host, not par for the course.  normally only from one (then back again for the hostname) but that should be propagated to all other peers
17:48 semiosis sometimes things go sideways and you end up with peer rejected or some other intermediate state, then you need to restart glusterd (hopefully) to resolve it
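[Example: an untested sketch of the "Peer Rejected" recovery described on the page glusterbot links above, run on the rejected peer only; glusterd.info is kept and everything else is moved aside first:]
    service glusterd stop
    cd /var/lib/glusterd
    mkdir /root/glusterd-backup && mv $(ls -A | grep -v '^glusterd.info$') /root/glusterd-backup/
    service glusterd start
    gluster peer probe good-peer-hostname
    service glusterd restart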
17:49 Pupeno joined #gluster
17:50 zerick joined #gluster
17:55 skippy thanks.  now I'm getting "volume replace-brick: failed: Replace brick is already started for volume".  Trying to forcibly abort fails.
17:55 skippy # gluster volume replace-brick g0 p1.dev:/bricks/brick1/brick p1.dev:/bricks/brick3/brick abort force
17:55 skippy Usage: volume replace-brick <VOLNAME> <BRICK> <NEW-BRICK> {start [force]|pause|abort|status|commit [force]}
17:56 mariusp joined #gluster
18:01 semiosis how about forcibly stopping?
18:07 rotbeard joined #gluster
18:11 failshell joined #gluster
18:12 failshell once you set nfs.disable off to enable NFS
18:12 failshell what do you need to do to get gluster to enable its NFS daemon? tried restarting glusterd and glusterfsd init scripts
18:12 failshell but that doesn't work
18:15 failshell that's with 3.5.2
18:25 bala joined #gluster
18:27 semiosis failshell: nfs.disable defaults to off, you dont need to set it unless you previously set it to on.  see also ,,(nfs)
18:27 glusterbot failshell: To mount via nfs, most distros require the options, tcp,vers=3 -- Also an rpc port mapper (like rpcbind in EL distributions) should be running on the server, and the kernel nfs server (nfsd) should be disabled
18:28 failshell semiosis: strange then, the nfs process managed by gluster is not running after creating volumes
18:28 semiosis did you start the volumes?
18:28 failshell yup
18:29 failshell semiosis: figured it out, rpcbind wasn't running.
18:29 semiosis a ha!
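[Example: on an EL6-style system the gluster NFS server needs rpcbind running (and the kernel nfsd disabled); once rpcbind is up, restarting glusterd or the volume should respawn the gluster nfs daemon, and the volume can be mounted roughly like this (names are placeholders):]
    service rpcbind start && chkconfig rpcbind on
    service glusterd restart
    mount -t nfs -o vers=3,tcp server1:/myvol /mnt/myvol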
18:30 failshell we're getting rid of RHS in favor of the open source edition. i have to reverse engineer the entire setup and put it in Chef
18:30 elico failshell: and make it run at startup..
18:30 elico failshell: why?
18:30 failshell why what? we getting rid of RHS?
18:30 elico yes
18:30 failshell its too expensive
18:31 failshell and not flexible enough, because of its price, we end up with a few setups instead of as many as we wanted to
18:31 DV joined #gluster
18:31 elico how much it really costs?
18:31 elico just wondering...?
18:32 elico There was a glusterfs appliance in the past and I have looked for something to use with a gui but never found anything.
18:32 failshell i dont have the exact numbers with me, but its like 15000$/year per pairs
18:33 failshell we prefer to have smaller setups, tuned for a specific workload
18:33 failshell which the OSS edition allows us to do
18:38 failshell we had issues also with quiescing snapshots for backups. after a month and a half on the case with their support, they ended up telling us that was not supported
18:38 failshell why not tell me right away then?
18:43 elico it's really weird
18:48 failshell its a known issue in vmware tools + gluster
18:48 failshell makes the VM crash
18:48 failshell but not every time
18:48 failshell happens to us every couple weeks
18:48 failshell so we disabled quiescing on those hosts
18:51 semiosis you could search bugzilla
18:57 ramon_dl left #gluster
19:06 tdasilva joined #gluster
19:19 altmariusp joined #gluster
19:41 LHinson joined #gluster
19:42 LHinson1 joined #gluster
19:43 coredump joined #gluster
19:47 maxxx2014 joined #gluster
20:02 chirino joined #gluster
20:08 toordog-work Is there a difference in performance, or any pros/cons, between a distributed replication laid out horizontally vs vertically? *see schema for reference: https://dl.dropboxusercontent.com/u/61057651/20140908_160356.jpg
20:09 toordog-work *I know my drawing skills are horrible :P
20:12 sickness look at the difference between raid0+1 and raid1+0, it should translate almost in the same way ;)
20:13 sickness toordog-work: http://www.thegeekstuff.com/2011/10/raid10-vs-raid01/
20:15 toordog-work thx
20:15 toordog-work been a philosophy problem i failed to resolve so far :)
20:15 toordog-work gonna read that
20:15 oxidane joined #gluster
20:17 sickness eheh
20:18 toordog-work aaah finally that make sense. :)
20:18 toordog-work better always raid 10 :)
20:18 toordog-work hehehe
20:19 toordog-work i couldn't figure it out without adding more drives to the equation.
20:19 toordog-work 4 drives is exactly the same either 10 or 01.  but with 6+ drives, there is a major difference in resilience.
20:20 toordog-work so better to configure the replicate volumes first and the distribute volume after
20:20 toordog-work order matter
20:21 sickness yeah
20:21 sickness even better to go for raid6 ;)
20:21 sickness ec xlator ;)
20:21 skippy https://github.com/gluster/glusterfs/blob/master/doc/admin-guide/en-US/markdown/admin_setting_volumes.md  "Each replica_count consecutive bricks in the list you give will form a replica set, with all replica sets combined into a volume-wide distribute set"
20:21 glusterbot Title: glusterfs/admin_setting_volumes.md at master · gluster/glusterfs · GitHub (at github.com)
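[Example: with replica 2, each consecutive pair of bricks in the create command forms one replica set, so bricks are listed in an order that makes each pair span two servers; names and paths are placeholders:]
    gluster volume create myvol replica 2 \
        server1:/bricks/b1/brick server2:/bricks/b1/brick \
        server1:/bricks/b2/brick server2:/bricks/b2/brick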
20:21 semiosis toordog-work: glusterfs isn't raid
20:22 sickness http://opensource-storage.blogspot.it/2013/10/is-it-time-to-ditch-raid.html
20:22 glusterbot Title: Open Source Storage: Is it time to ditch RAID? (at opensource-storage.blogspot.it)
20:22 sickness http://opensource-storage.blogspot.it/2013/10/under-hood-of-disperse-volume.html
20:22 glusterbot Title: Open Source Storage: Under the 'Hood' of the Disperse Volume (at opensource-storage.blogspot.it)
20:22 sickness ;)
20:23 semiosis sickness: great links!
20:23 sickness semiosis: right, this has to be specified, but some replication mechanics and calculation can be understood with the raid "counterparts" with similar replica ratio...
20:23 sickness semiosis: yeah, interesting subjects ;)
20:24 Pupeno joined #gluster
20:28 sauce joined #gluster
20:32 theron joined #gluster
20:32 ThatGraemeGuy joined #gluster
20:38 toordog-work :)
20:39 toordog-work is there a list of known bugs/issues in gluster 3.3.1? *a summary, not the github issues*
20:40 calum_ joined #gluster
20:42 zerick joined #gluster
20:50 semiosis that's pretty old
20:55 mariusp joined #gluster
21:05 Pupeno joined #gluster
21:06 _zerick_ joined #gluster
21:09 zerick joined #gluster
21:15 failshel_ joined #gluster
21:18 _Bryan_ joined #gluster
21:19 AaronGreen joined #gluster
21:19 Pupeno_ joined #gluster
21:24 failshell joined #gluster
21:24 ProT-0-TypE joined #gluster
21:29 Pupeno joined #gluster
21:46 al joined #gluster
21:49 DV joined #gluster
21:56 toordog-work when you create a distribute replicate volume, do you use the local volume instance of the replicate?
21:57 toordog-work and the volume will be seen by client only on that server?
21:58 toordog-work ex: glusterfs1.local.loc and glusterfs2.local.loc, I have 2 brick on each server that are part of 2 volume replicate *server1/brick1, server2/brick1 = 1 volume replica and so on with brick2
21:58 toordog-work if i create the distribute volume on glusterfs1, client will need to bind to glusterfs1 to mount the distribute volume?
22:03 elico toordog-work: no
22:04 elico it can and will mount the volume and will distribute the load on both servers as needed, by referring to one of them at mount time.
22:05 toordog-work just that i thought the distribute volume would exist only on glusterfs1
22:06 toordog-work by the way, i have this weird error : gluster volume create Dist_Replicat01 transport tcp glusterfs1:/Replicat01/brick glusterfs01:/Replicat02/brick
22:06 toordog-work volume create: Dist_Replicat01: failed: Failed to create brick directory for brick glusterfs1:/Replicat01/brick. Reason : No such file or directory
22:06 toordog-work but i created a directory brick in the mount of Replicat02 and Replicat01
22:06 elico just a sec..
22:07 elico one step at a time please :D
22:07 toordog-work ok
22:07 toordog-work i create my peers, each node has 3 bricks: sdb1, sdb2, sdc1
22:07 toordog-work i have 2 nodes
22:08 elico ok
22:08 toordog-work glusterfs1 and glusterfs2
22:08 elico ok
22:08 elico now what FS? each brick?
22:08 toordog-work i create 1 replica 2 with glusterfs1/sdb1 and glusterfs2/sdb1
22:08 toordog-work xfs
22:08 elico ok and how do you mount them?
22:08 cmtime joined #gluster
22:08 toordog-work mount -t glusterfs glusterfs1:/Replicat01 /mnt/Replicat01
22:09 elico no no
22:09 elico on the machine itselfs..
22:09 toordog-work the brick you mean?
22:09 elico like what mount point the sdb1 ?
22:09 elico yes
22:09 toordog-work /dev/sdb1 /exports/sdb1 xfs defaults 0 0
22:09 elico you need to mount the FS into the directory tree... so on what point on the path each brick
22:09 toordog-work so the replicat01 i have to mount it xfs first?
22:10 toordog-work and format it xfs
22:10 elico a sec
22:10 elico on each machine you need to mount the FS on let say /mnt/sdb1 or /replica1
22:10 elico so mount "/dev/sdb1 /replica1"
22:11 elico oops "mount /dev/sdb1 /replica1"
22:11 toordog-work not sure to follow you on this
22:11 elico do you know how to mount a local disk to a local directory?
22:11 elico such as in fstab?
22:11 toordog-work if i mount sdb1 on replica, it will not be a glusterfs replicate volume but a single FS
22:11 toordog-work yes
22:11 elico so you missed the point of glusterfs...
22:12 elico I will try to make it more understandable
22:12 elico are you OK with that?
22:12 toordog-work when i created the replicat volume, i mount the sdb1 in /exports/sdb1 and created a brick directory in it
22:12 toordog-work and I created the replicat volume
22:12 toordog-work now i have a replicat volume that exist inside /sdb1/brick as a gluster volume
22:13 toordog-work so that brick folder has been created on sdb1 partition on a XFS File system
22:13 elico but when you create the volume you need to refer to the full "brick" or "disk" or "mount point" location..
22:13 toordog-work yes
22:14 elico ok and you did not referred to the right path while creating the replica volume
22:14 toordog-work so i did gluster volume create Replicat01 replica 2 glusterfs1:/sdb1/brick glusterfs2:/sdb1/brick
22:14 elico no
22:14 elico you need to put there the full path
22:14 toordog-work i should create the distribute replica in a single line volume create?
22:14 toordog-work sorry for the /exports/sdb1
22:14 Pupeno joined #gluster
22:14 elico if it's two hosts then yes
22:14 toordog-work i forgot the /exports
22:15 elico it's kind of important
22:15 toordog-work so i did gluster volume create Replicat01 replica 2 glusterfs1:/exports/sdb1/brick glusterfs2:/exports/sdb1/brick
22:16 toordog-work and the same to create Replicat02 but using sdb2
22:16 elico then it should be created
22:16 toordog-work yes i have both Replica volume started and working and mount
22:16 elico so what is the issue now?
22:16 toordog-work I mount the replicat in /mnt/Replicat01 and /mnt/Replicat02
22:17 toordog-work in each i mkdir /mnt/Replicat{01,02}/brick
22:17 toordog-work and i tried to do gluster volume create Dist_Replicat01 transport tcp glusterfs1:/Replicat01/brick glusterfs01:/Replicat02/brick
22:17 toordog-work and i tried to do gluster volume create Dist_Replicat01 transport tcp glusterfs1:/Replicat01/brick glusterfs1:/Replicat02/brick
22:17 elico why do you want to do that?
22:17 elico it's like volume ontop of a volume if I'm right..
22:18 elico there is a right way to do that using one command
22:18 toordog-work ok then it is my mistake
22:19 elico but I'm not sure about what you are trying to do..
22:19 toordog-work i want to have a distribute replica
22:19 toordog-work a,b replicate, c,d replicate and distribute between (ab) (cd)
22:20 elico Then use all these bricks with one line: replica 2 strip 2, with all these 4 bricks on one line...
22:21 toordog-work i'm not sure about strip in this case
22:21 elico distribute + replica right?
22:21 toordog-work I will come back, i need to go home, i will continue from home.
22:21 toordog-work yes
22:21 elico good luck
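[Example: the single-command layout elico is getting at, using the brick paths from this conversation; the existing Replicat01/Replicat02 volumes would first have to be stopped and deleted (and the volume-id/gfid xattrs cleared from the brick directories before reuse), and no stripe option is needed for plain distribute+replicate:]
    gluster volume create Dist_Replicat01 replica 2 transport tcp \
        glusterfs1:/exports/sdb1/brick glusterfs2:/exports/sdb1/brick \
        glusterfs1:/exports/sdb2/brick glusterfs2:/exports/sdb2/brick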
22:36 cmtim_m joined #gluster
22:43 cmtim_m joined #gluster
22:54 _dist joined #gluster
22:54 _dist afternoon, does anyone have time/knowledge to talk about the libgfapi volume optimization settings?
22:54 cmtime joined #gluster
22:55 _dist for example, I'm finding much higher throughput with no latency hit if I turn io-cache on vs the recommended off
22:59 plarsen joined #gluster
23:07 semiosis _dist: what's "the libgfapi volume optimization settings?"
23:08 semiosis what are you reading?
23:09 _dist virt settings from redhat
23:09 _dist just give me a sec to get the link
23:09 semiosis oh ok
23:11 semiosis _dist: perhaps they advise to disable io-cache not for performance, but for integrity.  maybe there's some failure scenario where having cached data could leave the image in a bad state
23:11 semiosis just guessing here
23:12 _dist this is crazy I can't find it anymore :)
23:12 semiosis been there
23:12 _dist they recommend to disable stat-prefetch, io-cache, quick-read, read-ahead and turn on eager-lock and remote-dio
23:12 semiosis sounds like data integrity stuff
23:13 _dist I know remote-dio is required, but the other features I have no idea what they really do, and it's hard to find good documentation beyond single sentence descriptions
23:13 semiosis stronger data consistency
23:13 _dist not good enough :)
23:13 semiosis yeah, i know
23:14 _dist I'm going to assume it has to do with bundling transactions on a local server into larger groups before committing them to all the bricks
23:15 _dist meaning that if that server went down before it got a chance to "commit" those writes, they'd be lost
23:15 semiosis afaik, that's the write-behind xlator
23:15 semiosis those you mentioned seem to be read side
23:15 semiosis mostly
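[Example: the recommended virt settings _dist lists above (stat-prefetch, io-cache, quick-read and read-ahead off; eager-lock and remote-dio on) map to volume options roughly like this; "myvol" is a placeholder and the exact option names should be confirmed with gluster volume set help:]
    gluster volume set myvol performance.stat-prefetch off
    gluster volume set myvol performance.io-cache off
    gluster volume set myvol performance.quick-read off
    gluster volume set myvol performance.read-ahead off
    gluster volume set myvol cluster.eager-lock enable
    gluster volume set myvol network.remote-dio enable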
23:17 failshell joined #gluster
23:23 _dist the write-behind xlator is basically full sync right? Don't return I'm done until all nodes have the data?
23:27 semiosis http://gluster.org/community/documentation/index.php/Translators/performance/writebehind
23:27 semiosis that's all i know
23:27 semiosis there may be a bit more in ,,(options)
23:27 glusterbot See config options and their defaults with 'gluster volume set help'; you can see the current value of an option, if it has been modified, with 'gluster volume info'; see also this page about undocumented options: http://goo.gl/mIAe4E
23:27 semiosis good luck.
23:27 semiosis i'm out for the night
23:27 _dist thanks, I'm playing with the numbers now :)
23:27 _dist have a great night!
23:27 semiosis you too
23:45 sputnik13 joined #gluster
23:54 _dist does anyone know how to remove a gluster volume set? Not set it back to the default (because I don't trust all the docs), but to actually remove the setting
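[Example: a modified option can be removed from the volume configuration (rather than explicitly set back) with volume reset, roughly like this; "myvol" is a placeholder, and the result is worth verifying with gluster volume info afterwards:]
    gluster volume reset myvol performance.io-cache
    gluster volume reset myvol            # resets every modified option on the volume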
