IRC log for #gluster, 2016-08-15

All times shown according to UTC.

Time Nick Message
00:37 Jacob843 joined #gluster
00:43 Jacob843 joined #gluster
00:47 ahino joined #gluster
01:11 derjohn_mobi joined #gluster
01:12 shdeng joined #gluster
01:30 hagarth joined #gluster
01:35 Lee1092 joined #gluster
01:40 harish_ joined #gluster
01:47 ilbot3 joined #gluster
01:47 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:00 poornimag joined #gluster
02:21 caitnop joined #gluster
02:24 Jane_ joined #gluster
02:29 Jane_ I want to have my wordpress website behind a load balancer. I am looking for a way to keep all my web servers in sync (so when I upload a picture to the upload folder of one server, it needs to replicate to the others instantly so users don't get any img-not-found errors). Is gluster a good solution for this?
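For context, a minimal sketch of the kind of setup being asked about here: a replica volume across the web servers, mounted where WordPress keeps its uploads. The hostnames (web1, web2) and brick paths are assumptions for illustration, not from the log.

    # create a 2-way replica volume from one brick per web server (run once, on either node)
    gluster volume create wp-uploads replica 2 web1:/data/bricks/wp-uploads web2:/data/bricks/wp-uploads
    gluster volume start wp-uploads
    # on every web server, mount the volume over the uploads directory
    mount -t glusterfs localhost:/wp-uploads /var/www/html/wp-content/uploads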
02:40 fcoelho joined #gluster
03:02 Gambit15 joined #gluster
03:05 harish_ joined #gluster
03:15 ZachLanich joined #gluster
04:41 shdeng joined #gluster
04:41 AdStar any takers on whether I can reboot a node while it is healing? will the heal resume? or what's the best way to stop a heal and restart it..
05:05 Gnomethrower joined #gluster
05:05 pdrakeweb joined #gluster
05:23 side_control joined #gluster
06:25 Javezim @anoopcs Okay we upgraded to 3.8.2 over the weekend and it does appear to be a bit better now
06:27 Javezim @anoopcs Still a tonne more locks than we are used to in 'smbstatus', and we keep seeing this error in the logs, but the volume is 100% accessible.
06:27 Javezim [2016-08-15 06:26:25.941264] E [MSGID: 108006] [afr-common.c:4203:afr_notify] 0-gv0mel-replicate-3: All subvolumes are down. Going offline until atleast one of them comes back up.
06:29 msvbhat joined #gluster
06:31 arif-ali joined #gluster
06:36 jtux joined #gluster
06:37 kovshenin joined #gluster
06:42 mhulsman joined #gluster
06:45 jkroon joined #gluster
06:46 derjohn_mobi joined #gluster
06:47 jkroon JoeJulian, following our discussion on friday - i've formed a new theory over the weekend.  in the setup where things "randomly" go belly up, the underlying setup is using LVM, so my bricks are LVs on the VGs there; the PVs are all mdraid, raid1 spanning two disks.
06:47 jkroon on one of the servers I replaced a drive yesterday that hadn't failed, but I did pick up that it was spewing crap to dmesg (well, the driver code anyway).
06:49 jkroon the theory is this: glusterfs seems to run with 16 IO threads for each brick, and obviously each FUSE mount point can only accommodate so many "outstanding" requests. what if those requests somehow get stuck on that "failing" drive?  in other words, eventually the 16 threads all wait for IO, and more IO gets queued in glusterfs to that brick, to the point where eventually everything just grinds to a halt.
06:49 jkroon after replacing that drive everything does seem better behaved.
06:57 jkroon as things stand now, things seem much, much better.  a bit disturbing that the md code didn't kick the failing drive or at least report about it via the mdadm daemon.
06:58 jkroon anyway, now it's just these "No. of heal failed entries: 7" from volume heal ${volname} statistics that still has me slightly worried.
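A quick way to check (and, if needed, raise) the per-brick IO thread count that jkroon's theory hinges on; the volume name gv_home is taken from later in the log and the new value is purely illustrative.

    gluster volume get gv_home performance.io-thread-count      # default is 16
    gluster volume set gv_home performance.io-thread-count 32   # optionally allow more concurrent IO per brick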
07:08 msvbhat joined #gluster
07:11 gvandeweyer joined #gluster
07:15 Gnomethrower joined #gluster
07:18 kxseven joined #gluster
07:28 morsik left #gluster
07:40 Lee1092 joined #gluster
07:59 aspandey joined #gluster
08:08 derjohn_mobi joined #gluster
08:10 jkroon joined #gluster
08:25 deniszh joined #gluster
08:39 masber joined #gluster
08:43 fcoelho joined #gluster
08:50 DV__ joined #gluster
09:04 Philambdo joined #gluster
09:06 DV joined #gluster
09:25 Sebbo2 joined #gluster
09:27 Sebbo2 Hey guys, is there a way to mount a GlusterFS volume automatically after a reboot on Ubuntu 15.04 LTS? I'm using a *.vol file to specify my servers and volumes. The automatic mount does not work, because networking and/or the GlusterFS server isn't online at the time of the mount. Well... "mount -a" does work without any problem in an ssh session. :(
09:28 Sebbo2 Is there a way to tell /etc/init/mounting-glusterfs.conf to wait for networking and glusterfs-server?
09:28 post-factum Sebbo2: whuch gluster version do you use?
09:28 post-factum *which
09:28 Sebbo2 post-factum: 3.5.2-2ubuntu1
09:28 post-factum could you please upgrade?
09:28 post-factum at least, to 3.7.14
09:29 post-factum 3.5 is not supported anymore
09:29 Sebbo2 Only the server or also the client?
09:29 post-factum everything
09:30 post-factum one shouldn't mess around volfiles directly anymore
09:33 Sebbo2 Is there another repo for Ubuntu 15.04 (vivid)? http://ppa.launchpad.net/gluster/glusterfs-3.8/ubuntu/dists/ does not have any package for this release. Or can I just use "trusty" for example?
09:33 glusterbot Title: Index of /gluster/glusterfs-3.8/ubuntu/dists (at ppa.launchpad.net)
09:35 post-factum kkeithley: ^^
09:38 ndevos I dont think he's awake yet :)
09:39 ndevos but, if it is really not in the ,,(ppa) , then I also would not know
09:39 glusterbot The official glusterfs packages for Ubuntu are available here: 3.5: http://goo.gl/6HBwKh 3.6: http://goo.gl/XyYImN 3.7: https://goo.gl/aAJEN5 -- See more PPAs for QEMU with GlusterFS support, and GlusterFS QA releases at https://launchpad.net/~gluster -- contact semiosis with feedback
09:40 ndevos we're definitely still looking for volunteers to build the packages when releases are made, in case someone wants to help out with that
09:43 Sebbo2 Damn... I'm going to update to Ubuntu 16.04 LTS ;(
09:45 post-factum ndevos: oh, those non-european timezones
09:51 Sebbo2 Yay... :) Setting up glusterfs-server (3.7.14-ubuntu1~xenial1) ...
09:51 harish_ joined #gluster
09:52 deniszh1 joined #gluster
09:53 deniszh joined #gluster
09:55 deniszh2 joined #gluster
09:58 msvbhat joined #gluster
09:59 derjohn_mob joined #gluster
10:05 deniszh joined #gluster
10:06 Sebbo2 post-factum: It's still not working with the version 3.7. Here is the log file: http://pastebin.com/cPZrrwnK When I connect via SSH and run "mount -a", the storage is mounted without any problem.
10:06 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
10:07 post-factum Sebbo2: your client still runs 3.5.2
10:07 post-factum Sebbo2: but you also have DNS issues
10:08 post-factum DNS resolution failed on host p-v00283.example.com
10:08 post-factum this looks bad
10:10 Sebbo2 post-factum: No, it's on v3.7: https://da.gd/bpEW DNS works very well. I believe that GlusterFS is just trying to mount the storage before the volume is online.
10:10 glusterbot Title: #408711 Fedora Project Pastebin (at da.gd)
10:14 Sebbo2 Are my fstab options wrong? https://da.gd/CU3if
10:14 glusterbot Title: #408712 Fedora Project Pastebin (at da.gd)
10:19 siavash joined #gluster
10:22 hackman joined #gluster
10:35 shaunm joined #gluster
10:47 B21956 joined #gluster
11:06 Sebbo2 post-factum: Even when I add the DNS entries in the /etc/hosts file, GlusterFS is not able to resolve them.
11:12 ndevos Sebbo2: you really should not need a manually created .vol file
11:12 ndevos Sebbo2: mounting should work like: mount -t glusterfs <hostname-or-ip>:/<volume> /mnt
11:12 Sebbo2 I've found a similar problem, but I couldn't find the service file for glusterd: https://www.gluster.org/pipermail/gluster-users/2015-November/024330.html
11:12 glusterbot Title: [Gluster-users] [ovirt-users] Centos 7.1 failed to start glusterd after upgrading to ovirt 3.6 (at www.gluster.org)
11:14 Sebbo2 ndevos: I believe that nobody really understands my problem, no? Mounting it manually works fine by using "mount -a", but when I try to mount it automatically by rebooting the system, it does not work.
11:15 Sebbo2 "GlusterD did start too early. GlusterD is configured to start after network.target. But network.target in systemd only guarantees that the network management stack is up; it doesn't guarantee that the network devices have been configured and are usable (Ref [1]). This means that when GlusterD starts, the network is still not up and hence GlusterD will fail to resolve bricks." https://www.gluster.org/pipermail/gluster-users/2015
11:15 ndevos Sebbo2: well, the way you mount is not really supported anymore (and just happens to work), but you would indeed run into the same problem when you do it correctly
11:16 ndevos Sebbo2: the _netdev mount option is not used on Ubuntu, iirc, and it has some other way of doing the mount after glusterd and the network have started
11:16 ndevos @_netdev
11:16 glusterbot ndevos: The mount-option _netdev is checked on RHEL based distributions in /etc/rc.sysinit and /etc/init.d/netfs, in Fedora systemd handles it. Older versions of /sbin/mount.glusterfs warn that it ignores the _netdev option, which is what it should do, so this warning has been silenced with bug 827121 (in glusterfs-3.4).
11:17 ndevos well, no hint on what to use on Ubuntu :-/
11:17 Sebbo2 ndevos: Ok, and what's the officially supported mount method with more than two servers and different bricks?
11:18 ndevos Sebbo2: in /etc/fstab you would have: <gluster-server>:/<volume> /path/to/mount/point glusterfs _netdev 0 0
11:18 ndevos well, the _netdev can be replaced by defaults on Ubuntu, or put whatever Ubuntu needs there
11:19 Sebbo2 ndevos: Ok, that's mounting a volume via one server. What if this server is down, but some other server would also provide this volume? It wouldn't be mounted
11:20 ndevos Sebbo2: right, that is only an issue during the mount process, once mounted the client is smart enough to connect to all the servers
11:21 ndevos Sebbo2: in order to try other servers for mounting, you can use the "mount -o backup-volfile-server=<2nd-server> -t glusterfs ..." notation
11:21 ndevos Sebbo2: check /sbin/mount.glusterfs for the exact option, it's a shell script so you can just 'less' ir
11:21 ndevos *it
11:23 Sebbo2 ndevos: Like that? server01.example.com:/gfsvbackup /mnt/backup glusterfs defaults,_netdev,backupvolfile-server=server02.example.com 0 0
11:24 ndevos Sebbo2: yes, like that, if you checked the backupvolfile-server option in the mount.glusterfs script :)
11:24 ndevos Sebbo2: the "defaults" is now a little redundant in those options though
11:25 Sebbo2 ndevos: And this, if there are some more backup servers? server01.example.com:/gfsvbackup /mnt/backup glusterfs defaults,_netdev,backupvolfile-server=server02.example.com:server03.example.com 0 0
11:25 ndevos and, you need to find the way how Ubuntu handles _netdev, still
11:26 ndevos Sebbo2: I think there is a -server and a -servers option, check the script and try it ;-)
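For reference, the two spellings being discussed, roughly as they appear in 3.7-era mount.glusterfs (worth verifying in the script itself, as ndevos suggests); hostnames are the ones from the fstab examples above.

    mount -t glusterfs -o backup-volfile-server=server02.example.com server01.example.com:/gfsvbackup /mnt/backup
    mount -t glusterfs -o backup-volfile-servers=server02.example.com:server03.example.com server01.example.com:/gfsvbackup /mnt/backup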
11:27 Sebbo2 ndevos: Here we go: _netdev - this is a network device, mount it after bringing up the network. Only valid with fstype nfs. --- Not useful for GlusterFS ;)
11:27 glusterbot Sebbo2: -'s karma is now -359
11:28 Sebbo2 Karma? What? :o
11:28 ndevos that poor -
11:28 Sebbo2 +++
11:28 glusterbot Sebbo2: +'s karma is now 3
11:28 Sebbo2 fixed karma :P :D
11:28 ndevos Sebbo2: I'm pretty sure Ubuntu has a way to do _netdev like things too, I just dont know it :)
11:30 LinkRage joined #gluster
11:32 shyam joined #gluster
11:33 Sebbo2 Uhm... This is my volume: https://da.gd/N8WZ Unfortunately, it's currently everything on the same server, yes, but anyway: Shouldn't I just need to mount "gfsvbackup"? Like this:  p-v00283.example.com:/gfsvbackup /mnt/backup glusterfs defaults,_netdev 0 0
11:33 glusterbot Title: #408737 Fedora Project Pastebin (at da.gd)
11:34 Sebbo2 The point is that you only need to know the volume name and not the brick path, right? :o
11:34 ndevos yes, mounting like p-v00283.example.com:/gfsvbackup looks good to me
11:34 ndevos or, you can use localhost
11:35 ndevos if it fails, you should have a log file like /var/log/glusterfs/mnt-backup.log
11:37 Sebbo2 Ah, I believe I understand the logic of the volume file now. :D On the GlusterFS server(s), I need to specify my volume(s), and when a client tries to mount a volume, the client retrieves the bricks from this volume file, right?
11:41 ndevos Sebbo2: when a client mounts, it connects to glusterd on the storage server, glusterd then hands the current .vol file and the client uses that to connect to the bricks
11:41 Sebbo2 Ah, ok. I totally misunderstood the meaning of this file :(
11:44 jkroon joined #gluster
11:45 muneerse2 joined #gluster
11:45 ndevos well, really old versions required you to write+sync the .vol files, it's become a little more user friendly :D
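In other words, the client only needs the server name and the volume name; the volfile it uses is generated by glusterd on the server. A hedged sketch of how to see that in practice (the paths shown are the usual defaults, not confirmed in the log):

    mount -t glusterfs p-v00283.example.com:/gfsvbackup /mnt/backup
    # the generated client volfiles live on the servers under:
    ls /var/lib/glusterd/vols/gfsvbackup/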
11:47 kkeithley post-factum, Sebbo2: launchpad won't do builds for EOL releases. And packages for EOL releases drop off automatically; I have zero control over that.
11:48 kkeithley @forget ppa
11:48 glusterbot kkeithley: The operation succeeded.
11:48 Sebbo2 kkeithley: Ah, ok. Thanks for this info!
11:49 kkeithley @learn ppa as The official glusterfs packages for Ubuntu are available here: 3.6: http://goo.gl/XyYImN, 3.7: https://goo.gl/aAJEN5, 3.8: https://goo.gl/gURh2q -- See more PPAs for QEMU with GlusterFS support, and GlusterFS QA releases at https://launchpad.net/~gluster -- contact semiosis with feedback
11:49 glusterbot kkeithley: The operation succeeded.
11:50 johnmilton joined #gluster
11:50 jkroon does running "gluster volume set gv_home cluster.self-heal-daemon off" actually stop the shd processes?
11:51 Sebbo2 ndevos: How does the vol file need to look? Mine looks like this and does not work: https://da.gd/iP0d Reason at mounting: 0-mgmt: failed to fetch volume file (key:/gfsvbackup)
11:51 glusterbot Title: #408745 Fedora Project Pastebin (at da.gd)
11:52 derjohn_mob joined #gluster
11:53 jkroon after i ran that the following processes still running on each server (slight variances):  /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/lib/glusterd/glustershd/run/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/a12a3bf0f93699117b55cc8ef1ea30ec.socket --xlator-option *replicate*.node-uuid=b139ab6c-890c-4134-a882-c5e1454a69e5
11:53 skoduri joined #gluster
11:53 jkroon can I safely kill those processes?
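A hedged sketch of how to check what glusterd thinks of the self-heal daemon after the option is turned off, before resorting to killing processes by hand (volume name from jkroon's command; note that a single glustershd process serves all replicated volumes on a node, so it may legitimately stay running for other volumes):

    gluster volume set gv_home cluster.self-heal-daemon off
    gluster volume status gv_home      # the "Self-heal Daemon" rows show whether shd is still considered online
    ps ax | grep glustershd            # the shared shd process may remain for other volumes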
11:54 kkeithley and I'd be using one of the LTS releases: Trusty (14.04) or Xenial (16.04), not the short lifetime releases live vivid and wiley
11:55 kkeithley s/live/like/
11:55 glusterbot kkeithley: Error: I couldn't find a message matching that criteria in my history of 1000 messages.
11:55 jkroon and I'm seeing a brick process consuming like 1000% (10 cores) worth of CPU sporadically.
11:55 jkroon which is causing some problems.
11:56 kkeithley and I'd be using one of the LTS releases: Trusty (14.04) or Xenial (16.04), not the short lifetime releases like vivid and wiley
11:56 kkeithley glusterbot--
11:56 glusterbot kkeithley: glusterbot's karma is now 10
11:56 Sebbo2 kkeithley: Yeah, usually, we're doing this, but some servers were updated to a wrong version by someone. I've already updated it to 16.04 LTS
11:59 jkroon @ that time the brick processes on other servers are using "normal" amounts of CPU (50-150% as measured by top)
11:59 kkeithley @forget ppa
11:59 glusterbot kkeithley: The operation succeeded.
12:00 kkeithley learn ppa as  The official glusterfs packages for Ubuntu are available here: 3.6: http://goo.gl/XyYImN, 3.7: https://goo.gl/aAJEN5, 3.8: https://goo.gl/gURh2q
12:00 glusterbot Title: glusterfs-3.6 : “Gluster” team (at goo.gl)
12:00 kkeithley @learn ppa as  The official glusterfs packages for Ubuntu are available here: 3.6: http://goo.gl/XyYImN, 3.7: https://goo.gl/aAJEN5, 3.8: https://goo.gl/gURh2q
12:00 glusterbot kkeithley: The operation succeeded.
12:00 kujeger joined #gluster
12:01 Sebbo2 Are there any examples for the volume file?
12:02 AdStar Is there away to pause a heal operation?
12:02 jkroon AdStar, not that I'm aware of.  what are you seeing?  why do you need to pause it?
12:03 AdStar I have a 2 node replication.. my node1 ctdb/samba crashed and won't restart. but it's healing with node2. I need to reboot it. :(
12:03 AdStar to bring shares back online on node1
12:04 jkroon which way is the heal ... node2 => node1?
12:04 AdStar It's replicating 41TB and has only done about 18TB.. so it's going to take days... node1 => node2
12:04 AdStar node2 was blank..
12:04 jkroon what exactly is not starting?
12:05 AdStar ctdb init event times out after 30 seconds, but its tdb is about 4GB so I can't get it to start...
12:05 jkroon perhaps we can rather get that sorted out for you rather than trying to interrupt the heal ... (a reboot should just resume the heal after but I wouldn't risk it if I can help it)
12:05 jkroon ctdb?
12:06 wadeholler joined #gluster
12:06 AdStar if I could figure out how to do a manual vacuum of the smbXsrv_open_global.tdb.0 in /var/lib/ctdb/ then it "should" start
12:09 jkroon https://ftp.samba.org/pub/unpacked/ctdb/doc/ctdb.1.html - if I understand that correctly a simple "ctdb vacuum" should do the trick.
12:09 glusterbot Title: ctdb (at ftp.samba.org)
12:10 AdStar yeah I know but ctdb won't start to run that. anyways, I was hoping I could pause the heal. I've been down for 2 days now; things were working really well, but the /var/log/glusterfs/bricks/ log file filled up my disk
12:11 AdStar lots of these
12:11 AdStar [2016-08-14 23:17:57.247040] W [dict.c:1282:dict_foreach_match] (-->/lib64/libglusterfs.so.0(dict_foreach_match+0x5c) [0x7f7b7caa9c1c] —>/usr/lib64/glusterfs/3.7.11/xlator/features/index.so(+0x3980) [0x7f7b6cc64980] —>/lib64/libglusterfs.so.0(dict_foreach_match+0xe3) [0x7f7b7caa9ca3] ) 0-dict: dict|match|action is NULL [Invalid argument]
12:11 glusterbot AdStar: ('s karma is now -148
12:11 AdStar I'm not sure if that is an issue or not :(
12:11 jkroon i'm not the right person to answer that question.
12:12 jkroon what's the error you get when running ctdb vacuum?
12:12 AdStar ctdb client socket is loaded... ctdb needs to be running to run ctdb vacuum... but I can't get it to run :(
12:12 jkroon oh, daemon is dead.  ps axf | grep ctdb?
12:13 AdStar yeah, not running, checked that. it's ok, I'm just going to live with the downtime unless I can pause the heal :D
12:14 AdStar but the log entry I wouldn't mind being able to sort out..
12:14 Sebbo2 lol. Fixed the mounting problem with the "failed to fetch volume file" error by allowing insecure connections: gluster volume set <VOLUME-NAME> server.allow-insecure on
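For completeness: the volume-level option Sebbo2 used is usually paired with a glusterd-level setting that covers the volfile fetch itself when clients connect from unprivileged ports. A sketch of the usual pairing (glusterd restart assumed):

    gluster volume set gfsvbackup server.allow-insecure on
    # and in /etc/glusterfs/glusterd.vol on each server:
    #   option rpc-auth-allow-insecure on
    # then restart glusterd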
12:14 B21956 joined #gluster
12:18 side_control joined #gluster
12:22 wadeholler joined #gluster
12:27 jkroon AdStar, you should be able to just reboot, the heal should just continue again when the brick comes back.
12:27 plarsen joined #gluster
12:27 jkroon performance.cache-size - can anyone explain how this config option works?  By default it seems to have two values, 32MB and 128MB (gluster volume get ${volname} all | grep cache-size)
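The two defaults jkroon sees come from performance.cache-size being honoured by more than one translator (io-cache and quick-read ship with different defaults); setting the option explicitly applies a single value. A hedged example with an illustrative size:

    gluster volume get gv_home all | grep cache-size       # shows the per-translator defaults (32MB, 128MB)
    gluster volume set gv_home performance.cache-size 256MB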
12:27 hchiramm joined #gluster
12:28 unclemarc joined #gluster
12:40 wadeholler joined #gluster
12:41 wadeholler joined #gluster
12:44 julim joined #gluster
13:04 rouven joined #gluster
13:04 julim joined #gluster
13:07 luis_silva joined #gluster
13:09 MessedUpHare joined #gluster
13:15 jkroon i'm seeing serious CPU spikes for glusterfs ... as in CPU going to 1500% on a machine with 24 cores (i.e., theoretically we can go to 2400%).  the IO thread count is at 16, so I'm assuming all the threads are spinning out of control.
13:15 jkroon or that we're simply not serving fast enough.
13:16 jkroon even so I'm not seeing crazy high IO load using iostat on the underlying block devices.
13:21 harish_ joined #gluster
13:25 nishanth joined #gluster
13:41 jkroon ok, so i've got new weirdness, gluster volume heal ${volname} info split-brain lists a file as split, but then heal .. latest-mtime states failure:  File not in split-brain.
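For reference, the CLI forms involved in what jkroon describes (the policy-based resolution commands exist from 3.7 on; the path argument is the one printed by the info command):

    gluster volume heal gv_home info split-brain
    gluster volume heal gv_home split-brain latest-mtime <path-as-listed>
    gluster volume heal gv_home split-brain source-brick <host:/brick/path> <path-as-listed>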
13:41 wadeholler joined #gluster
13:46 hagarth joined #gluster
13:49 miticojo joined #gluster
13:51 bowhunter joined #gluster
14:04 kujeger left #gluster
14:11 jkroon i've got processes stuck on FUSE with calls to fuse_flush+0xfa/0x130 in the kernel; the fd being passed to sys_close() is that of an apache log file ... which I take it implies the kernel executes a flush() on files when they get closed?  why would a flush() on gluster not complete?
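A hedged diagnostic sketch for this kind of hang: statedumps of the bricks and of the FUSE client show which file operations are pending (the dump location is the usual default and may differ per build):

    gluster volume statedump gv_home          # brick-side state
    kill -USR1 <pid-of-the-fuse-client>       # SIGUSR1 makes the client write its own statedump
    ls /var/run/gluster/*.dump.*              # look for pending FLUSH fops in the dumps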
14:31 ZachLanich joined #gluster
14:36 shyam joined #gluster
14:37 wushudoin joined #gluster
14:40 hagarth joined #gluster
14:47 Sebbo2 I've created a startup script for automatically mounting the GlusterFS entries in /etc/fstab. I'm not sure why the LSBInitScript isn't working with the same code, but /etc/rc.local does the job. The option "_netdev" isn't supported anymore (yet) in Ubuntu 16.04 LTS and newer, and because of that you have to find your own solution to mount shared network storage like NFS, CIFS and GlusterFS storage.
14:47 jkroon Sebbo2, localmount typically runs before networking.
14:47 jkroon gluster requires networking.
14:48 jkroon so netdev really is the way to go ...
14:48 Sebbo2 Can I provide this script to the GlusterFS devs somehow, so that they can implement it - maybe with a better solution? :D
14:48 Sebbo2 jkroon:  netdev isn't supported anymore in Ubuntu 16.04 LTS.
14:49 Sebbo2 jkroon: Yes, I know, and because of that I've configured the LSBInitScript to only mount if the network and a few more things are up
14:49 Sebbo2 But it didn't work...
14:50 jkroon Sebbo2, i'm neither an ubuntu nor glusterfs dev.  just a concerned linux user - if ubuntu nuked that support, and hasn't replaced it with another mechanism, I'm soon going to start getting some angry people knocking my door down to "fix" the situation.
14:50 jkroon (some of my clients are ubuntu users)
14:51 Sebbo2 jkroon: I was looking for a solution for more than 6 hours today and my mentioned script was now the only working solution... ;(
14:52 Sebbo2 Let me just test one more thing
14:54 jkroon http://unix.stackexchange.com/questions/169697/how-does-netdev-mount-option-in-etc-fstab-work
14:54 glusterbot Title: systemd - How does _netdev mount option in /etc/fstab work? - Unix & Linux Stack Exchange (at unix.stackexchange.com)
14:54 jkroon you're right.
14:54 jkroon need a per-fs dependent mechanism now :(
14:54 jkroon no longer a netmount that mounts everything marked with _netdev it would seem
14:56 Sebbo2 My last test wasn't successful. I've tried to get it working by using the option "vers=3" (which is usually used for NFSv3), but it didn't work as I already thought: http://askubuntu.com/questions/763498/systemd-seems-to-ignore-netdev-option-for-nfs-in-ubuntu-16-04
14:56 glusterbot Title: mount - systemd seems to ignore _netdev option for NFS in Ubuntu 16.04 - Ask Ubuntu (at askubuntu.com)
14:57 jkroon no, it seems they now always try to mount everything, but obviously the gluster one will fail; then there is a fallback init script that does a mount -a -t nfs ... type of thing.  Perhaps you can just add gluster to that list?
15:00 Sebbo2 You mean instead of mounting as filesystem "glusterfs", I should mount it as "nfs"?
15:08 Sebbo2 Nope, does also not work. Even with the vers=3 option
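One approach that often works on systemd-based Ubuntu instead of an rc.local script is a native mount unit that waits for the network and the local glusterd. This is a sketch only: the unit file name must match the mount point, the hostnames are from the earlier fstab examples, the Ubuntu service name is assumed to be glusterfs-server, and any fstab line for the same mount point should be removed first to avoid conflicting with the generator.

    # /etc/systemd/system/mnt-backup.mount
    [Unit]
    Description=GlusterFS mount for /mnt/backup
    Wants=network-online.target
    After=network-online.target glusterfs-server.service

    [Mount]
    What=server01.example.com:/gfsvbackup
    Where=/mnt/backup
    Type=glusterfs
    Options=defaults,backupvolfile-server=server02.example.com

    [Install]
    WantedBy=multi-user.target

    # then: systemctl daemon-reload && systemctl enable --now mnt-backup.mount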
15:09 johnmilton joined #gluster
15:12 jbrooks joined #gluster
15:14 johnmilton joined #gluster
15:20 MessedUpHare joined #gluster
15:20 bkolden joined #gluster
15:23 msvbhat joined #gluster
15:34 nathwill joined #gluster
15:46 kpease joined #gluster
16:05 plarsen joined #gluster
16:14 luis_silva joined #gluster
16:28 luis_silva left #gluster
16:30 rwheeler joined #gluster
16:34 jiffin joined #gluster
16:36 bkolden joined #gluster
16:39 sputnik13 joined #gluster
16:48 jiffin1 joined #gluster
16:51 jiffin joined #gluster
16:52 jiffin2 joined #gluster
16:59 jiffin1 joined #gluster
17:03 jiffin1 joined #gluster
17:06 jiffin1 joined #gluster
17:13 jiffin1 joined #gluster
17:18 jiffin1 joined #gluster
17:24 jiffin joined #gluster
17:31 shyam joined #gluster
17:48 jiffin joined #gluster
17:48 bkolden joined #gluster
17:51 siavash joined #gluster
17:54 [diablo] joined #gluster
17:59 cliluw joined #gluster
18:10 bluenemo joined #gluster
18:17 wushudoin joined #gluster
18:20 bowhunter joined #gluster
18:22 jiffin joined #gluster
18:24 kpease joined #gluster
18:27 jiffin1 joined #gluster
18:31 kpease joined #gluster
18:36 kpease joined #gluster
18:41 jiffin1 joined #gluster
18:42 kpease joined #gluster
18:47 kpease joined #gluster
18:47 jiffin joined #gluster
18:51 nathwill joined #gluster
18:51 jiffin1 joined #gluster
18:52 nathwill joined #gluster
18:58 jiffin1 joined #gluster
19:06 dnunez joined #gluster
19:17 rouven joined #gluster
19:20 johnmilton joined #gluster
19:40 siavash joined #gluster
19:43 edong23 joined #gluster
20:02 nathwill joined #gluster
20:12 Hamburglr joined #gluster
20:13 Hamburglr joined #gluster
20:16 robb_nl joined #gluster
20:23 siavash joined #gluster
20:23 blu_ joined #gluster
20:34 rwheeler joined #gluster
20:57 nathwill joined #gluster
21:13 Andrew___ joined #gluster
21:15 Andrew___ Hello. Very quick question from a complete Gluster newbie: Is there any way to do dual-master asynchronous replication with Gluster? I need master-master Geo-replication.
21:15 JoeJulian not yet
21:15 Andrew___ Is that a planned feature?
21:15 JoeJulian It's been talked about. I wouldn't expect it within the next 6 months though.
21:16 JoeJulian My personal non-developer opinion.
21:16 JoeJulian But I do follow the development pretty closely. It would require a different replication model that's not in production yet.
21:19 Andrew___ I appreciate the quick response. That's exactly what I needed to know. Thanks!
21:31 Alghost_ joined #gluster
21:57 Wizek_ joined #gluster
22:53 plarsen joined #gluster
23:32 bkolden joined #gluster
