IRC log for #gluster, 2015-11-16

All times shown according to UTC.

Time Nick Message
00:00 gzcwnk joined #gluster
00:01 haomaiwa_ joined #gluster
00:02 zhangjn joined #gluster
00:07 hgichon joined #gluster
00:07 gildub_ joined #gluster
00:19 JoeJulian mjrosenb: It does not for hash allocation, just for migration. By default, if the target brick is more full than the source brick, the file will not be moved.
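
    (The question being answered isn't shown; a minimal sketch of the commands
    involved, with a hypothetical volume name "myvol":

        # rebalance migrates existing files between bricks; by default it skips
        # moves whose target brick is fuller than the source brick
        gluster volume rebalance myvol start
        gluster volume rebalance myvol status
        # cluster.min-free-disk reserves headroom on bricks for new file placement
        gluster volume set myvol cluster.min-free-disk 10%
    )
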
00:35 mlhamburg joined #gluster
00:37 ctria joined #gluster
00:55 plarsen joined #gluster
00:57 ctria joined #gluster
01:01 haomaiwa_ joined #gluster
01:07 daMaestro joined #gluster
01:29 Mr_Psmith joined #gluster
01:34 VeggieMeat joined #gluster
01:36 lezo joined #gluster
01:37 jotun joined #gluster
01:37 mlncn joined #gluster
01:37 yawkat joined #gluster
01:37 sc0 joined #gluster
01:37 JPaul joined #gluster
01:38 rideh joined #gluster
01:40 virusuy joined #gluster
01:41 fyxim joined #gluster
01:41 sadbox joined #gluster
01:41 frankS2 joined #gluster
01:43 lh_ joined #gluster
01:48 jermudgeon joined #gluster
01:50 Lee1092 joined #gluster
01:57 m0zes joined #gluster
02:03 DV_ joined #gluster
02:08 gzcwnk anyone in
02:08 gzcwnk ?
02:08 B21956 joined #gluster
02:22 nangthang joined #gluster
02:24 m0zes joined #gluster
02:24 gem joined #gluster
02:45 haomaiwa_ joined #gluster
02:49 6A4AA4JLE joined #gluster
03:02 haomaiwa_ joined #gluster
03:08 kdhananjay joined #gluster
03:13 sakshi joined #gluster
03:29 JoeJulian gzcwnk: yes
03:30 JoeJulian If you'd asked the question you actually wanted answered, I would have answered it instead.
03:51 haomaiwa_ joined #gluster
03:52 bharata-rao joined #gluster
03:55 bkunal joined #gluster
04:01 haomaiwa_ joined #gluster
04:06 rafi joined #gluster
04:07 RameshN joined #gluster
04:07 Manikandan joined #gluster
04:08 Manikandan joined #gluster
04:13 aravindavk joined #gluster
04:20 vimal joined #gluster
04:22 nbalacha joined #gluster
04:27 [7] joined #gluster
04:28 Manikandan_wfh joined #gluster
04:29 ramteid joined #gluster
04:31 itisravi joined #gluster
04:33 itisravi joined #gluster
04:39 Bhaskarakiran joined #gluster
04:43 kanagaraj joined #gluster
04:48 AleksU joined #gluster
04:50 vmallika joined #gluster
04:51 hchiramm_home joined #gluster
04:55 overclk joined #gluster
04:57 itisravi joined #gluster
04:58 itisravi joined #gluster
05:01 haomaiwa_ joined #gluster
05:02 pppp joined #gluster
05:09 ppai joined #gluster
05:12 harish joined #gluster
05:13 shubhendu joined #gluster
05:16 kanagaraj joined #gluster
05:22 kanagaraj joined #gluster
05:22 karnan joined #gluster
05:28 RameshN joined #gluster
05:29 atinm joined #gluster
05:31 R0ok_ joined #gluster
05:34 ndarshan joined #gluster
05:47 kdhananjay joined #gluster
05:50 nishanth joined #gluster
05:53 zhangjn joined #gluster
05:54 zhangjn joined #gluster
05:57 Saravana_ joined #gluster
06:00 hgowtham joined #gluster
06:01 hgowtham_ joined #gluster
06:01 haomaiwa_ joined #gluster
06:19 Bhaskarakiran joined #gluster
06:30 rideh joined #gluster
06:30 Manikandan joined #gluster
06:35 sakshi joined #gluster
06:42 kovshenin joined #gluster
06:44 pppp joined #gluster
06:47 sakshi joined #gluster
06:48 atalur joined #gluster
06:52 sripathi1 joined #gluster
06:57 hchiramm joined #gluster
06:57 arcolife joined #gluster
07:01 haomaiwa_ joined #gluster
07:02 jiffin joined #gluster
07:03 anil joined #gluster
07:04 sakshi joined #gluster
07:06 dusmant joined #gluster
07:10 eryc_ joined #gluster
07:10 sripathi2 joined #gluster
07:10 leucos_ joined #gluster
07:11 delhage_ joined #gluster
07:11 siel_ joined #gluster
07:14 skoduri joined #gluster
07:14 sankarsh` joined #gluster
07:15 rich0dify_ joined #gluster
07:16 pppp joined #gluster
07:16 jbrooks joined #gluster
07:16 spalai joined #gluster
07:18 obnox joined #gluster
07:19 jermudgeon joined #gluster
07:21 frankS2 joined #gluster
07:22 LebedevRI joined #gluster
07:25 jtux joined #gluster
07:39 mhulsman joined #gluster
07:47 mhulsman joined #gluster
07:52 DRoBeR joined #gluster
07:59 ramky joined #gluster
07:59 Philambdo joined #gluster
08:01 haomaiwa_ joined #gluster
08:01 mlhamburg1 joined #gluster
08:08 nbalacha joined #gluster
08:27 kdhananjay joined #gluster
08:28 fsimonce joined #gluster
08:29 ivan_rossi joined #gluster
08:30 lh joined #gluster
08:35 sripathi joined #gluster
08:38 hos7ein joined #gluster
08:39 glafouille joined #gluster
08:49 dusmant joined #gluster
08:56 Manikandan joined #gluster
08:57 nbalacha joined #gluster
08:57 Manikandan joined #gluster
08:59 mlncn joined #gluster
09:00 kovshenin joined #gluster
09:01 haomaiwa_ joined #gluster
09:05 kdhananjay joined #gluster
09:07 gildub_ joined #gluster
09:11 kanagaraj joined #gluster
09:11 muneerse joined #gluster
09:23 Slashman joined #gluster
09:30 dan__ joined #gluster
09:54 kanagaraj joined #gluster
10:01 haomaiwa_ joined #gluster
10:03 aravindavk joined #gluster
10:08 RedW joined #gluster
10:11 dusmant joined #gluster
10:13 deepakcs joined #gluster
10:17 gildub_ joined #gluster
10:20 hgowtham_ joined #gluster
10:21 mhulsman joined #gluster
10:21 frozengeek joined #gluster
10:25 kovshenin joined #gluster
10:27 glafouille joined #gluster
10:43 jwd joined #gluster
10:44 hgowtham_ joined #gluster
11:01 haomaiwa_ joined #gluster
11:02 kovshenin joined #gluster
11:09 jwaibel joined #gluster
11:10 timotheus1 joined #gluster
11:12 gildub_ joined #gluster
11:16 jwd joined #gluster
11:27 jrm16020 joined #gluster
11:41 ppai joined #gluster
11:57 glafouille joined #gluster
12:01 haomaiwang joined #gluster
12:04 rafi1 joined #gluster
12:12 rafi joined #gluster
12:22 kdhananjay joined #gluster
12:28 shubhendu joined #gluster
12:33 vmallika joined #gluster
12:34 plarsen joined #gluster
12:37 ira joined #gluster
12:41 steveeJ is the formula on the disperse feature site correct with (N - R) * C?
12:42 steveeJ as an example, this would yield one usable drive out of three
12:42 steveeJ when 2 replicas are configured
12:43 xavih steveeJ: the formula is correct, but your example is not. N must always be >2*R for a disperse volume
12:44 xavih steveeJ: and saying they are replicas is misleading as each brick contains different data (they are not exact copies like in a replica)
12:45 steveeJ xavih: can I think of R as replicas in a logical way?
12:46 xavih steveeJ: R is the redundancy. It's the number of bricks that you can lose without losing data
12:46 Mr_Psmith joined #gluster
12:46 steveeJ the implementation specifics are not too important for me at this point. I have 3 nodes with 1 brick each, and I'd like to create a volume so that 1 node might fail
12:47 steveeJ oh, that makes more sense!
12:47 xavih steveeJ: then you can create a disperse volume with 3 bricks and one of redundancy
12:47 steveeJ so my example would really be (3 - 1) * C
12:47 xavih steveeJ: yes
12:47 steveeJ then the disperse translator is definitely what I was looking for
12:48 steveeJ xavih: at this point, thank you for implementing it :-)
12:48 xavih steveeJ: yw :)
12:49 steveeJ can I start such a volume with just 1 node and add nodes later?
12:49 steveeJ I have 0 experience using gluster, I'm just reading the docs
12:49 steveeJ starting with R=0, if that makes sense
12:52 rjoseph joined #gluster
12:53 bluenemo joined #gluster
12:53 xavih steveeJ: current implementation of disperse does not allow changing parameters once created
12:53 xavih steveeJ: you can add more bricks, but always in multiples of N
12:54 xavih steveeJ: if you start with a N=3,R=1, you can add more space by adding 3 more bricks
12:54 xavih steveeJ: you cannot change R once the volume is created
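
    (A sketch of the volume discussed above, assuming three hypothetical servers
    with one brick each; with N=3 and R=1 the usable capacity is (N - R) * C = 2 * C,
    and any one brick may be lost without losing data:

        gluster volume create dispvol disperse 3 redundancy 1 \
            server1:/bricks/b1 server2:/bricks/b1 server3:/bricks/b1
        gluster volume start dispvol
    )
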
12:54 nishanth joined #gluster
12:54 bluenemo hi guys. I'll put my gluster setup into production tomorrow morning :) I've got four apache workers now, two in each amazon availability zone. with that, in each AZ there is one gluster node, both replicating and sharing files out via NFS. I've scaled the apache workers to 2CPUs and 8GB RAM each. What do you think I should give the gluster servers? do they need more cpu or more ram? Do you think 2CPUs / 8GB RAM is a good idea here too?
12:55 steveeJ xavih: I see. I'm still contemplating whether to use ceph or gluster and comparing features
13:01 haomaiwa_ joined #gluster
13:01 rafi1 joined #gluster
13:02 poornimag joined #gluster
13:06 julim joined #gluster
13:11 dusmant joined #gluster
13:18 rjoseph joined #gluster
13:25 DRoBeR Hello, folks! Don't know if I researched properly or not... but... Is there any chance that I can create a volume in distributed mode on just 1 brick and later set it to replicate when I add a second, third, etc. brick? Without needing to create a second volume and move the data to it, of course. :)
13:27 chirino joined #gluster
13:27 B21956 joined #gluster
13:29 bluenemo whats the best practice way to restart gluster nodes again?
13:29 bluenemo service gluster stop and reboot? or should I gluster peer remove node first?
13:29 Boemlauw joined #gluster
13:30 jiffin DRoBeR: yes it should work properly
13:30 autostatic joined #gluster
13:33 autostatic I'm running a GlusterFS cluster v. 3.4.2 on Ubuntu 14.04 and the logs get flooded with the following lines while everything seems to run fine otherwise:
13:33 autostatic E [marker.c:2140:marker_removexattr_cbk] 0-datavolume-marker: Numerical result out of range occurred while creating symlinks
13:33 autostatic E [marker.c:2140:marker_removexattr_cbk] 0-datavolume-marker: Numerical result out of range occurred while creating symlinks
13:33 autostatic Oops, I meant
13:34 autostatic E [marker.c:2140:marker_removexattr_cbk] 0-datavolume-marker: No data available occurred while creating symlinks
13:34 autostatic For the second line. Anybody have an idea? Looks a lot like this: https://www.mail-archive.com/gluster-users@gluster.org/msg18749.html
13:35 glusterbot Title: [Gluster-users] gluster 3.6.2 "no data available" error - selinux? (at www.mail-archive.com)
13:35 spalai left #gluster
13:36 Boemlauw Hi guys, hope I can just barge in and ask a question. We are a bit lost here.
13:36 Boemlauw We have the problem that a glusterfs fusemount process crashes all the time when we access a certain  directory within this filesystem. How can I troubleshoot this problem?
13:37 Boemlauw GlusterFS v3.6.1
13:38 unclemarc joined #gluster
13:42 kovshenin joined #gluster
13:43 DV joined #gluster
13:45 mlncn joined #gluster
13:45 rjoseph joined #gluster
13:55 Pintomatic joined #gluster
13:56 Lee1092 joined #gluster
14:01 DV joined #gluster
14:14 arcolife joined #gluster
14:17 haomaiwa_ joined #gluster
14:18 jmarley joined #gluster
14:23 vimal joined #gluster
14:24 aravindavk joined #gluster
14:24 dgandhi joined #gluster
14:26 dgandhi joined #gluster
14:27 dgandhi joined #gluster
14:29 dgandhi joined #gluster
14:30 dgandhi joined #gluster
14:31 dgandhi joined #gluster
14:33 dgandhi joined #gluster
14:35 dgandhi joined #gluster
14:37 dgandhi joined #gluster
14:38 dgandhi joined #gluster
14:39 dgandhi joined #gluster
14:40 shyam joined #gluster
14:41 hamiller joined #gluster
14:43 RedW joined #gluster
14:44 steveeJ xavih: would N >= 2*R work too?
14:45 steveeJ stressing the equality here, so I could start with N=2,R=1
14:52 ira joined #gluster
14:55 dusmant joined #gluster
14:56 Boemlauw DMRT4ever
14:59 julim joined #gluster
15:01 haomaiwa_ joined #gluster
15:03 atinm joined #gluster
15:03 lpabon joined #gluster
15:17 skylar joined #gluster
15:18 frozengeek joined #gluster
15:19 skylar1 joined #gluster
15:23 bennyturns joined #gluster
15:31 Trefex joined #gluster
15:35 jmarley joined #gluster
15:47 maserati joined #gluster
15:48 gem joined #gluster
15:48 sankarshan joined #gluster
15:49 bowhunter joined #gluster
15:51 jiffin joined #gluster
15:53 Merlin_ joined #gluster
15:56 morse joined #gluster
16:00 klaxa joined #gluster
16:01 haomaiwa_ joined #gluster
16:06 mlncn joined #gluster
16:09 cholcombe joined #gluster
16:11 JoeJulian DRoBeR: Yes, you can add bricks to the volume to increase replication and/or capacity. See https://joejulian.name/blog/glusterfs-replication-dos-and-donts/ about adding replicas.
16:11 glusterbot Title: GlusterFS replication do's and don'ts (at joejulian.name)
16:13 DRoBeR JoeJulian, thank you, mate. I finally got a way myself. I'll take a look at it anyway since you took your time to get it. Thanks again!
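
    (A hedged sketch of what JoeJulian describes, with hypothetical names: a
    single-brick volume can be converted to replica 2 by adding a brick while
    raising the replica count, then letting self-heal copy the data onto it:

        gluster volume add-brick myvol replica 2 server2:/bricks/b1
        gluster volume heal myvol full      # populate the new replica
        gluster volume heal myvol info      # watch healing progress
    )
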
16:13 JoeJulian bluenemo: Assuming your distro doesn't do something silly like stop the network before it kills processes, a reboot should be fine. If it does do those things, killing glusterfsd is desired in order to properly close the tcp connections.
16:14 bluenemo JoeJulian, ah hi :) Yeah i did a servcie gluster stop, which didnt stop everything, but after a reboot everything just came back on fine. giving it some mount options for waiting for mountability on startup helped too
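
    (Neither the shutdown steps nor the mount options are spelled out above; a
    rough sketch of both, with hypothetical names and assuming the Ubuntu packaging:

        # on a server, before rebooting: stop glusterd and the brick processes
        # so their TCP connections close cleanly
        service glusterfs-server stop
        pkill glusterfsd
        # on a client, an /etc/fstab entry that tolerates the network coming up late:
        server1:/myvol  /mnt/myvol  glusterfs  defaults,_netdev,backupvolfile-server=server2  0  0
    )
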
16:15 JoeJulian autostatic: marker (back in 3.4) is only used for geo-replication. If you're not doing that, turn off marker: gluster volume set $vol geo-replication.indexing off
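
    (The errors autostatic pasted name the volume "datavolume", so the suggestion
    above would translate to:

        gluster volume set datavolume geo-replication.indexing off
    )
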
16:18 shyam left #gluster
16:20 JoeJulian steveeJ: When you create a disperse volume, you define it to span a certain number of bricks. You can then add capacity to that disperse volume by adding bricks in multiples of that, which (if I'm understanding correctly) will create a distribute-disperse volume.
16:23 JoeJulian lpabon has just announced heketi, a RESTful management interface for GlusterFS: https://github.com/heketi/heketi
16:23 glusterbot Title: heketi/heketi · GitHub (at github.com)
16:24 JoeJulian Now I need to figure out if this helps my project or not... :D
16:27 Danishman joined #gluster
16:31 mikemol JoeJulian: If I create a dispersion volume with 3 bricks, replica 2, that means I have two bricks' worth of parity for every brick of data, right?
16:31 mikemol If I then add three more bricks, do I maintain replica 2, or does doubling my number of bricks result in replica 4?
16:32 mikemol I'd want it to remain replica 2, but it'd depend on how the xlators are layered, and I don't need surprises...
16:32 JoeJulian You maintain /redundancy/ 2.
16:32 mikemol Right, what I meant.
16:33 mikemol (It's rather difficult to keep the terminology straight for all of gluster's different modes of operation...)
16:33 JoeJulian And, in fact, the redundancy on a disperse volume cannot change ever.
16:33 JoeJulian yeah, I hear ya. I just specified to ensure clarity.
16:34 steveeJ the big gotcha about the disperse volumes is that R represents the number of nodes that may fail
16:34 * mikemol realizes his creation of a 10-brick, 2-redundancy volume last week was probably a mistake.
16:34 steveeJ it is *not* the replica count
16:34 mikemol I should have created a 3-brick, 2-redundancy volume and added either six or nine bricks to it.
16:34 * mikemol nods
16:34 _feller joined #gluster
16:35 mikemol I think of it as akin to RAID, so my 2R volume is supposed to behave as RAID6.
16:35 JoeJulian You can see how it works by creating a volume and looking at the fuse vol file under /var/lib/glusterd/vols for that volume.
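
    (For example, with a hypothetical volume name, and noting that the exact
    filename varies slightly between releases, the client-side graph can be read
    straight off a server:

        less /var/lib/glusterd/vols/myvol/myvol-fuse.vol
        less /var/lib/glusterd/vols/myvol/trusted-myvol-fuse.vol
    )
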
16:36 * mikemol ponders.
16:37 mikemol I should be able to create a new volume using a subset of the same bricks, do the expansion, and move the data from one volume to another within the same cluster.
16:38 JoeJulian seems reasonable
16:39 pppp joined #gluster
16:39 pdrakeweb joined #gluster
16:40 * mikemol giggles
16:40 pdrakeweb joined #gluster
16:41 steveeJ mikemol: the bricks have to be removed from the first volume first, right?
16:41 mikemol I just imagined a SATA disk enclosure that exposes the drive inside as an individual gluster brick.
16:42 mikemol steveeJ: No; you can have a single brick participate in multiple volumes.
16:42 mikemol You simply specify a different path for the data to be stored when you add the brick to another volume.
16:43 mikemol The procedure I described is effectively an in-place restriping with a bunch of network chatter. Only problem is it's not transparent, as your volume name changes, and you don't want to do it while using the data...
16:44 JoeJulian If your brick's host filesystem is mounted at /mnt/sda (for instance) you would typically create a brick at /mnt/sda/brick1. You can also have a brick for a different volume at /mnt/sda/brick2.
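
    (A sketch of that layout with hypothetical names: two volumes whose bricks
    share the same mounted filesystem, kept apart only by their brick directories:

        gluster volume create vol1 server1:/mnt/sda/brick1 server2:/mnt/sda/brick1
        gluster volume create vol2 server1:/mnt/sda/brick2 server2:/mnt/sda/brick2
    )
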
16:44 * mikemol nods
16:45 mikemol Here, we tend to use /var/gluster/brick/volname.
16:45 mikemol But something under /mnt might work better. Occasionally hit path-too-long issues in a remote mount of one of our mailbox servers.
16:46 JoeJulian I use /srv/gluster/volume/devname, ie. /srv/gluster/cinder/sda
16:46 steveeJ JoeJulian: wouldn't that effectively create two bricks? or is there such a thing as a path within a brick that can be configured?
16:47 JoeJulian steveeJ: it does create two bricks that share a filesystem. This can be confusing when one of your volumes doesn't show the free space you expect because the other volume is using it, but for that migration purpose, it should be fine.
16:47 steveeJ that's totally fine!
16:49 mikemol I'm also considering using it where I have layered redundancy. I've got a scenario where I have a replica-2 gluster volume holding VM images, and those VM images are themselves dispersion volume bricks for another application.
16:49 JoeJulian When I started here at IO, they had done that. Cinder, glance, and nova all had bricks on the same filesystems with replication defined differently for each volume. It was a mess as some filesystems filled up much faster than others.
16:50 mikemol I don't need replica-2 underneath bricks in a redundancy-2 setup, so I'm thinking about creating a simple replica-1 volume on top of the same bricks for data with other redundancy options.
16:50 steveeJ can I really only add multiples of N to a dispersion volume?
16:51 JoeJulian If I was (and I do) going to use the same drives to store multiple volumes in an ongoing basis, I use lvm to carve out bricks. If I need to resize a volume, I can just add extents to the bricks that need to grow. This allows predictable use while allowing some ability to change dynamically.
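
    (A rough sketch of that LVM approach; device, volume group and sizes are
    hypothetical. One logical volume per brick, grown later if the gluster volume
    it backs needs more room:

        lvcreate -L 500G -n brick_cinder vg_bricks
        mkfs.xfs -i size=512 /dev/vg_bricks/brick_cinder
        mount /dev/vg_bricks/brick_cinder /srv/gluster/cinder
        # later, to grow just this brick:
        lvextend -L +100G /dev/vg_bricks/brick_cinder
        xfs_growfs /srv/gluster/cinder
    )
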
16:51 JoeJulian Yes, steveeJ, multiples of N.
16:52 steveeJ I should read about the math behind erasure coding
16:52 JoeJulian You can work on your post-doc while you're at it. ;)
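
    (Continuing the hypothetical 3-brick, redundancy-1 example: growing it means
    adding another full set of N bricks, which turns it into a distributed-disperse
    volume, and a rebalance then spreads existing data across the new set:

        gluster volume add-brick dispvol \
            server4:/bricks/b1 server5:/bricks/b1 server6:/bricks/b1
        gluster volume rebalance dispvol start
    )
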
16:52 ToMiles joined #gluster
16:53 ToMiles any tips for upgrading 3.7.2 -> 3.7.6 on ubuntu?
16:54 ToMiles upgraded one brick and it's "Peer rejected" now
16:55 JoeJulian I look in /var/log/glusterfs/etc-glusterfs-glusterd.vol.log to see why it's rejecting.
16:58 steveeJ is it also possible to remove bricks from a disperse volume?
16:59 ToMiles Version of Cksums home differ
17:00 ToMiles did try Resolving_Peer_Rejected procedure thinking that would fix it
17:00 JoeJulian steveeJ: in those same multiples, my understanding is yes.
17:01 7GHABNWZP joined #gluster
17:01 mikemol What's the process for replacing a brick? Does it have to be done in multiples of N?
17:02 JoeJulian ToMiles: if the cksums differ, I usually rsync /var/lib/glusterd/vols from one server to the other.
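
    (A sketch of that fix with hypothetical hostnames: copy the volume definitions
    from a healthy peer onto the rejected one and restart its management daemon:

        # on the rejected peer (service name as packaged for Ubuntu)
        service glusterfs-server stop
        rsync -av good-peer:/var/lib/glusterd/vols/ /var/lib/glusterd/vols/
        service glusterfs-server start
        gluster peer status
    )
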
17:03 JoeJulian mikemol: good question. I don't see why it would have to be. The replaced brick would be built from the remaining bricks in the disperse subvolume.
17:04 ToMiles or should I first upgrade them all?
17:04 JoeJulian I'd upgrade them all and see if that fixes it.
17:04 steveeJ mikemol: sounds like I need to play around with my first volume before I start to put data on it :)
17:05 JoeJulian Then blog about your findings and become internet famous.
17:06 ToMiles Is that general advice too for minor version upgrades?
17:06 mikemol steveeJ: That would be wise.
17:08 mikemol JoeJulian: So, in reality, I should be able to remove a number of bricks not equal to N, so long as I don't exceed my redundancy or replica. It'd be akin to RAID's degraded mode.
17:08 mikemol And like RAID's degraded mode, I better darn well get bricks back in place...
17:09 mikemol And on that note, the only thing I've encountered about dispersion volumes in 3.7 that bugs me is the lack of support from the self-heal daemon. Near as I can figure, I have to read every byte of every file to be certain the volume is fully healed.
17:10 JoeJulian Well, not through remove-brick you couldn't. You can replace-brick to a different machine, or you can just kill the brick process for the brick on the drive you're replacing, replace it, make the filesystem and stuff, then start it again with "gluster volume start $vol force" and it will recreate the volume-id and whatnot.
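
    (A sketch of that second approach with hypothetical names: kill only the brick
    process for the failed drive, rebuild its filesystem, then force-start the
    volume so glusterd respawns the brick and healing repopulates it:

        gluster volume status myvol          # lists the PID of each brick process
        kill <pid-of-the-failed-brick>
        # swap the drive, recreate and remount the filesystem at the brick path, then:
        gluster volume start myvol force
        gluster volume heal myvol full
    )
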
17:11 JoeJulian mikemol: what? No way. That makes no sense.
17:12 mikemol JoeJulian: I had two bricks' glusterd process sigsegv on me Saturday. Started them back up, seemed fine. But I couldn't find a clear way to know whether or not the volume had really recovered.
17:12 ToMiles JoeJulian: rsync fixed it, maybe the version difference caused a checksum diff before when regenerating from an empty /var/lib/glusterd
17:13 JoeJulian probably
17:13 JoeJulian mikemol: gluster volume heal $vol info didn't tell you anything?
17:14 ToMiles Thanks for the help, now on to upgrading the other peers
17:14 mikemol It didn't tell me anything was probably healing. And there didn't seem to be any outstanding tasks. Which was absurd, under the circumstances.
17:15 mikemol I was modifying performance options while the volume was actively being read from and written to. (bareos was streaming from a spooling file and into a tape file on the same gluster volume.)
17:18 JoeJulian I can think of a couple of ways that it would actually be possible that you wouldn't have had any need to heal in that scenario.
17:18 JoeJulian Did you file a bug for the segfaults?
17:18 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
17:21 mhulsman joined #gluster
17:43 tomatto joined #gluster
17:45 ctria joined #gluster
17:50 josh__ joined #gluster
17:51 josh__ can anyone tell me if there is a benefit to mounting a gluster volume as glusterfs instead of nfs?  if so, what is the benefit?  thanks!
17:52 ivan_rossi left #gluster
17:54 shyam joined #gluster
17:58 kovshenin joined #gluster
17:59 JoeJulian josh__: nfs does not give you HA, nor consistency. A fuse mount does. Even better, if you are the application developer, would be to use the api (libgfapi) which avoids the context switches of fuse while offering all the advantages.
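
    (For comparison, with hypothetical names, the two client-side mounts being
    weighed look like this:

        # FUSE/native client: connects to every brick and fails over on its own
        mount -t glusterfs server1:/myvol /mnt/myvol
        # built-in gluster NFS server (NFSv3): one server in the data path, no automatic failover
        mount -t nfs -o vers=3 server1:/myvol /mnt/myvol
    )
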
18:01 haomaiwa_ joined #gluster
18:03 ndevos ping purpleidea: do you know if arbiter volumes are supported in your puppet module
18:06 Rapture joined #gluster
18:24 autostatic JoeJulian: Thanks for the pointer!
18:25 purpleidea ndevos: i've never tested it, but someone added it: https://github.com/purpleidea/puppet-gluster/commit/69d03aa824d7daf00a6a7c7bbfc7ec9ef159d394
18:25 glusterbot Title: add support for creating gluster with an arbiter · purpleidea/puppet-gluster@69d03aa · GitHub (at github.com)
18:25 ndevos purpleidea: ok, thanks!
18:26 purpleidea ndevos: come to think of it, this is also missing a small patch to gluster::simple to pass through the arbiter variable... patches welcome, although it should already work without gluster::simple atm.
18:27 purpleidea or hmmm
18:27 purpleidea idk
18:27 purpleidea anyways i haven't tested it
18:27 ndevos purpleidea: uh, yeah, but I'm trying to stay awat from ruby (and puppet)
18:27 ndevos *away even
18:29 josh__ JoeJulian: thanks. this might be a dumb question, but if i mount using glusterfs and point to a specific host in the cluster, is it smart enough to switch hosts if that host goes down or do i need to create floating ip(s) to point at?
18:30 dblack joined #gluster
18:31 Telsin joined #gluster
18:32 mlncn joined #gluster
18:35 nage joined #gluster
18:36 JoeJulian ~mount server | josh__
18:36 glusterbot josh__: (#1) The server specified is only used to retrieve the client volume definition. Once connected, the client connects to all the servers in the volume. See also @rrdns, or (#2) One caveat is that the clients never learn of any other management peers. If the client cannot communicate with the mount server, that client will not learn of any volume changes.
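
    (Besides round-robin DNS, one common way to soften that volfile-fetch
    dependency is to hand the client more than one management server to try at
    mount time; server names are hypothetical, and the option name has varied a
    little across releases (backupvolfile-server in older mount.glusterfs):

        mount -t glusterfs -o backup-volfile-servers=server2:server3 server1:/myvol /mnt/myvol
    )
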
18:46 josh__ thank you
18:51 Gill_ joined #gluster
18:58 Telsin left #gluster
19:01 DV joined #gluster
19:01 haomaiwa_ joined #gluster
19:03 Merlin_ joined #gluster
19:14 RedW joined #gluster
19:21 mikemol JoeJulian: No, I did not. I was remotely responding to an emergency at the same time as being responsible for both my kids. But I do need to find out how best to report those.
19:22 mikemol (the segfaults)
19:31 DV joined #gluster
19:47 mhulsman joined #gluster
19:51 mlncn joined #gluster
19:58 F2Knight joined #gluster
19:58 PinkFrood joined #gluster
19:58 _NiC joined #gluster
19:58 PaulePan1er joined #gluster
19:58 abyss^ joined #gluster
19:58 yoavz joined #gluster
20:01 haomaiwa_ joined #gluster
20:16 mlhamburg1 joined #gluster
20:19 F2Knight joined #gluster
20:25 ahino joined #gluster
20:33 kovshenin joined #gluster
20:35 DV joined #gluster
20:51 JoeJulian mikemol: been there, done that, from my phone in the middle of the mall. Ugh.
21:01 haomaiwa_ joined #gluster
21:10 gildub_ joined #gluster
21:11 bowhunter joined #gluster
21:15 Merlin_ joined #gluster
21:19 siel joined #gluster
21:20 F2Knight joined #gluster
21:47 m0zes joined #gluster
22:00 m0zes joined #gluster
22:01 haomaiwa_ joined #gluster
22:02 tomatto joined #gluster
22:23 Merlin_ joined #gluster
22:28 bluenemo joined #gluster
22:28 ctria joined #gluster
22:50 ctria joined #gluster
22:51 Gill_ joined #gluster
22:51 edwardm61 joined #gluster
22:54 msvbhat joined #gluster
22:54 mlncn_ joined #gluster
22:54 rp_ joined #gluster
22:55 partner joined #gluster
22:55 Pharaoh_Atem joined #gluster
22:55 NuxRo joined #gluster
22:55 kblin joined #gluster
22:55 kblin joined #gluster
22:55 JonathanS joined #gluster
22:56 mjrosenb joined #gluster
22:56 a2 joined #gluster
22:59 necrogami joined #gluster
22:59 necrogami joined #gluster
22:59 mmckeen joined #gluster
22:59 ashka joined #gluster
23:00 shortdudey123 joined #gluster
23:01 bfoster1 joined #gluster
23:01 eljrax joined #gluster
23:01 haomaiwang joined #gluster
23:01 semiosis joined #gluster
23:02 cristian joined #gluster
23:02 autostatic joined #gluster
23:07 cholcombe joined #gluster
23:07 mlhess joined #gluster
23:07 Champi joined #gluster
23:07 DRoBeR joined #gluster
23:08 yosafbridge joined #gluster
23:08 ghenry joined #gluster
23:08 ghenry joined #gluster
23:19 zhangjn joined #gluster
23:32 daMaestro joined #gluster
23:34 Guest36819 joined #gluster
23:43 rjoseph joined #gluster
23:44 dmnchild joined #gluster
23:45 dmnchild joined #gluster
23:49 bennyturns joined #gluster
23:51 plarsen joined #gluster
