
IRC log for #gluster, 2017-12-27


All times shown according to UTC.

Time Nick Message
00:05 ic0n joined #gluster
00:24 pdrakeweb joined #gluster
00:32 ic0n joined #gluster
00:50 ic0n joined #gluster
01:28 Vapez joined #gluster
01:41 ic0n joined #gluster
01:56 anthony25 joined #gluster
01:59 ic0n joined #gluster
02:01 gospod3 joined #gluster
02:41 msvbhat joined #gluster
02:41 ic0n joined #gluster
02:56 ilbot3 joined #gluster
02:56 Topic for #gluster is now Gluster Community - https://www.gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - https://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
03:01 shellclear_ joined #gluster
03:02 gospod4 joined #gluster
03:11 ic0n joined #gluster
03:18 gyadav joined #gluster
03:21 ppai joined #gluster
03:30 ic0n joined #gluster
03:44 jiffin joined #gluster
03:58 psony|afk joined #gluster
04:02 shellclear_ joined #gluster
04:03 itisravi joined #gluster
04:09 zcourts joined #gluster
04:11 nbalacha joined #gluster
04:24 msvbhat joined #gluster
04:27 pdrakeweb joined #gluster
04:50 skumar joined #gluster
04:54 karthik_us joined #gluster
05:13 sunny joined #gluster
05:16 Shu6h3ndu joined #gluster
05:23 ompragash joined #gluster
05:37 ompragash_ joined #gluster
05:48 vishnu_kunda joined #gluster
05:50 hgowtham joined #gluster
05:57 nbalacha joined #gluster
06:02 Saravanakmr joined #gluster
06:04 sunnyk joined #gluster
06:05 skumar joined #gluster
06:13 skumar joined #gluster
06:18 skumar joined #gluster
06:21 msvbhat joined #gluster
06:24 skumar joined #gluster
06:26 sunkumar joined #gluster
06:38 msvbhat joined #gluster
06:58 kdhananjay joined #gluster
07:17 jtux joined #gluster
07:29 vishnu_sampath joined #gluster
07:37 Rakkin_ joined #gluster
07:37 jkroon__ joined #gluster
07:40 aravindavk joined #gluster
07:44 omark1 joined #gluster
07:46 omark11 joined #gluster
07:48 voidm joined #gluster
07:52 msvbhat joined #gluster
07:55 drymek joined #gluster
07:56 major joined #gluster
08:01 Rakkin_ joined #gluster
08:04 zcourts_ joined #gluster
08:08 msvbhat joined #gluster
08:29 omark1 joined #gluster
08:30 sunnyk joined #gluster
08:31 omark2 joined #gluster
08:33 ompragash_ joined #gluster
08:42 _KaszpiR_ joined #gluster
08:47 omark1 joined #gluster
08:47 poornima joined #gluster
08:48 omark1 joined #gluster
08:48 drymek joined #gluster
08:59 atinm joined #gluster
09:01 buvanesh_kumar joined #gluster
09:05 major joined #gluster
09:13 sahina joined #gluster
09:31 aravindavk joined #gluster
09:31 nisroc joined #gluster
09:37 marc_888 joined #gluster
09:41 sunny joined #gluster
09:52 sanoj joined #gluster
09:53 Rakkin_ joined #gluster
09:55 poornima joined #gluster
09:58 drymek joined #gluster
10:04 MrAbaddon joined #gluster
10:13 sahina_ joined #gluster
10:13 msvbhat joined #gluster
10:22 ompragash_ joined #gluster
10:27 sahina_ left #gluster
10:30 karthik_us joined #gluster
10:34 vkunda joined #gluster
10:38 poornima joined #gluster
10:49 msvbhat_ joined #gluster
10:58 kale i have set up a replica 3 with arbiter 1. does the gluster client read half from each of the two servers in that case?
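For context on kale's setup, a replica 3 arbiter 1 volume is created roughly as follows (hostnames and brick paths are placeholders). The arbiter brick stores only metadata, so reads are served from the two data bricks; which data brick serves a given read can be influenced by tunables such as cluster.read-hash-mode, whose availability and values vary by gluster version:

```shell
# Placeholder hosts/paths; the third brick becomes the arbiter
# and holds only file metadata, not file contents.
gluster volume create myvol replica 3 arbiter 1 \
    host1:/bricks/myvol host2:/bricks/myvol host3:/bricks/myvol
gluster volume start myvol

# Optionally steer read distribution between the two data bricks
# (assumed tunable; check "gluster volume set help" on your version):
gluster volume set myvol cluster.read-hash-mode 1
```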
10:59 drymek joined #gluster
11:05 vkunda_ joined #gluster
11:13 anthony25 joined #gluster
11:16 nisroc joined #gluster
11:34 poornima joined #gluster
11:47 TBlaar2 joined #gluster
11:48 shellclear_ joined #gluster
11:58 vkunda joined #gluster
12:16 rouven joined #gluster
12:25 rouven_ joined #gluster
12:28 vkunda joined #gluster
12:30 vkunda_ joined #gluster
12:32 vkunda__ joined #gluster
12:44 p7mo joined #gluster
12:51 bluenemo joined #gluster
12:52 rouven_ joined #gluster
12:55 pdrakeweb joined #gluster
13:06 bluenemo joined #gluster
14:14 sunny joined #gluster
14:24 gyadav joined #gluster
14:24 zcourts joined #gluster
14:32 zcourts_ joined #gluster
14:33 aravindavk joined #gluster
14:37 bluenemo joined #gluster
14:41 voidm joined #gluster
14:42 sunny joined #gluster
14:48 jiffin joined #gluster
15:05 psony|afk joined #gluster
15:14 buvanesh_kumar joined #gluster
15:34 bluenemo joined #gluster
15:47 zcourts joined #gluster
15:49 zcourts__ joined #gluster
15:57 Shu6h3ndu joined #gluster
16:02 rouven joined #gluster
16:26 bluenemo joined #gluster
16:38 shellclear_ joined #gluster
16:38 logan- joined #gluster
16:40 pdrakeweb joined #gluster
16:52 jiffin1 joined #gluster
16:54 shellclear joined #gluster
17:01 zcourts joined #gluster
17:05 MrAbaddon joined #gluster
17:31 jiffin1 joined #gluster
17:33 gyadav joined #gluster
17:48 jiffin1 joined #gluster
17:52 jiffin joined #gluster
17:54 jiffin2 joined #gluster
18:12 major joined #gluster
18:22 sunny joined #gluster
18:30 brettnem joined #gluster
18:32 brettnem Hello all, I’m trying to set up a gluster configuration where the mountpoint is highly available and geographically redundant. If I do 1 datacenter with 2 gluster servers with replica 2, this works great, meets my need.. BUT I really want the replica in another datacenter. When I do this, it works, but every kind of access is so so painfully slow. I looked into geo replication, but the replica it makes isn’t shared by the mount
18:32 brettnem isn’t “hot available”. What’s the right way to do this?
19:00 JoeJulian brettnem: There is no good way. You have to give up consistency if you want availability and partition tolerance.
19:01 brettnem @JoeJulian: I’m not terribly concerned about consistency. This is a write-once, read-many workflow. Is there a way to accommodate that with gluster?
19:01 JoeJulian https://en.wikipedia.org/wiki/PACELC_theorem
19:01 glusterbot Title: PACELC theorem - Wikipedia (at en.wikipedia.org)
19:01 JoeJulian Can you read from a different path than your write?
19:02 brettnem Technically I think yes.. but I always have to read from the same spot
19:02 JoeJulian One possibility, if you can, is that you write to the slow mount, but read from a georeplica.
19:03 brettnem The problem is that my client app that is reading the data is always going to expect the file at a certain mountpoint
19:03 brettnem if that mountpoint is attached to a failed gluster, the file is gone
19:03 brettnem I feel like this has to be a feature of the block storage client
19:03 JoeJulian Have a single "master" cluster that you mount everywhere and write to, but have geo-replicas everywhere that the master syncs to.
19:04 JoeJulian Those geo-replicas are what your application would read from.
19:05 brettnem ok but.. if I have say, two geo-replicas.. how do I make /data/file.txt on my client machine read from geo-replica 1, but if it’s down, read from geo-replica 2
19:06 brettnem my client is just going to open(/data/file.txt)… the only way I know how to use gluster is to mount a gluster volume
19:06 brettnem unless the client machine IS the geo replica?
19:06 JoeJulian exactly
19:07 brettnem oh ok.. so I write to the gluster volume, and read basically off the local file system which is maintained eventual consistency with geo replication?
19:07 JoeJulian That's what I was thinking, yes.
19:09 brettnem hmm.. ok.. I think that could work. How resilient is geo-replication?
19:12 brettnem any other way to do it? This would require the full volume of storage on each client server
19:12 brettnem I thought best practices said not to do this? :)
19:13 JoeJulian Well you wouldn't *have* to have it on each client, just a cluster of machines in each datacenter (enough to meet your SLA).
19:15 TBlaar joined #gluster
19:17 brettnem if it wasn’t on each client, how would every client reach the data?
19:17 jiffin joined #gluster
19:18 JoeJulian Mount their local geo-replicated replica volume.
19:18 JoeJulian So you have a master volume somewhere. It geo-replicates to the remote-dc replica volumes.
19:18 JoeJulian The remote-dc clients mount their local-dc replica volume for reading.
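The topology JoeJulian describes could be mounted like this (hostnames and volume names are hypothetical): writes go to the single master volume, possibly across the WAN, while reads come from the local-dc slave volume that the master geo-replicates to.

```shell
# Hypothetical names: mastervol lives in the primary DC;
# replicavol is the geo-replicated slave volume in the local DC.
mount -t glusterfs master-dc.example.com:/mastervol /mnt/gluster-write   # writes (WAN)
mount -t glusterfs local-dc.example.com:/replicavol /mnt/gluster-read   # reads (LAN, eventually consistent)
```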
19:21 pdrakewe_ joined #gluster
19:22 pdrakew__ joined #gluster
19:23 brettnem can you use gluster to make a volume out of the endpoints of georeplication? Maybe that’s the part I’m missing
19:24 pdrakeweb joined #gluster
19:28 pdrakewe_ joined #gluster
19:34 brettnem @JoeJulian ^^
19:34 JoeJulian You can make a volume at the endpoints and geo-replicate to that volume.
19:35 brettnem when you do this.. do you geo replicate to BOTH replicas of the volume?
19:35 brettnem sounds racey
19:35 brettnem racey or magical
19:35 JoeJulian No, you just geo-replicate to the volume.
19:35 pdrakeweb joined #gluster
19:36 brettnem I thought a geo replicate operation points via ssh to a specific server and path
19:36 JoeJulian Essentially, the geo-replicate application mounts the remote volume and syncs the files as needed to that volume.
19:36 JoeJulian through an ssh tunnel
19:37 brettnem @JoeJulian ok so if my local DC has replica 2… I just geo replicate with just 1 of them as the endpoint/volume?
19:37 pdrakeweb joined #gluster
19:37 JoeJulian You'll configure geo-replicate with the volume information, not any one brick.
19:38 arpu hi how can i use memory as cache?
19:38 pdrakewe_ joined #gluster
19:38 RustyB joined #gluster
19:39 JoeJulian "gluster volume set help" has a number of cache parameters you can fidget with.
19:39 Somedream joined #gluster
19:40 shellclear joined #gluster
19:40 brettnem @JoeJulian: I see this gluster volume geo-replication <master_volume> <slave_host>::<slave_volume> create no-verify [force]….. which suggests I have to point to a single brick? or is that just one peer in the cluster?
19:41 JoeJulian Right, slave_host is one peer (or a round-robin dns entry with each peer).
19:42 JoeJulian Just like your local client, geo-replicate uses that peer to retrieve the volume definition then uses the ssh tunnel to connect directly to all the bricks.
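Putting the exchange above together, a geo-replication session from a master volume to a remote slave volume is set up roughly like this (volume and host names are placeholders; push-pem assumes the ssh key distribution has already been prepared):

```shell
# mastervol: local replica volume; slavevol: volume on the remote cluster.
# slave-host is any one peer of the remote cluster (or a round-robin DNS name).
gluster volume geo-replication mastervol slave-host::slavevol create push-pem
gluster volume geo-replication mastervol slave-host::slavevol start
gluster volume geo-replication mastervol slave-host::slavevol status
```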
19:42 arpu ok thx will try but i only found cache-size
19:42 arpu is this used in memory or is this a local disk cache?
19:42 JoeJulian memory
19:42 JoeJulian There is no local disk cache.
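The cache options referred to above can be listed and set like this (volume name is a placeholder; exact option names and defaults vary by gluster version):

```shell
# List tunables, filtering for cache-related ones.
gluster volume set help | grep -i cache
# performance.cache-size controls the in-memory io-cache on the client side.
gluster volume set myvol performance.cache-size 1GB
```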
19:42 arpu oh ok
19:42 arpu how can i see what is cached?
19:43 JoeJulian state dump, maybe
19:43 JoeJulian I've never had that question come up before. Good one.
19:43 PotatoGim joined #gluster
19:44 fury joined #gluster
19:44 JoeJulian The server will only have caches for open FDs, so the state dump is the most likely source of information I can think of for that.
19:45 JoeJulian If it's not in there, that would make a good feature request. File a bug if it's not there.
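The state dump suggested above can be triggered like this (volume name is a placeholder; on most distributions the dump files land under /var/run/gluster, though the path can differ):

```shell
gluster volume statedump myvol
# Inspect the resulting dump files for cache/inode/fd sections:
ls /var/run/gluster/*.dump.*
```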
19:45 wolsen joined #gluster
19:45 twisted` joined #gluster
19:45 JoeJulian (ugh... I didn't do that trigger case insensitive.) file a bug
19:45 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
19:45 fabianvf joined #gluster
19:46 lunaaa joined #gluster
19:49 billputer joined #gluster
19:49 AppStore joined #gluster
19:50 wlmbasson joined #gluster
19:52 bitonic joined #gluster
20:02 owlbot joined #gluster
20:09 MrAbaddon joined #gluster
20:11 pdrakeweb joined #gluster
20:13 pdrakeweb joined #gluster
20:14 arpu hmm cannot find the dumpfile :/ ubuntu system
20:14 JoeJulian /var/run/glusterd iirc.
20:15 portdirect joined #gluster
20:17 pdrakeweb joined #gluster
20:18 arpu yes thx!
20:21 jkroon joined #gluster
20:21 arpu lots of information
20:23 pdrakeweb joined #gluster
20:41 bluenemo joined #gluster
20:45 Rakkin_ joined #gluster
21:06 Vapez joined #gluster
21:07 vbellur joined #gluster
21:29 drymek joined #gluster
21:30 mikedep333 joined #gluster
21:56 Vapez_ joined #gluster
22:03 jobewan joined #gluster
22:04 pdrakeweb joined #gluster
22:06 rouven joined #gluster
22:10 Vapez joined #gluster
22:37 rouven joined #gluster
22:37 jobewan joined #gluster
22:58 ic0n joined #gluster
23:02 jobewan joined #gluster
23:05 pdrakeweb joined #gluster
23:36 jobewan joined #gluster
23:44 ic0n joined #gluster
23:49 shellclear joined #gluster
23:51 zcourts joined #gluster
23:59 Rakkin_ joined #gluster
