IRC log for #gluster, 2017-04-19


All times shown according to UTC.

Time Nick Message
00:32 plarsen joined #gluster
00:41 riyas joined #gluster
00:51 moneylotion joined #gluster
00:55 papna joined #gluster
00:56 niknakpaddywak joined #gluster
00:58 tru_tru joined #gluster
00:58 Champi joined #gluster
00:58 portante joined #gluster
01:23 kblin joined #gluster
01:32 daMaestro joined #gluster
01:34 shdeng joined #gluster
01:51 derjohn_mob joined #gluster
02:31 riyas joined #gluster
02:34 papna joined #gluster
02:48 ahino joined #gluster
02:51 sanoj joined #gluster
02:59 gyadav joined #gluster
03:14 csd__ joined #gluster
03:26 ashiq joined #gluster
03:31 nbalacha joined #gluster
03:34 magrawal joined #gluster
03:42 prasanth joined #gluster
03:42 riyas joined #gluster
03:44 buvanesh_kumar joined #gluster
03:59 atinm joined #gluster
04:01 dominicpg joined #gluster
04:03 itisravi joined #gluster
04:03 Intensity joined #gluster
04:06 Karan joined #gluster
04:18 ashiq joined #gluster
04:27 apandey joined #gluster
04:30 ppai joined #gluster
04:31 gyadav joined #gluster
04:38 sanoj joined #gluster
04:53 skumar joined #gluster
04:53 derjohn_mob joined #gluster
05:03 Shu6h3ndu joined #gluster
05:04 ppai joined #gluster
05:08 aravindavk joined #gluster
05:13 karthik_us joined #gluster
05:20 msvbhat joined #gluster
05:27 jiffin joined #gluster
05:33 ankitr joined #gluster
05:37 Philambdo joined #gluster
05:45 sbulage joined #gluster
05:58 Saravanakmr joined #gluster
06:01 _KaszpiR_ joined #gluster
06:01 susant joined #gluster
06:09 Humble joined #gluster
06:12 rafi joined #gluster
06:17 hgowtham joined #gluster
06:19 aravindavk joined #gluster
06:22 jtux joined #gluster
06:22 susant joined #gluster
06:26 mbukatov joined #gluster
06:28 sona joined #gluster
06:28 mbukatov joined #gluster
06:31 ppai joined #gluster
06:36 kdhananjay joined #gluster
06:41 jkroon joined #gluster
06:42 _KaszpiR_ joined #gluster
06:42 jtux left #gluster
06:44 sona joined #gluster
06:44 ivan_rossi joined #gluster
06:45 msvbhat joined #gluster
06:47 derjohn_mob joined #gluster
06:50 hgowtham joined #gluster
07:03 Philambdo joined #gluster
07:08 Saravanakmr joined #gluster
07:13 derjohn_mob joined #gluster
07:14 fsimonce joined #gluster
07:15 ankitr joined #gluster
07:19 hgowtham joined #gluster
07:20 ayaz joined #gluster
07:25 flying joined #gluster
07:31 jwd joined #gluster
07:36 karthik_us joined #gluster
07:42 apandey joined #gluster
07:47 Humble joined #gluster
07:58 hgowtham joined #gluster
08:00 jkroon joined #gluster
08:08 skoduri joined #gluster
08:08 hybrid512 joined #gluster
08:14 ahino joined #gluster
08:25 armyriad joined #gluster
08:35 Humble joined #gluster
08:35 rafi joined #gluster
08:38 tru_tru joined #gluster
08:43 jtux joined #gluster
08:45 rastar joined #gluster
09:06 jiffin1 joined #gluster
09:10 Peppard joined #gluster
09:18 buvanesh_kumar joined #gluster
09:19 MrAbaddon joined #gluster
09:32 msvbhat joined #gluster
09:41 itisravi joined #gluster
09:42 saybeano joined #gluster
09:43 sona joined #gluster
09:47 sbulage joined #gluster
09:50 ankitr joined #gluster
09:51 kotreshhr joined #gluster
09:58 jiffin1 joined #gluster
10:01 decayofmind Hi! Is the practical difference between using a 3rd replica and an arbiter node only in the amount of data present on that node?
10:04 jiffin itisravi: ^^
10:06 itisravi decayofmind: yes, plus the availability of arbiter is less when compared to replica 3 because the 3rd brick does not host data.
10:07 decayofmind itisravi: so, if I have enough resources to just provision a 3rd node, there's no need for an arbiter?
10:08 itisravi decayofmind: yes, if reserving 3x disk space is not a problem, replica-3 is the way to go.
10:08 mhutter joined #gluster
10:09 decayofmind thx!
10:12 cloph (assuming the network bandwidth for that third copy of the data is not an issue)
10:12 ashiq joined #gluster
10:13 decayofmind cloph: you mean the initial bandwidth to sync my files, or the bandwidth in general? cause the client will write a file 3 times?
10:13 Karan joined #gluster
10:13 Wizek_ joined #gluster
10:15 cloph yes, data is transferred to all servers obviously, so you need to copy data to all the servers, aka transfer it three times (at the same time)
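
For reference, a minimal sketch of how the two layouts discussed above are created; the volume name, hostnames and brick paths below are placeholders:

    # plain replica 3: three full copies of the data
    gluster volume create myvol replica 3 \
        server1:/bricks/brick1 server2:/bricks/brick1 server3:/bricks/brick1

    # replica 3 with an arbiter: two data copies plus a metadata-only third brick
    gluster volume create myvol replica 3 arbiter 1 \
        server1:/bricks/brick1 server2:/bricks/brick1 arbiter1:/bricks/brick1
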
10:19 ppai joined #gluster
10:33 ankitr joined #gluster
10:36 ankitr joined #gluster
10:52 Klas cloph: not from client though?
10:52 Klas I assume gluster receives a copy and then writes it to all volumes?
10:57 rafi joined #gluster
10:59 ndevos Klas: a Gluster client (FUSE mount, NFS-server, Samba server, qemu/gfapi-process, ...) does the replication and will write 3x the data (or 2x with arbiter)
10:59 Klas oh
11:00 Klas why?
11:00 Klas it seems like a strange way of doing things, imo
11:00 Klas (I believe you though)
11:00 ndevos because all the logic is in the Gluster client; in Gluster 4.0 there will be an option for journal-based replication (JBR) that can do server-side replication
11:01 Klas ah
11:01 Klas that does make a certain amount of sense
11:01 Klas so, basically, gluster only syncs files as a "heal" option, so to speak?
11:02 ndevos no, "healing" is only for recovery purposes
11:02 Klas or is that up to the client as well? One server answers with the file, then the client sends it to the other server?
11:02 ndevos when a gluster client writes a file, it actually writes to 3 (or 2) bricks synchronously
11:02 Klas yes, but if a brick is missing?
11:03 Klas how does it recover when it's back?
11:03 bartden joined #gluster
11:03 Klas it seems weird if that should be up to the client
11:03 ndevos if a brick is missing (should be an exception) and writes are done, a changelog will indicate that the file needs healing
11:03 Klas and the changelog is handled by the server?
11:04 ndevos when the brick comes back, the self-heal-daemon will heal the contents
11:04 ndevos yes, the changelog is tracked on the bricks
11:04 Klas and, yes, I know it's an exception case, but it's also part of the reason we are running it, to be able to lose one ;)
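
A quick way to watch the changelog-driven healing described above, assuming a volume named myvol (placeholder):

    # list files the changelog has flagged as needing heal after a brick was down
    gluster volume heal myvol info

    # kick off healing instead of waiting for the self-heal daemon's next pass
    gluster volume heal myvol
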
11:04 bartden Hi, when direct-io-mode is on and the direct-io mount option is used on the client, calculating a hash takes quite some additional time? Any explanation why?
11:06 ndevos bartden: direct-io means that caches are skipped where possible, so there is no speedup when reading just-written data; all data needs to be transferred over the network for every read (and write)
11:06 * ndevos goes for lunch
11:07 Klas ndevos: thanks for good info anyhow =)
11:07 bartden ndevos and can strict-o-direct influence this?
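
For context, a hedged sketch of the knobs being discussed; myvol and the mount point are placeholders, and the option names should be checked against your Gluster version:

    # client side: ask the FUSE mount to bypass its caches
    mount -t glusterfs -o direct-io-mode=enable server1:/myvol /mnt/myvol

    # volume side: honour O_DIRECT down to the bricks
    gluster volume set myvol performance.strict-o-direct on
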
11:07 susant left #gluster
11:12 rafi joined #gluster
11:24 ashiq joined #gluster
11:25 sona joined #gluster
11:25 gospod2 guys is it possible to set up gluster sync replica 2 with only one-way replication?
11:31 edong23 joined #gluster
11:32 jiffin gospod2: can u please elaborate?
11:33 gospod2 i just want one KVM server with local SSDs to replicate its datastore on SSDs to another backup server with identical SSDs on LAN
11:34 gospod2 i set up replica 2 and when either node is down, glusterfs is not accessible (I did my research and I know this is the desired effect for most)
11:34 skumar joined #gluster
11:35 gospod2 i dont need any fancy clustering, only replication :)
11:38 gospod2 jiffin, possible?
11:38 kkeithley in a 'replica 2' volume the replication is done from the client. The client writes to both
11:38 kkeithley Use geo-replication to do what you want
11:38 gospod2 geo-replication is async, is sync possible with geo-replication?
11:38 kkeithley no
11:39 kkeithley server-side replication (coming in 3.11 maybe) will do what you want too
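
A rough sketch of the geo-replication setup kkeithley refers to (asynchronous, the master pushes to the slave over ssh); volume and host names are placeholders, prerequisites such as the slave volume and passwordless ssh are omitted, and the exact steps vary by version:

    gluster system:: execute gsec_create
    gluster volume geo-replication mastervol slavehost::slavevol create push-pem
    gluster volume geo-replication mastervol slavehost::slavevol start
    gluster volume geo-replication mastervol slavehost::slavevol status
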
11:40 gospod2 what's the difference between writing to the local brick dir and the glusterfs-mounted dir? When I write to the local brick dir it takes some time for files to appear on the other end
11:41 kkeithley don't ever (ever ever ever) write directly to the brick
11:41 Klas gospod2: I think you could accomplish that with 66% quorum requirements and 2 votes on one of the nodes
11:41 gospod2 kkeithley, thanks !
11:41 Klas or, hmm, no, that would put it in read-only
11:42 Klas wouldn't that be acceptable though?
11:42 gospod2 Klas, that would also be the desired effect... read only on the 2nd node is acceptable for me?
11:42 Klas then replica 2 with quorum auto is sufficient
11:42 gospod2 1st node rw, 2nd node ro - is this what u mean?
11:42 Klas as long as one of the bricks has got 2 votes
11:42 Klas both nodes up=rw
11:42 Klas 2 vote node up=rw
11:43 Klas 1 vote up=ro
11:43 gospod2 yeah
11:43 gospod2 but only one node holds the 2 votes at any given time, right?
11:43 Klas yeah, you manually set vote per brick, unless I'm mistaken
11:44 gospod2 fantastic! got a link for those commands?
11:44 Klas nope, sorry =P
11:44 sona joined #gluster
11:44 Klas never worked with votes
11:44 Klas or, rather, in gluster
11:44 Klas I did in other quorum-based clusters
11:44 gospod2 all I need to set then is 3 votes max and quorum auto?
11:44 Klas should be fine, yes
11:45 gospod2 Klas, thanks!
11:45 Klas and, trust me, but verify first ;)
11:45 Klas (meaning, don't do this in production without testing first, goes without saying =P)
11:45 Klas but the vote count is essential to quorum mechanics
11:47 gospod2 yeah I will, thanks. just to finalize the thought in my brain: if the 2nd node goes down (the 1st node has 2 votes), then I disconnect the NIC cables and bring the 2nd node back up, the 2nd will be forever in RO until I connect the cables back so it sees the 1st node having 2 votes?
11:48 Klas sounds right
11:48 gospod2 yeah Im testing everything before deploying in VMs ;)
11:48 Klas I would just kill gluster processes
11:48 Klas but cables work as well ;)
11:48 gospod2 ja :P
11:50 gospod2 now all I need is to set up the 3-vote quorum and, if possible, always prefer the KVM node to hold the 2 votes when both are online
11:50 gospod2 what's the terminology here? primary/secondary nodes?
11:54 kotreshhr left #gluster
12:00 ira joined #gluster
12:03 shyam joined #gluster
12:03 kpease joined #gluster
12:05 karthik_us joined #gluster
12:06 Klas you WANT a master/slave logic, but in reality, I don't think there is a proper term
12:07 Klas votes are generally meant to be used a bit differently ;)
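
For the record, Gluster exposes client quorum as volume options rather than per-brick vote counts; a hedged sketch of the settings being discussed (myvol is a placeholder):

    # 'auto' requires a majority; in replica 2 that means the first brick must be up
    gluster volume set myvol cluster.quorum-type auto

    # or pin an explicit number of bricks that must be reachable for writes
    gluster volume set myvol cluster.quorum-type fixed
    gluster volume set myvol cluster.quorum-count 1
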
12:24 msvbhat joined #gluster
12:27 skoduri joined #gluster
12:34 susant joined #gluster
12:50 jiffin joined #gluster
12:51 nbalacha joined #gluster
13:04 buvanesh_kumar joined #gluster
13:05 buvanesh_kumar joined #gluster
13:06 gospod2 Klas, I just set cluster.quorum-type fixed and cluster.quorum-count 1 -> and the mount is still not accessible when one node goes offline
13:06 buvanesh_kumar joined #gluster
13:28 vbellur joined #gluster
13:29 vbellur joined #gluster
13:31 skylar joined #gluster
13:47 vbellur joined #gluster
13:48 vbellur joined #gluster
13:48 vbellur joined #gluster
13:49 vbellur joined #gluster
13:50 vbellur joined #gluster
13:50 vbellur joined #gluster
13:51 vbellur joined #gluster
13:54 plarsen joined #gluster
13:55 jtux left #gluster
13:56 mlg9000 joined #gluster
13:58 susant joined #gluster
13:58 susant left #gluster
14:03 bchilds joined #gluster
14:04 Philambdo joined #gluster
14:07 jdossey joined #gluster
14:18 gyadav joined #gluster
14:24 lg543 joined #gluster
14:25 ankitr joined #gluster
14:26 baber joined #gluster
14:29 JoeJulian gospod2: Check your client log. Are you sure your client is reaching all the servers?
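
A few commands that help verify what JoeJulian suggests, i.e. whether the client can actually reach every brick (the volume name is a placeholder):

    gluster peer status          # each peer should show 'Peer in Cluster (Connected)'
    gluster volume status myvol  # each brick should show Online 'Y' and its port
    # the FUSE client log usually lives under /var/log/glusterfs/<mount-point>.log
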
14:30 aravindavk joined #gluster
14:32 MrAbaddon joined #gluster
14:36 gyadav joined #gluster
14:37 Karan joined #gluster
14:41 mlhess joined #gluster
14:52 farhorizon joined #gluster
14:52 rwheeler joined #gluster
14:56 Limebyte Hey guys
14:56 Limebyte JoeJulian, why does GlusterFS mount something with Fuse
14:56 Limebyte when I add a brick running on OVZ?
14:57 Limebyte I am not talking about mounting storage with the gluster-client
15:07 ankitr joined #gluster
15:07 tanner_ joined #gluster
15:12 tanner_ is it possible to import volumes into heketi?
15:13 wushudoin joined #gluster
15:13 tanner_ the auto-mount system I set up for gluster queries the heketi API, but the snapshot cloning system I set up is done outside heketi
15:13 nbalacha joined #gluster
15:25 riyas joined #gluster
15:25 derjohn_mob joined #gluster
15:27 Saravanakmr joined #gluster
15:28 Limebyte JoeJulian, so I guess I need KVM then?
15:37 susant joined #gluster
15:38 buvanesh_kumar joined #gluster
15:45 gyadav joined #gluster
15:48 kkeithley @php
15:48 glusterbot kkeithley: (#1) php calls the stat() system call for every include. This triggers a self-heal check which makes most php software slow as they include hundreds of small files. See http://joejulian.name/blog/optimizing-web-performance-with-glusterfs/ for details., or (#2) It could also be worth mounting fuse with glusterfs --attribute-timeout=HIGH --entry-timeout=HIGH --negative-timeout=HIGH
15:48 glusterbot kkeithley: --fopen-keep-cache
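
Spelled out as a mount line, the factoid's suggestion might look like the following; the timeout values are placeholders for whatever "HIGH" means for the workload:

    mount -t glusterfs \
        -o attribute-timeout=600,entry-timeout=600,negative-timeout=600,fopen-keep-cache \
        server1:/myvol /var/www/shared
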
15:49 ankitr joined #gluster
15:53 gospod2 JoeJulian, you were right, thanks! one of the nodes changed its brick port from 49152 to 49153 and I guess the firewall was blocking it. don't know how that happened, they are fresh CentOS installs just for this testing
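
For anyone hitting the same thing: glusterd listens on 24007 and each brick gets a port from 49152 upwards, so on CentOS with firewalld a sketch of opening them might be (the range size is an assumption, adjust to the number of bricks):

    firewall-cmd --permanent --add-port=24007-24008/tcp   # glusterd management
    firewall-cmd --permanent --add-port=49152-49251/tcp   # brick ports
    firewall-cmd --reload
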
15:55 dgandhi joined #gluster
15:56 nbalacha joined #gluster
16:01 skumar joined #gluster
16:07 rafi joined #gluster
16:08 prasanth joined #gluster
16:09 buvanesh_kumar joined #gluster
16:10 vbellur joined #gluster
16:11 Vapez joined #gluster
16:18 ankitr joined #gluster
16:22 rafi joined #gluster
16:37 plarsen joined #gluster
16:39 gyadav joined #gluster
16:46 jkroon joined #gluster
16:48 gospod2 JoeJulian, I'm clueless now. I reformatted the VMs with IDENTICAL commands as before and now it's not working like before.
16:48 buvanesh_kumar joined #gluster
16:48 gospod2 both ports are 49152
16:50 ankitr joined #gluster
16:52 gospod2 I'm talking too fast... it just takes a little time when doing "ls" for the first time, or when mounting, while 1 node is down. is this normal?
16:57 gospod2 this is not the effect I'm after, as KVM guests will break with this
17:33 Saravanakmr joined #gluster
17:46 susant joined #gluster
17:49 ahino joined #gluster
17:55 tanner_ solved my first problem, now another: in my replication script I stop the volumes, delete them all, recreate and start them
17:55 tanner_ this leaves the clients in an unmounted state
17:56 susant left #gluster
17:56 tanner_ is there a way to tell the client to auto-retry mounting?
18:07 Saravanakmr joined #gluster
18:26 Limebyte Guys? JoeJulian semiosis any xattrs workaround for OpenVZ?
18:42 Karan joined #gluster
18:45 bit4man joined #gluster
18:48 baber joined #gluster
18:54 shemmy joined #gluster
18:56 farhorizon joined #gluster
18:59 shemmy Gluster noob question: I have two server nodes running a django app, and I'd like each to host a replicated copy. Most of the docs seem to run with the assumption that separate devices are used to set up volumes/bricks.
18:59 farhorizon joined #gluster
19:00 shemmy When mounting volumes... there seems to be some sort of size limit? But I haven't set up a quota.
19:00 shemmy But I'm not mounting a separate device. I'm pointing to a specific folder on the server node as a "volume" so to speak...
19:01 shemmy It seems to work aside from the tight size limitation. Does this setup make sense, or am I forced to mount a separate storage device on each node to set gluster up with?
19:02 shemmy I'd like each to host a replicated copy of static/media assets ***
19:12 vbellur joined #gluster
19:27 sona joined #gluster
19:32 farhorizon joined #gluster
20:19 derjohn_mob joined #gluster
20:34 farhorizon joined #gluster
21:02 shyam left #gluster
21:11 MrAbaddon joined #gluster
21:34 om2 joined #gluster
21:35 bchilds joined #gluster
21:44 farhorizon joined #gluster
21:54 jjasinski joined #gluster
21:59 wushudoin| joined #gluster
22:00 wushudoin joined #gluster
22:27 Wizek_ joined #gluster
22:27 vbellur joined #gluster
22:28 plarsen joined #gluster
22:32 skylar joined #gluster
22:46 JoeJulian Limebyte: There is some way to allow fuse mounts inside the container, but I don't use OVZ so I can't point you at it directly.
22:47 JoeJulian gospod2: Not really, but I suspect you may have had a self-heal issue.
23:10 shyam joined #gluster
23:24 q1x joined #gluster
23:33 shyam left #gluster
23:37 gospod2 JoeJulian still here? what do you mean by self-heal issue? I'm running in circles here trying to solve this and I'm still clueless
23:39 JoeJulian My guess being that you had a client that you were testing with that was only connected to one server and even that server couldn't connect to the other server's brick. That would have caused all the data to only be written to a single server. Later, you fixed that problem and connected to both but the first moment you checked, it was connected to the empty brick and not yet connected to the one with any data. This is all just speculation but it
23:39 JoeJulian might explain why you saw no data at first.
23:40 gospod2 hmm i think its not that
23:42 gospod2 JoeJulian, the situation I'm in is: I have a KVM server, without any clustering needs whatsoever, I only want to replicate the local datastore to identical SSDs on another server. Only this KVM server will be writing to the local brick, I can even lock RO on the other server if I need to. I'm testing this inside VMs atm before deploying and I can't get a smooth "while ls; sleep 1" loop without it freezing right when the other server disconnects.
23:43 gospod2 geo-replication is kind of what I need, but I want true sync (not async), and copying over SSH is, I guess, slow even on a LAN.
23:44 gospod2 the gluster client is the same machine as the gluster server (the KVM server)
23:59 JoeJulian gospod2: Writing to the local brick is the functional equivalent of pointing qemu to a random block on the hard drive and expecting xfs, ext4, etc to know what to do with that data.
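
In other words, even on the server that hosts the brick, writes should go through a Gluster mount; a minimal sketch (volume and paths are placeholders):

    # mount the volume on the KVM host itself and point the VM images there
    mount -t glusterfs localhost:/myvol /var/lib/vmstore
    # never write into the brick directory (e.g. /bricks/brick1) directly
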
