IRC log for #gluster, 2017-12-15


All times shown according to UTC.

Time Nick Message
00:12 masber joined #gluster
00:29 kpease joined #gluster
00:29 rouven joined #gluster
00:34 rouven joined #gluster
00:42 shyam joined #gluster
00:44 rouven joined #gluster
00:49 rouven joined #gluster
00:52 shyu joined #gluster
01:10 gospod2 joined #gluster
01:19 gospod2 joined #gluster
01:19 rouven joined #gluster
01:24 rouven joined #gluster
01:56 rouven_ joined #gluster
01:59 kettlewell joined #gluster
02:02 gospod3 joined #gluster
02:36 ompragash joined #gluster
02:59 nishanth joined #gluster
03:01 ilbot3 joined #gluster
03:01 Topic for #gluster is now Gluster Community - https://www.gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - https://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
03:09 sunny joined #gluster
03:15 shortdudey123 joined #gluster
03:19 gyadav joined #gluster
03:38 masber joined #gluster
03:38 shyam joined #gluster
03:41 prasanth joined #gluster
03:43 itisravi joined #gluster
03:49 rouven joined #gluster
03:59 rouven joined #gluster
04:00 psony joined #gluster
04:04 rouven joined #gluster
04:05 kettlewell joined #gluster
04:06 ppai joined #gluster
04:09 rouven joined #gluster
04:12 Shu6h3ndu joined #gluster
04:19 Shu6h3ndu_ joined #gluster
04:25 Shu6h3ndu joined #gluster
04:35 kramdoss_ joined #gluster
04:48 atinm joined #gluster
04:49 jiffin joined #gluster
04:49 rouven joined #gluster
04:54 rouven joined #gluster
04:55 rastar joined #gluster
05:01 sunny joined #gluster
05:03 karthik_us joined #gluster
05:04 rouven joined #gluster
05:09 rouven joined #gluster
05:19 rouven joined #gluster
05:23 Prasad joined #gluster
05:24 ndarshan joined #gluster
05:24 rouven joined #gluster
05:25 bitchecker joined #gluster
05:26 sahina joined #gluster
05:31 ompragash_ joined #gluster
05:34 armyriad joined #gluster
05:41 ompragash__ joined #gluster
05:42 ompragash__ joined #gluster
05:42 Humble joined #gluster
05:43 jiffin joined #gluster
05:45 Saravanakmr joined #gluster
05:46 ppai joined #gluster
05:47 nishanth joined #gluster
05:49 kotreshhr joined #gluster
05:49 rouven joined #gluster
05:50 karthik_us joined #gluster
05:52 msvbhat joined #gluster
05:52 rastar joined #gluster
05:54 rouven joined #gluster
05:56 sahina joined #gluster
05:58 skumar joined #gluster
05:58 Prasad_ joined #gluster
06:00 jiffin joined #gluster
06:02 Prasad__ joined #gluster
06:04 sunny joined #gluster
06:10 sage__ joined #gluster
06:17 aravindavk joined #gluster
06:17 ppai joined #gluster
06:23 sahina joined #gluster
06:24 apandey joined #gluster
06:30 skumar_ joined #gluster
06:34 xavih joined #gluster
06:36 rastar joined #gluster
06:40 karthik_us joined #gluster
06:41 itisravi joined #gluster
06:45 kdhananjay joined #gluster
06:55 sunny joined #gluster
07:03 jkroon joined #gluster
07:08 sanoj joined #gluster
07:14 rouven joined #gluster
07:17 poornima_ joined #gluster
07:19 rouven joined #gluster
07:19 BitByteNybble110 joined #gluster
07:22 lkthomas joined #gluster
07:22 lkthomas hey all
07:22 lkthomas how often does split-brain happen on a replica volume?
07:44 sahina if you're using replica 3, split-brain should not happen. Are you facing an issue?
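(For reference, a minimal sketch of setting up the replica 3 volume sahina recommends to avoid split-brain, and of checking for split-brain files; the volume and host names below are hypothetical.)

    # create and start a 3-way replicated volume across three servers
    gluster volume create testvol replica 3 server1:/bricks/b1 server2:/bricks/b1 server3:/bricks/b1
    gluster volume start testvol
    # list any files the self-heal daemon considers to be in split-brain
    gluster volume heal testvol info split-brain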
07:57 jason_ joined #gluster
08:03 Guest79955 hello, I'm confused about distributed replicated volumes, can anyone advise? Thanks. Q: will scaling out improve performance linearly (both write and read) for a Distributed Replicated volume?
08:04 rouven joined #gluster
08:07 jason-ma joined #gluster
08:10 jason-ma hello, can anyone advise - will scaling out improve performance linearly (especially for writes) for Distributed Replication? Thanks.
08:14 rouven joined #gluster
08:17 [diablo] joined #gluster
08:19 jason-ma hello, can anyone advise - will scaling out improve performance linearly (especially for writes) for a Distributed Replication volume? Thanks.
08:23 ivan_rossi joined #gluster
08:24 fsimonce joined #gluster
08:25 jones joined #gluster
08:27 jri joined #gluster
08:28 jri joined #gluster
08:28 Guest92864 nobody talking
08:54 kramdoss_ joined #gluster
08:54 rouven joined #gluster
08:54 buvanesh_kumar joined #gluster
08:55 itisravi jason-ma: It will help in the sense that some of the files will be placed in the newly added bricks and the I/O on the files will now go to these bricks.
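(A minimal sketch of what itisravi describes: expanding a replica 3 distributed-replicated volume by one replica set, then rebalancing so some existing files move onto the new bricks; host and volume names are hypothetical.)

    # add one more replica set (brick count must be a multiple of the replica count)
    gluster volume add-brick testvol server4:/bricks/b1 server5:/bricks/b1 server6:/bricks/b1
    # redistribute existing files onto the new bricks and watch progress
    gluster volume rebalance testvol start
    gluster volume rebalance testvol status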
08:59 rouven joined #gluster
09:08 sanoj joined #gluster
09:09 jri joined #gluster
09:12 jason-ma itisravi: ok, what about a distributed striped replicated volume? scaling out should help a lot on that type of volume, right? and which type of volume is good for a production environment - is distributed+striped+replicated mature enough in the latest version?
09:12 buvanesh_kumar joined #gluster
09:14 itisravi jason-ma: stripe is deprecated. Sharding would help but as of now, sharding is recommended for single writer use cases only.
09:15 itisravi i.e. typically for hosting and running VMs on gluster volumes.
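(For context, sharding is a per-volume option; a minimal sketch of enabling it on a volume meant for VM images, with a hypothetical volume name and an assumed shard size.)

    # split large files (e.g. VM images) into fixed-size shards
    gluster volume set testvol features.shard on
    # shard size; 64MB is a commonly used value for VM workloads
    gluster volume set testvol features.shard-block-size 64MB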
09:16 shyam joined #gluster
09:20 buvanesh_kumar joined #gluster
09:21 jason-ma so if we're not considering a VM environment, only distributed replication and distributed EC can be chosen for a production environment, but EC will sacrifice performance
09:22 jason-ma we are considering Gluster for Kubernetes Persistent Volumes; it seems a distributed replication volume is the choice...
09:24 kramdoss_ joined #gluster
09:29 rouven joined #gluster
09:30 buvanesh_kumar joined #gluster
09:31 itisravi rastar would be a better person to give some insights.
09:32 rastar itisravi: jason-ma: yes, the default to use is a distributed replication volume
09:32 rastar jason-ma: whether you go for EC or not depends largely on your workload and need for space efficiency
09:34 rouven joined #gluster
09:34 rastar jason-ma: when a pod uses a kubernetes PV backed by gluster, some CPU resources on the node are used by gluster too
09:35 rastar jason-ma: an EC volume would use slightly more CPU on the pod's node than a distributed-replica volume.
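(A minimal sketch of the EC alternative rastar mentions: a dispersed volume with 6 bricks where any 2 may fail, trading parity-computation CPU for space efficiency; names are hypothetical.)

    # 6 bricks total, 2 of them redundancy: usable capacity is 4 bricks' worth
    gluster volume create ecvol disperse 6 redundancy 2 server{1..6}:/bricks/b1
    gluster volume start ecvol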
09:35 skumar_ joined #gluster
09:36 jason-ma understood, EC needs more resources for the parity calculations
09:37 jason-ma so although there is still a "stripe" related section in the latest documentation, it will be deprecated in the future..
09:39 jason-ma thanks itisravi and rastar ~
10:02 jiffin joined #gluster
10:02 MrAbaddon joined #gluster
10:06 ndarshan joined #gluster
10:14 rouven joined #gluster
10:18 ppai joined #gluster
10:19 rouven joined #gluster
10:29 rouven joined #gluster
10:30 atinm joined #gluster
10:34 rouven joined #gluster
10:38 bipul joined #gluster
10:43 gyadav_ joined #gluster
10:44 rouven joined #gluster
10:49 rouven joined #gluster
10:59 rouven joined #gluster
11:03 aravindavk joined #gluster
11:04 rouven joined #gluster
11:21 atinm joined #gluster
11:34 rouven joined #gluster
11:39 rouven joined #gluster
11:43 brayo joined #gluster
11:45 ppai joined #gluster
11:46 Rakkin_ joined #gluster
11:54 rouven joined #gluster
11:59 rouven joined #gluster
12:00 msvbhat joined #gluster
12:16 gyadav_ joined #gluster
12:17 kettlewell joined #gluster
12:19 guhcampos joined #gluster
12:23 guhcampos joined #gluster
12:56 guhcampos joined #gluster
12:59 Rakkin_ joined #gluster
12:59 rouven joined #gluster
13:04 rouven joined #gluster
13:15 jri joined #gluster
13:23 Rakkin_ joined #gluster
14:13 plarsen joined #gluster
14:14 msvbhat joined #gluster
14:16 jri joined #gluster
14:25 skylar1 joined #gluster
14:28 plarsen joined #gluster
14:29 gyadav_ joined #gluster
14:44 sunny joined #gluster
14:48 gyadav_ joined #gluster
14:49 rouven joined #gluster
14:54 rouven joined #gluster
15:02 agustafson joined #gluster
15:09 rouven joined #gluster
15:14 rouven joined #gluster
15:17 msvbhat joined #gluster
15:17 jri joined #gluster
15:18 jri joined #gluster
15:18 kotreshhr left #gluster
15:20 agustafson Hey all, I'm trying to troubleshoot some issues where clients randomly become very slow (running ls takes ~1-4 minutes on a directory that used to be instant). This issue does not happen on all clients at once and sometimes goes away if I remount the volume a few times. Furthermore, I can't reproduce this every time I run ls before remounting. This is after an upgrade from 3.8 to 3.10.8-1.el7. I originally went to 3.12 but saw CPU spike heavily
15:20 agustafson and reverted to 3.10 a few hours later. I keep seeing "remote operation failed [No such device or address]" in the brick's log and client log. This is set up as just two bricks replicated between two gluster servers. Any pointers would be great
15:20 agustafson Example log message: The message "W [MSGID: 114031] [client-rpc-fops.c:2151:client3_3_seek_cbk] 0-gluster-ssd-volume-client-0: remote operation failed [No such device or address]" repeated 80 times between [2017-12-15 14:40:01.497697] and [2017-12-15 14:41:06.892948]
15:21 agustafson [2017-12-15 14:41:32.106927] E [MSGID: 115089] [server-rpc-fops.c:2070:server_seek_cbk] 0-gluster-ssd-volume-server: 6874284: SEEK-2 (04315968-918c-4475-bef1-3b64398dc447), client: jenkins.example.com-3869-2017/12/15-03:41:39:3955-gluster-ssd-volume-client-0-0-0, error-xlator: gluster-ssd-volume-posix [No such device or address]
15:22 agustafson [2017-12-15 14:41:32.106871] E [MSGID: 113107] [posix.c:1111:posix_seek] 0-gluster-ssd-volume-posix: seek failed on fd 962 length 429 [No such device or address]
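(No one answered in this log; as a first-pass diagnostic for the kind of intermittent slowness agustafson describes, the usual built-in gluster tools are sketched below, using his volume name.)

    # confirm all bricks and self-heal daemons are online
    gluster volume status gluster-ssd-volume
    # check for pending heals that could slow down lookups
    gluster volume heal gluster-ssd-volume info
    # gather per-FOP latency stats while reproducing the slow ls
    gluster volume profile gluster-ssd-volume start
    gluster volume profile gluster-ssd-volume info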
15:29 susant joined #gluster
15:32 msvbhat joined #gluster
15:49 rouven joined #gluster
15:52 rwheeler joined #gluster
15:54 rouven joined #gluster
15:54 ompragash joined #gluster
16:15 rouven joined #gluster
16:18 jri joined #gluster
16:19 rouven joined #gluster
16:26 jstrunk joined #gluster
16:33 msvbhat joined #gluster
16:47 bennyturns joined #gluster
16:48 rouven joined #gluster
16:50 bennyturns joined #gluster
16:53 rouven joined #gluster
16:56 bipul joined #gluster
16:57 jiffin joined #gluster
16:59 rouven joined #gluster
17:05 rouven joined #gluster
17:11 kusznir_ joined #gluster
17:19 jri joined #gluster
17:24 Rakkin_ joined #gluster
17:25 major joined #gluster
17:28 Intensity joined #gluster
17:29 rouven joined #gluster
17:32 buvanesh_kumar joined #gluster
17:34 rouven joined #gluster
17:39 cholcombe jason-ma: i'm working on a repo that might help you
17:41 kpease joined #gluster
17:44 jri joined #gluster
17:48 ivan_rossi left #gluster
17:51 msvbhat joined #gluster
18:07 DV joined #gluster
18:28 bennyturns joined #gluster
18:34 rouven joined #gluster
18:39 rouven joined #gluster
18:52 phlogistonjohn joined #gluster
18:55 xavih joined #gluster
19:04 rouven joined #gluster
19:07 ron-slc joined #gluster
19:16 stoatwblr joined #gluster
19:19 rouven joined #gluster
19:37 stoatwblr is there any chance of updating the nfs-ganesha-2.5 PPA to 2.5.4? I'm running into a UDP error that the ganesha guys think is due to a bug they introduced and then fixed.
19:39 rouven joined #gluster
19:44 rouven joined #gluster
19:44 MrAbaddon joined #gluster
19:53 msvbhat joined #gluster
19:58 skylar1 joined #gluster
20:30 msvbhat joined #gluster
20:44 rouven joined #gluster
20:51 Gambit15 joined #gluster
20:59 rouven joined #gluster
21:09 jri joined #gluster
22:30 kpease joined #gluster
22:39 jri joined #gluster
23:56 guhcampos joined #gluster
