
IRC log for #gluster, 2018-01-19


All times are shown in UTC.

Time Nick Message
00:08 ahino joined #gluster
00:57 vbellur joined #gluster
02:05 MrAbaddon joined #gluster
02:11 atinm_ joined #gluster
02:30 psony|afk joined #gluster
02:32 susant joined #gluster
02:34 susant joined #gluster
02:56 kotreshhr joined #gluster
02:58 ilbot3 joined #gluster
02:58 Topic for #gluster is now Gluster Community - https://www.gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - https://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
03:08 major joined #gluster
03:14 jiffin joined #gluster
03:19 ndarshan joined #gluster
03:22 hgowtham joined #gluster
03:27 Vishnu_ joined #gluster
03:39 MrAbaddon joined #gluster
03:43 hgowtham joined #gluster
03:47 bootc joined #gluster
03:50 Vishnu__ joined #gluster
03:54 aravindavk joined #gluster
03:58 mbukatov joined #gluster
04:21 kdhananjay joined #gluster
04:24 kdhananjay joined #gluster
04:25 nbalacha joined #gluster
04:31 kdhananjay joined #gluster
04:38 poornima joined #gluster
04:38 Prasad joined #gluster
04:50 daMaestro joined #gluster
04:54 kramdoss_ joined #gluster
04:58 poornima joined #gluster
05:28 daMaestro joined #gluster
05:28 ahino joined #gluster
05:39 ahino joined #gluster
05:43 psony|afk joined #gluster
05:44 rafi1 joined #gluster
05:47 Rakkin joined #gluster
05:49 jkroon joined #gluster
05:54 nbalacha joined #gluster
05:56 ahino1 joined #gluster
06:09 Saravanakmr joined #gluster
06:23 karthik_us joined #gluster
06:31 xavih joined #gluster
06:35 msvbhat joined #gluster
06:36 [diablo] joined #gluster
06:40 logan- joined #gluster
06:44 kramdoss_ joined #gluster
06:48 nbalacha joined #gluster
06:53 Saravanakmr joined #gluster
06:54 rafi joined #gluster
07:02 poornima_ joined #gluster
07:13 apandey joined #gluster
07:17 sadbox joined #gluster
07:20 mlg9000 joined #gluster
07:25 rafi1 joined #gluster
07:30 apandey_ joined #gluster
07:31 Vishnu_ joined #gluster
07:33 Vishnu__ joined #gluster
07:46 kotreshhr joined #gluster
07:53 Vishnu_ joined #gluster
07:54 Vishnu__ joined #gluster
07:55 sunny joined #gluster
07:56 Humble joined #gluster
08:02 nbalacha joined #gluster
08:07 karthik_us joined #gluster
08:08 Prasad joined #gluster
08:08 ndarshan joined #gluster
08:08 poornima_ joined #gluster
08:12 jri joined #gluster
08:14 mbukatov joined #gluster
08:19 rideh joined #gluster
08:20 skumar joined #gluster
08:28 Prasad joined #gluster
08:42 nh2[m] joined #gluster
08:47 Vishnu_ joined #gluster
08:49 kdhananjay joined #gluster
09:07 kdhananjay joined #gluster
09:10 Vishnu_ joined #gluster
09:22 poornima_ joined #gluster
09:22 eMBee joined #gluster
09:28 buvanesh_kumar joined #gluster
09:32 eMBee so i spent half an hour or so trying to find documentation that explains how to add two additional nodes to a 2-node replicated volume, turning the resulting 4-node cluster into a distributed-replicated volume.
09:32 eMBee then i came across this statement "As this raises the node-count to 2x, Gluster is smart enough to notice you want a 'distributed-replicated-striped' setup", which seems to imply that moving from replicated to distributed-replicated by adding nodes is the default operation, and that is why there is no specific documentation for this case
09:33 eMBee so is that correct? if i have a 2-node replicated volume, adding two additional nodes will automatically create a distributed-replicated volume of 4 nodes?
09:34 anoopcs joined #gluster
09:35 rwheeler joined #gluster
09:37 Shu6h3ndu joined #gluster
09:37 msvbhat joined #gluster
09:42 msvbhat_ joined #gluster
09:44 poornima_ joined #gluster
09:49 kotreshhr left #gluster
09:53 Prasad_ joined #gluster
10:05 misc joined #gluster
10:08 Prasad joined #gluster
10:10 msvbhat joined #gluster
10:13 Humble joined #gluster
10:19 susant1 joined #gluster
10:20 kdhananjay1 joined #gluster
10:31 MrAbaddon joined #gluster
10:32 major joined #gluster
10:32 nbalacha|afk joined #gluster
10:34 mlg9000 joined #gluster
10:38 itisravi joined #gluster
10:49 skumar joined #gluster
10:56 karthik_us joined #gluster
11:10 ThHirsch joined #gluster
11:20 rafi1 joined #gluster
11:26 ndevos eMBee: yes, add two nodes to the cluster (peer probe) and then you can add more bricks to the volume
11:27 ndevos eMBee: http://docs.gluster.org/en/latest/Administrator%20Guide/Managing%20Volumes/#expanding-volumes
11:27 glusterbot Title: Managing Volumes - Gluster Docs (at docs.gluster.org)
11:28 ndevos eMBee: in your case, you will need to run 'gluster volume add-brick <1st-brick> <2nd-brick>' with two bricks
11:28 ndevos s/volume add-brick/volume add-brick <volume>/
11:28 glusterbot What ndevos meant to say was: eMBee: in your case, you will need to run 'gluster volume add-brick <volume> <1st-brick> <2nd-brick>' with two bricks
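For reference, the full expansion ndevos is describing might look like the sketch below; the hostnames (server3, server4) and brick paths are placeholders, and the trailing rebalance step comes from the Managing Volumes page linked above rather than from this conversation:

    # probe the two new nodes into the trusted pool
    gluster peer probe server3
    gluster peer probe server4
    # add one brick from each new node; on a replica-2 volume this creates a
    # second replica pair, turning the volume into 2x2 distributed-replicate
    gluster volume add-brick <volume> server3:/data/brick server4:/data/brick
    # spread existing data across the new bricks
    gluster volume rebalance <volume> start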
11:58 rouven joined #gluster
12:14 Humble joined #gluster
12:20 kettlewell joined #gluster
12:25 jstrunk_ joined #gluster
12:27 renout joined #gluster
12:28 Ramereth joined #gluster
12:28 cliluw joined #gluster
12:28 Kassandry joined #gluster
12:28 atrius joined #gluster
12:29 nirokato joined #gluster
12:29 DJClean joined #gluster
12:29 samikshan joined #gluster
12:29 e1z0 joined #gluster
12:29 mlhess joined #gluster
12:32 decayofmind joined #gluster
12:32 valkyr3e joined #gluster
12:36 sadbox joined #gluster
12:51 jiffin joined #gluster
13:07 jkroon joined #gluster
13:10 hvisage joined #gluster
13:11 susant joined #gluster
13:12 phlogistonjohn joined #gluster
13:26 msvbhat joined #gluster
14:18 shyam joined #gluster
14:23 shyam joined #gluster
14:26 rouven hey guys, how do i set selinux support for a gluster 3.12 volume?
14:31 rouven should be supported according to https://bugzilla.redhat.com/show_bug.cgi?id=1318100
14:31 glusterbot Bug 1318100: medium, medium, ---, manikandancs333, CLOSED CURRENTRELEASE, RFE : SELinux translator to support setting SELinux contexts on files in a glusterfs volume
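If the translator from that RFE is what rouven is after, a minimal sketch follows, assuming the `features.selinux` volume option and the fuse `selinux` mount option are available in 3.12 (both names should be verified against `gluster volume set help` and the mount.glusterfs man page):

    # server side: enable the SELinux translator on the volume (assumed option name)
    gluster volume set <volume> features.selinux on
    # client side: mount with SELinux context passing enabled (assumed mount option)
    mount -t glusterfs -o selinux server1:/<volume> /mnt/gluster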
14:46 phlogistonjohn joined #gluster
14:50 Humble joined #gluster
14:56 skylar1 joined #gluster
15:07 rouven left #gluster
15:19 decayofmind joined #gluster
15:21 shyam joined #gluster
15:25 jiffin joined #gluster
15:41 level7 joined #gluster
15:48 jiffin joined #gluster
15:59 pioto joined #gluster
16:04 mrcirca_ocean joined #gluster
16:05 mrcirca_ocean Is there anyone here who monitors gluster storage with zabbix?
16:12 mrcirca_ocean https://github.com/MrCirca/zabbix-glusterfs
16:12 glusterbot Title: GitHub - MrCirca/zabbix-glusterfs: Monitoring GlusterFS storage with zabbix 3.4 (at github.com)
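The linked repo feeds gluster CLI output into zabbix; purely as an illustration (this key name is hypothetical, not taken from that repo), an agent item could look like:

    # /etc/zabbix/zabbix_agentd.d/gluster.conf -- hypothetical key name
    # number of peers currently joined to the trusted pool
    UserParameter=gluster.peer.count,sudo gluster peer status | grep -c 'Peer in Cluster'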
16:13 waqstar joined #gluster
16:13 TBlaar joined #gluster
16:13 jiffin joined #gluster
16:14 eMBee ndevos thank you! after this revelation, looking at the original docs was my next goal. i don't know why i didn't check there first. i guess i kind of expected to find the right docs through a search...
16:17 psony|afk joined #gluster
16:18 kpease joined #gluster
16:20 Gambit15 joined #gluster
16:20 shyam joined #gluster
16:41 sunny joined #gluster
17:03 jiffin1 joined #gluster
17:04 Guest83 joined #gluster
17:06 s34n joined #gluster
17:07 CyrilP Hi there, I have a weird issue with an old gluster version (3.6.8). I have a 5 node setup with several kinds of volumes; on one node, the glusterfs process holding the nfs connections to one volume keeps growing in memory until the memory is full. This is odd because this node has half as many nfs clients as the other nodes...
17:07 amye joined #gluster
17:08 CyrilP so I'm wondering what is tied up in that memory
17:09 CyrilP on other nodes (~300 clients each) it only takes <10GB of memory, but on that node (50 nfs clients), it can go up to 50GB...
17:10 Humble joined #gluster
17:10 CyrilP 1/ does the glusterfs process handling nfs respond to SIGHUP? (I mean, can I mitigate this issue with a SIGHUP without a big disruption of service)
17:10 CyrilP 2/ What the heck is going on :) (this issue is fairly new)
17:17 sunnyk joined #gluster
17:22 CyrilP any input would be greatly appreciated
17:27 shyam joined #gluster
17:41 pioto joined #gluster
17:58 pocketprotector joined #gluster
18:46 kkeithley CyrilP: memory leak.  Upgrade or resign yourself to restarting the volume periodically.
18:55 jiffin joined #gluster
19:03 CyrilP so that's a known issue?
19:04 CyrilP @kkeithley weird that it behaved fine for 2y and just started to misbehave 4 weeks ago
19:04 CyrilP is a SIGHUP on the offending process better than stopping/starting the entire vol?
19:04 CyrilP because I have a ton of clients that are not happy when I do that
19:05 kkeithley AFAIK a HUP isn't going to do anything.
19:06 CyrilP damn
19:06 CyrilP kill -9 on the nfs process then a glusterd restart maybe
19:06 kkeithley memleaks have been around for a long time. They've been getting more attention (more awareness) over the last three or four releases.
19:07 CyrilP that's very odd, I have some nodes with 1y uptime, lots of clients, and no leak
19:07 kkeithley you can kill -9 the glusterfs/nfs process and restart it manually.  AFAIK restarting glusterd isn't going to restart the glusterfs/nfs process
19:07 kkeithley it depends greatly on what the workload is.
19:08 kkeithley In the longevity cluster that I have here I see modest growth. Some people see serious growth and OOM kills. Others see next to nothing.
19:08 CyrilP well, mixed workload :p mostly home directories with some qcow2 snapshots lying around (even if most of the qcow2 images are accessed through gfapi, there are still some hypervisors accessing them through nfs)
19:09 CyrilP on the other hand I'm qualifying 3.12 with ganesha, but the migration will not be that easy
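A rough sketch of the manual restart kkeithley describes; the process-match pattern is an assumption about how the NFS daemon appears in the process list, and 'volume start force' is the usual way to respawn missing auxiliary daemons without touching running bricks:

    # find the gluster NFS daemon (one glusterfs process serves NFS for all volumes)
    pgrep -af 'glusterfs.*nfs'
    # kill it; brick processes keep running, but NFS clients will see a blip
    kill -9 <pid>
    # respawn the missing daemons for the volume
    gluster volume start <volume> force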
19:30 [diablo] joined #gluster
19:31 hvisage joined #gluster
19:42 Vapez joined #gluster
20:01 kpease joined #gluster
20:12 ThHirsch joined #gluster
20:21 investigator_ joined #gluster
20:37 ACiDGRiM joined #gluster
20:41 pladd joined #gluster
20:46 plarsen joined #gluster
20:49 mallorn Looking at the 3.13.1 RPMs, it looks like glusterfs-server-3.13.1-1.el7.x86_64.rpm has a dependency on liburcu-bp.so.1, whereas 3.13.0 depends on liburcu-bp.so.6.
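A quick way to verify that from the package files themselves (the 3.13.0 filename below is a guess at the release tag):

    rpm -qp --requires glusterfs-server-3.13.1-1.el7.x86_64.rpm | grep liburcu
    rpm -qp --requires glusterfs-server-3.13.0-1.el7.x86_64.rpm | grep liburcu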
20:55 MrAbaddon joined #gluster
21:33 melliott joined #gluster
21:50 phlogistonjohn joined #gluster
23:19 shyam joined #gluster
23:48 mlg9000 joined #gluster
