
IRC log for #gluster, 2015-10-01


All times shown according to UTC.

Time Nick Message
00:06 squeakyneb I've got two test nodes running replication, I pulled power on one of them while writing a large file to test robustness, now the rebooted node sees the other node and thinks everything is fine, but the one that stayed online is convinced that the one I forcefully rebooted isn't connected.
00:08 squeakyneb gluster peer status on node 1 says that 2 is connected. Node 2 says 1 is "in cluster (disconnected)"
00:10 squeakyneb volume status on node 1 says that bricks on 1 and 2 are online, node 2 is only showing the brick on node 2 though
00:11 luis_silva joined #gluster
00:16 JoeJulian squeakyneb: check your firewall
00:19 calavera joined #gluster
00:20 squeakyneb JoeJulian: ah, yeah. I stopped it for testing and rebooting obviously brought it back up. I've killed it and restarted gluster but it's still not working properly 0.o
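The failure mode here is a common one: glusterd listens on TCP 24007, and from 3.4 onward each brick takes a port from 49152 upward, so a firewall that comes back after a reboot silently cuts the bricks off. A minimal sketch of opening and persisting the ports with iptables (the brick-port range and the EL-style save command are assumptions; adjust for your distribution and brick count):

    # management daemon
    iptables -I INPUT -p tcp --dport 24007:24008 -j ACCEPT
    # brick ports, one per brick starting at 49152 (GlusterFS 3.4+)
    iptables -I INPUT -p tcp --dport 49152:49160 -j ACCEPT
    # persist across reboots (RHEL/CentOS style; Debian uses iptables-persistent)
    service iptables save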
00:59 dlambrig joined #gluster
01:02 woakes070048 joined #gluster
01:19 julim joined #gluster
01:35 Lee1092 joined #gluster
01:51 harish_ joined #gluster
01:59 EinstCrazy joined #gluster
02:16 neha_ joined #gluster
02:29 nangthang joined #gluster
02:46 overclk joined #gluster
02:50 haomaiwa_ joined #gluster
02:51 haomaiwang joined #gluster
02:57 bharata-rao joined #gluster
03:01 haomaiwa_ joined #gluster
03:14 EinstCrazy joined #gluster
03:42 kdhananjay joined #gluster
03:43 [7] joined #gluster
03:49 sakshi joined #gluster
03:51 armyriad joined #gluster
03:57 shubhendu__ joined #gluster
03:58 RameshN_ joined #gluster
04:01 haomaiwa_ joined #gluster
04:07 kanagaraj joined #gluster
04:10 squeakyneb so am I correct in thinking that the only way to make the daemons use an internal backlink is specifying a different address for the hostnames in /etc/hosts?
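For the record, that is the usual approach: peers talk to each other by the hostnames they were probed with, so pointing those names at back-end addresses in /etc/hosts on every node steers the traffic onto the internal network. A sketch with hypothetical names and addresses:

    # /etc/hosts on every node (clients may resolve the same
    # names to front-end addresses if they should use those)
    10.10.0.1   gluster1
    10.10.0.2   gluster2

    # always probe by hostname, never by front-end IP
    gluster peer probe gluster2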
04:20 rjoseph joined #gluster
04:21 yazhini joined #gluster
04:23 hchiramm_home joined #gluster
04:30 raghug joined #gluster
04:33 nbalacha joined #gluster
04:35 overclk joined #gluster
04:36 RameshN_ joined #gluster
04:37 gem joined #gluster
04:37 dusmant joined #gluster
04:38 nbalacha joined #gluster
04:40 vimal joined #gluster
04:40 skoduri joined #gluster
04:44 ppai joined #gluster
04:45 pppp joined #gluster
04:53 maveric_amitc_ joined #gluster
04:56 ndarshan joined #gluster
04:57 gildub joined #gluster
05:01 auzty joined #gluster
05:01 haomaiwa_ joined #gluster
05:06 hchiramm joined #gluster
05:08 GB21 joined #gluster
05:11 hgowtham joined #gluster
05:12 Manikandan joined #gluster
05:14 TvL2386 joined #gluster
05:17 ashiq joined #gluster
05:23 harish_ joined #gluster
05:27 Bhaskarakiran joined #gluster
05:31 kotreshhr joined #gluster
05:36 vmallika joined #gluster
05:39 neha_ joined #gluster
05:39 rjoseph joined #gluster
05:47 Bhaskarakiran joined #gluster
05:51 LebedevRI joined #gluster
05:53 gildub joined #gluster
05:54 haomaiwa_ joined #gluster
05:54 nbalacha joined #gluster
05:58 hagarth joined #gluster
06:01 haomaiwa_ joined #gluster
06:02 poornimag joined #gluster
06:04 kdhananjay joined #gluster
06:04 kdhananjay joined #gluster
06:05 Leildin joined #gluster
06:08 jwaibel joined #gluster
06:16 atalur joined #gluster
06:19 kshlm joined #gluster
06:21 mhulsman joined #gluster
06:23 jtux joined #gluster
06:28 skoduri joined #gluster
06:32 nbalacha joined #gluster
06:40 kshlm joined #gluster
06:45 raghug joined #gluster
06:45 HemanthaSKota joined #gluster
06:48 spalai joined #gluster
06:49 kshlm joined #gluster
06:51 anil joined #gluster
06:52 nangthang joined #gluster
07:01 haomaiwa_ joined #gluster
07:01 jwd_ joined #gluster
07:05 ramky joined #gluster
07:12 DV joined #gluster
07:19 mbukatov joined #gluster
07:37 Jampy joined #gluster
07:42 raghug joined #gluster
07:43 Jampy Hi there! :) I'm using gluster as shared storage in a 3-node Proxmox HA cluster (via NFS layer). After rebooting one of the nodes, Gluster is healing, and access to some 200GB files results in I/O errors. Before rebooting "heal info" showed *no* entries. Gluster is currently still healing and I'm confident that everything will be working again once it finishes - however the VM can't start in the meantime. It's not the first time this happens. What's wrong
07:47 ctria joined #gluster
07:48 Jampy as expected, heal completed successfully and VM is up and running without any problems
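For anyone following a heal like Jampy's, progress can be watched from any server in the pool; both commands below exist in the 3.5 series (VOLNAME is a placeholder):

    # files still pending heal; empty output means the volume is clean
    gluster volume heal VOLNAME info
    # entries the self-heal daemon cannot reconcile on its own
    gluster volume heal VOLNAME info split-brain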
07:51 samppah Jampy: what version of GlusterFS are you using?
07:51 struck joined #gluster
07:54 Jampy oh, sorry, it's 3.5.2 (the current version in the Proxmox repository)
07:55 [Enrico] joined #gluster
07:57 struck Hi all, having an issue with my gluster setup. Feels like I'm missing something. Whenever a node goes down (clean or not) and comes back up, the self healing starts, which is nice. But then the machines (qcow2 disks) are stuck in read-only mode and not possible to use. Is there a setting that could prevent them from going to RO and keep them RW? Understand that there will be more data sent during the self healing period but that is fine. Run
07:57 Saravana_ joined #gluster
08:01 haomaiwa_ joined #gluster
08:02 samppah struck: Jampy is having a similar issue. I'm not 100% sure but I think that this issue might have been fixed in a newer version of Gluster
08:03 Jampy struck: what version are you using?
08:03 Jampy samppah: indeed. is there a bug report or something so I can learn more about it?
08:04 struck Version 3.6.2-2 which is the latest version I can find for debian wheezy (proxmox)
08:04 [Enrico] joined #gluster
08:04 struck Do you know if this bug maybe doesn't exist in older versions?
08:05 Jampy struck: yeah, sounds like my problem - although I'm using 3.5.2 and all my Proxmox (enterprise) packages are up-to-date.
08:05 struck hmm ok
08:07 samppah IIRC I hit this issue when I upgraded from 3.4 to 3.5
08:13 harish_ joined #gluster
08:13 samppah Jampy: sorry, I can't find bug report :(
08:15 struck found version 3.6.6, will update and test again
08:15 struck samppah, do you think it is fixed in an even later version (3.7+)?
08:18 Jampy samppah: where do you find those packages?
08:19 struck deb http://download.gluster.org/pub/gluster/glusterfs/3.6/LATEST/Debian/wheezy/apt
08:19 glusterbot Title: Index of /pub/gluster/glusterfs/3.6/LATEST/Debian/wheezy/apt (at download.gluster.org)
08:19 Jampy sorry, I meant struck ;)
08:19 struck np
08:19 Jampy thanks
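For completeness, wiring that repository into a wheezy box looks roughly like the sketch below. The key filename and the suite name ("wheezy main") are assumptions, so check the index page glusterbot linked above before copying this:

    wget -O - http://download.gluster.org/pub/gluster/glusterfs/3.6/LATEST/Debian/wheezy/apt/pub.key | apt-key add -
    echo "deb http://download.gluster.org/pub/gluster/glusterfs/3.6/LATEST/Debian/wheezy/apt wheezy main" \
        > /etc/apt/sources.list.d/gluster.list
    apt-get update && apt-get install glusterfs-server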
08:24 So4ring_ joined #gluster
08:25 arcolife joined #gluster
08:27 Jampy struck: when you have some results, could you please keep me informed (udo.giacomozzi@indunet.it)?
08:27 struck Jampy: sure, waiting for a self heal to complete atm, so later today probably
08:28 Jampy great, thanks
08:31 Jampy can I run Gluster 3.5 on some nodes and Gluster 3.6 on other nodes at the same time without risking anything?
08:32 Pupeno joined #gluster
08:37 haomaiwa_ joined #gluster
08:39 atalur joined #gluster
08:53 kovshenin joined #gluster
08:53 mhulsman joined #gluster
08:54 Slashman joined #gluster
09:01 17SADPPR7 joined #gluster
09:03 rraja joined #gluster
09:03 GB21 joined #gluster
09:05 sakshi joined #gluster
09:06 Pupeno joined #gluster
09:06 hagarth joined #gluster
09:09 karnan joined #gluster
09:11 deepakcs joined #gluster
09:17 raghu joined #gluster
09:20 hchiramm joined #gluster
09:24 mhulsman1 joined #gluster
09:36 vmallika joined #gluster
09:38 spalai left #gluster
09:39 Trefex joined #gluster
09:39 mator joined #gluster
09:39 mator http://www.gluster.org/pipermail/gluster-users/2015-October/023791.html
09:39 glusterbot Title: [Gluster-users] [IMPORTANT, PLEASE READ] replace-brick problem with all releases till now (at www.gluster.org)
09:51 ctria joined #gluster
09:52 haomaiwa_ joined #gluster
09:58 RedW joined #gluster
10:01 haomaiwa_ joined #gluster
10:03 aravindavk joined #gluster
10:04 kovshenin joined #gluster
10:16 aravindavk joined #gluster
10:18 aravindavk joined #gluster
10:20 aravindavk joined #gluster
10:23 aravindavk joined #gluster
10:29 spalai joined #gluster
10:36 gem joined #gluster
10:38 ron-slc joined #gluster
10:46 Bhaskarakiran joined #gluster
10:54 mhulsman joined #gluster
10:57 Trefex joined #gluster
11:00 jamesc joined #gluster
11:01 jamesc can I create a gluster pair from a cloned machine with all the initial data in it?
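No one answered, but the classic trap with cloned machines is that both clones carry the same glusterd UUID, which breaks peer probing. A hedged sketch of the usual remedy on the clone (the path is standard for this era; back the file up first):

    service glusterd stop
    # every node needs a unique UUID; deleting this file makes
    # glusterd generate a fresh one on the next start
    rm /var/lib/glusterd/glusterd.info
    service glusterd start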
11:01 haomaiwa_ joined #gluster
11:03 Bhaskarakiran joined #gluster
11:04 aravindavk joined #gluster
11:07 overclk joined #gluster
11:11 aravindavk joined #gluster
11:14 bluenemo joined #gluster
11:23 Bhaskarakiran joined #gluster
11:23 neofob joined #gluster
11:31 ppai joined #gluster
11:46 nbalacha joined #gluster
11:46 Bhaskarakiran joined #gluster
11:48 ssarah joined #gluster
11:51 ssarah Hey guys, when I did this to test my setup on one of my bricks: "sudo mount -t glusterfs brick1:/gv0 /mount" it took a really long while. Any idea why?
11:55 hchiramm joined #gluster
11:57 crashmag joined #gluster
11:58 nbalacha joined #gluster
12:01 haomaiwa_ joined #gluster
12:03 raghug joined #gluster
12:04 glusterbot joined #gluster
12:12 ppai joined #gluster
12:13 unclemarc joined #gluster
12:14 hchiramm joined #gluster
12:16 shubhendu_ joined #gluster
12:19 haomaiwa_ joined #gluster
12:19 poornimag joined #gluster
12:19 jtux joined #gluster
12:24 neha_ joined #gluster
12:27 sblanton joined #gluster
12:31 shubhendu__ joined #gluster
12:32 sblanton all my brick logs are spinning on: Permission denied occurred while creating symlinks
12:32 sblanton I'm on 3.5.6... any leads? Did a file system somewhere go read-only?
12:34 sblanton this is combined with other messages... like -marker: No data available occurred while creating symlinks
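One quick way to test sblanton's read-only theory on each brick server (the brick path is hypothetical):

    # list any filesystems currently mounted read-only
    awk '$4 ~ /^ro/ {print $2}' /proc/mounts
    # or probe a brick directly
    touch /data/brick1/.rw-test && rm /data/brick1/.rw-test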
12:39 firemanxbr joined #gluster
12:49 mpietersen joined #gluster
12:51 mpietersen joined #gluster
12:52 julim joined #gluster
12:55 ssarah I think I found a solution to my problem. Different firewall ports are used for >3.4
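That diagnosis matches the 49152+ brick-port range mentioned earlier, and it is easy to confirm which ports the bricks actually took and whether a client can reach them (hostnames and the volume name are placeholders):

    gluster volume status gv0     # shows the TCP port of every brick
    nc -zv brick1 24007           # management port reachable from the client?
    nc -zv brick1 49152           # first brick port reachable?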
13:01 haomaiwa_ joined #gluster
13:06 atinm joined #gluster
13:06 GB21 joined #gluster
13:06 GB21_ joined #gluster
13:10 shaunm joined #gluster
13:17 clutchk joined #gluster
13:24 anil joined #gluster
13:29 skylar joined #gluster
13:34 spcmastertim joined #gluster
13:36 shubhendu__ joined #gluster
13:38 aravindavk joined #gluster
13:40 ssarah joined #gluster
13:40 jwaibel joined #gluster
13:42 plarsen joined #gluster
13:44 dgandhi joined #gluster
13:46 ajneil joined #gluster
13:47 ajneil I should be able to find this in the docs somewhere but I have failed
13:47 ajneil is there a way to define a client as trusted so root_squash can be disabled?
13:48 harold joined #gluster
13:48 afics joined #gluster
13:56 struck Jampy: still there? no luck
13:56 jwaibel joined #gluster
13:56 mpietersen does anyone have experience with a geo-replicated volume failing to sync?
13:56 mpietersen my last synced date was from the 28th
13:57 mpietersen it's not in a failed state, however the slave brick has stopped growing as I copy data over
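A reasonable first check for a stalled session like mpietersen's is the geo-replication status output, which in releases of this era can show per-worker detail (volume and host names are placeholders):

    gluster volume geo-replication mastervol slavehost::slavevol status detail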
14:01 haomaiwa_ joined #gluster
14:02 nishanth joined #gluster
14:05 _maserati joined #gluster
14:05 EinstCrazy joined #gluster
14:13 shyam joined #gluster
14:14 plarsen joined #gluster
14:17 nbalacha joined #gluster
14:19 sblanton I have self-heal going on and it appears to have significantly degraded client performance.
14:19 sblanton How do I turn off self-heal?
14:21 sblanton Also, even though all my clients are 3.5.x, I get this message that prevents me from making changes:
14:21 sblanton volume set: failed: One or more connected clients cannot support the feature being set. These clients need to be upgraded or disconnected before running this command again
14:22 sblanton this was for a parameter that existed since 3.2: sudo gluster vol set garchive-1 cluster.data-self-heal-algorithm diff
14:23 _maserati_ joined #gluster
14:25 sblanton ok, I triggered a full heal - any way to stop that?
14:33 struck power down the node, change settings and power up again?
14:35 sblanton thanks
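For reference, these are the knobs sblanton was reaching for; all of them exist in the 3.5 series, though the op-version error he hit may block them until the offending clients disconnect:

    # stop the background self-heal daemon
    gluster volume set garchive-1 cluster.self-heal-daemon off
    # stop heals triggered from clients on file access
    gluster volume set garchive-1 cluster.data-self-heal off
    gluster volume set garchive-1 cluster.metadata-self-heal off
    gluster volume set garchive-1 cluster.entry-self-heal off
    # list the clients currently connected to the volume
    gluster volume status garchive-1 clients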
14:35 Bhaskarakiran joined #gluster
14:41 bowhunter joined #gluster
15:00 haomaiwa_ joined #gluster
15:01 haomaiwang joined #gluster
15:01 EinstCrazy joined #gluster
15:05 wolsen joined #gluster
15:18 cholcombe joined #gluster
15:26 jwd_ joined #gluster
15:45 raghug joined #gluster
15:50 So4ring joined #gluster
15:56 _maserati Anyone got a good link for debugging why my geo-replication is "Faulty" on all nodes and how to get rid of it? Or does anyone feel like helping me through it?
15:57 So4ring joined #gluster
16:09 _maserati @geo-replication faulty
16:10 _maserati @geo faulty
16:10 _maserati @geo
16:10 glusterbot _maserati: I do not know about 'geo', but I do know about these similar topics: 'geo-replication'
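glusterbot's factoid aside, a Faulty session usually explains itself in the worker logs; the paths below are the defaults of this era, so treat them as assumptions:

    # on the master nodes
    less /var/log/glusterfs/geo-replication/<mastervol>/*.log
    # on the slave nodes
    less /var/log/glusterfs/geo-replication-slaves/*.log
    # restart the session once the cause is fixed
    gluster volume geo-replication <mastervol> <slavehost>::<slavevol> stop
    gluster volume geo-replication <mastervol> <slavehost>::<slavevol> start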
16:10 pdrakewe_ joined #gluster
16:11 Jampy struck: still the same healing problem with version 3.6.6? :(
16:27 cliluw joined #gluster
16:30 overclk joined #gluster
16:34 Rapture joined #gluster
16:37 dlambrig joined #gluster
16:41 muneerse joined #gluster
16:47 krink joined #gluster
16:47 hagarth joined #gluster
16:49 F2Knight joined #gluster
16:49 atinm joined #gluster
16:51 luis_silva joined #gluster
16:56 jamesc joined #gluster
17:01 aravindavk joined #gluster
17:14 mhulsman joined #gluster
17:14 jwaibel joined #gluster
17:16 overclk joined #gluster
17:21 ajneil is there a way to define a client as trusted so root_squash can be disabled?  I should be able to find this in the docs but no luck so far
17:27 vimal joined #gluster
17:32 ajneil from my research it seems that there is no way to disable root-squashing on a client-by-client basis.  Can anyone confirm?
17:37 dlambrig joined #gluster
17:44 squizzi_ joined #gluster
17:49 hchiramm_home joined #gluster
17:58 JoeJulian ajneil: correct
17:58 JoeJulian unless that client is also a peer in that trusted pool.
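In other words, the only knob available in this era is volume-wide (the volume name is a placeholder):

    # disables root-squashing for every client of the volume, not per-client
    gluster volume set myvol server.root-squash off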
18:09 atinm joined #gluster
18:18 arcolife joined #gluster
18:20 ajneil JoeJulian: any plans in the works to change this behaviour?
18:31 JoeJulian ajneil: Not that I'm aware of. Feel free to file a bug report if you see a need that's not being met.
18:31 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
18:35 ajneil OK thanks
18:46 dlambrig joined #gluster
18:49 beeradb_ joined #gluster
18:51 jbrooks joined #gluster
18:56 shyam joined #gluster
18:57 jwaibel joined #gluster
19:33 julim joined #gluster
19:50 David_Vargese joined #gluster
19:51 So4ring joined #gluster
20:05 dlambrig joined #gluster
20:07 plarsen joined #gluster
20:12 So4ring joined #gluster
20:16 squizzi joined #gluster
20:32 shyam joined #gluster
20:35 Rapture joined #gluster
20:47 Rapture joined #gluster
21:00 gildub joined #gluster
21:00 jwaibel joined #gluster
21:18 dlambrig joined #gluster
21:41 marlinc joined #gluster
22:21 neofob joined #gluster
22:46 beeradb_ joined #gluster
22:48 plarsen joined #gluster
22:52 neofob left #gluster
22:58 jrdn joined #gluster
23:44 dlambrig joined #gluster
23:59 dlambrig joined #gluster
