
IRC log for #gluster, 2016-06-16


All times shown according to UTC.

Time Nick Message
00:09 Alghost joined #gluster
00:17 mattmcc joined #gluster
01:23 Lee1092 joined #gluster
01:41 haomaiwang joined #gluster
01:47 ilbot3 joined #gluster
01:47 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:21 plarsen joined #gluster
02:30 amye joined #gluster
02:32 haomaiwang joined #gluster
02:59 wiza joined #gluster
02:59 aravindavk joined #gluster
03:11 Kins joined #gluster
03:14 swebb joined #gluster
03:24 harish joined #gluster
03:24 arcolife joined #gluster
03:36 atinm joined #gluster
03:39 nishanth joined #gluster
03:49 haomaiwang joined #gluster
03:56 PaulCuzner joined #gluster
03:57 RameshN joined #gluster
03:59 shubhendu joined #gluster
04:09 itisravi joined #gluster
04:11 nehar joined #gluster
04:16 gem joined #gluster
04:28 nbalacha joined #gluster
04:33 harish_ joined #gluster
04:33 ramky joined #gluster
04:35 d-fence joined #gluster
04:35 d-fence_ joined #gluster
04:42 aspandey joined #gluster
04:48 [o__o] joined #gluster
04:51 nbalacha joined #gluster
04:52 rastar joined #gluster
04:57 prasanth joined #gluster
05:01 karthik___ joined #gluster
05:11 hgowtham joined #gluster
05:17 ndarshan joined #gluster
05:18 sakshi joined #gluster
05:21 aravindavk joined #gluster
05:23 ppai joined #gluster
05:25 [o__o] joined #gluster
05:26 raghug joined #gluster
05:30 Apeksha joined #gluster
05:35 skoduri joined #gluster
05:38 jww joined #gluster
05:38 nehar_ joined #gluster
05:41 aspandey joined #gluster
05:47 satya4ever_ joined #gluster
05:50 atalur joined #gluster
05:52 rafi joined #gluster
05:55 jiffin joined #gluster
05:57 kshlm joined #gluster
05:57 jww joined #gluster
06:01 msvbhat_ joined #gluster
06:01 karnan joined #gluster
06:08 Humble joined #gluster
06:15 spalai joined #gluster
06:19 anil_ joined #gluster
06:19 jtux joined #gluster
06:23 gowtham joined #gluster
06:32 poornimag joined #gluster
06:36 skoduri joined #gluster
06:40 karnan joined #gluster
06:44 atalur joined #gluster
06:53 Humble joined #gluster
06:55 ndarshan joined #gluster
07:04 ashiq joined #gluster
07:11 jri joined #gluster
07:11 pur_ joined #gluster
07:12 Manikandan joined #gluster
07:15 ivan_rossi joined #gluster
07:16 hackman joined #gluster
07:16 Saravanakmr joined #gluster
07:22 sakshi joined #gluster
07:27 [Enrico] joined #gluster
07:28 fsimonce joined #gluster
07:36 Wizek joined #gluster
07:36 Wizek_ joined #gluster
07:39 nehar_ joined #gluster
07:41 DV joined #gluster
07:45 haomaiwang joined #gluster
07:47 DV__ joined #gluster
07:55 Slashman joined #gluster
07:59 kdhananjay joined #gluster
08:02 msvbhat joined #gluster
08:34 kovshenin joined #gluster
08:35 msvbhat joined #gluster
08:40 ndarshan joined #gluster
08:41 Anarka joined #gluster
08:43 muneerse joined #gluster
08:44 deniszh joined #gluster
08:47 nbalacha joined #gluster
08:48 atinm joined #gluster
08:57 atalur joined #gluster
09:03 Dogethrower joined #gluster
09:35 muneerse2 joined #gluster
09:37 haomaiwang joined #gluster
09:39 Apeksha joined #gluster
09:46 shdeng joined #gluster
10:16 [o__o] joined #gluster
10:21 atinm joined #gluster
10:23 nbalacha joined #gluster
10:23 msvbhat joined #gluster
10:25 ira_ joined #gluster
10:28 bfoster joined #gluster
10:29 kotreshhr joined #gluster
10:29 gem joined #gluster
10:30 karthik___ joined #gluster
10:50 d0nn1e joined #gluster
10:53 atinm joined #gluster
10:57 paul98 joined #gluster
10:58 paul98 hey, i'm using glusterfs on a linux machine, i've mapped an iscsi device from windows to it, but it only writes to one server rather than both, do i need to do something different?
10:59 karnan_ joined #gluster
11:02 magrawal joined #gluster
11:05 magrawal joined #gluster
11:07 gowtham joined #gluster
11:10 paul98 anyone?
11:10 Jules- joined #gluster
11:13 sakshi joined #gluster
11:16 Apeksha joined #gluster
11:18 arcolife joined #gluster
11:18 atalur joined #gluster
11:18 nehar_ joined #gluster
11:18 prasanth joined #gluster
11:19 kkeithley I'm not sure what it means when you say you mapped an iSCSI device to gluster.  Off hand I'd think you mapped an iSCSI device, created a file system on it, and used it as a brick.  What is the other server?
11:20 kkeithley How did you create the volume? If you didn't use '...replica 2 ...' in your volume create command then you got a distribute volume, and it would not write to both.
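[Note: for reference, a two-brick replicated volume like the one paul98 describes is created along these lines; the brick paths are the ones that appear later in the log (11:52), everything else is illustrative.]

    # sketch: create and start a 2-way replicated volume, one brick per server
    gluster volume create windows replica 2 \
        10.254.253.114:/storage/windows \
        192.168.101.47:/storage/windows
    gluster volume start windows
    gluster volume info windows    # should report "Type: Replicate"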
11:21 paul98 sorry i'll make it a bit clearer!
11:21 paul98 i've got replica 2, 2 servers with glusterfs running, and if i mount the volume remotely on a linux machine it writes to both
11:22 paul98 so then i followed http://www.gluster.org/community/documentation/index.php/GlusterFS_iSCSI
11:22 paul98 created an iscsi target on one of the glusterfs servers
11:22 paul98 i then went to the windows server and created an iscsi target
11:22 paul98 but when i write to the iscsi target on the windows server it only writes to one glusterfs server
11:22 paul98 and doesn't replicate to the other one
11:23 paul98 do i need the iscsi target created on both gluster servers, or just the one?
11:26 paul98 this is my iscsi target on the glusterfs
11:26 paul98 http://pastebin.centos.org/46946/
11:26 rastar joined #gluster
11:26 johnmilton joined #gluster
11:29 kkeithley okay, now I understand.  No, you shouldn't need the iscsi tgt on both gluster servers.  I don't know why the writes aren't going to both bricks.
11:30 paul_ joined #gluster
11:30 paul981 grrr stupid internet
11:30 paul981 is there anywhere i could look, e.g. logs etc? as i can mount from my linux machine and it works
11:34 kkeithley /var/log/glusterfs/bricks/...
11:35 kkeithley what is the cmd line you used to create the gluster volume? Specifically where are the bricks.   Your iscsi tgt config says the backing store is /dev/vol_grp/windows.  Is that one of your bricks?
11:36 kkeithley You need to write to a (fuse) mounted gluster volume in order for replication to work. You should never write directly to the brick directory.
11:41 paul981 how do i list the bricks again
11:41 paul981 i'll show you the config
11:43 johnmilton joined #gluster
11:48 paul981 http://pastebin.centos.org/46951/ - kkeithley
11:50 kkeithley okay, so /dev/vol_grp/windows is not either of the bricks.
11:51 paul981 i'm not to sure what you mean
11:51 paul981 it's all new to me
11:52 kkeithley you have a gluster volume named "windows." It is comprised of two bricks: 10.254.253.114:/storage/windows and 192.168.101.47:/storage/windows
11:53 paul981 yup
11:53 kkeithley your iscsi tgt config says the backing store for the tgt device is /dev/vol_grp/windows, which is not either of the bricks
11:54 paul981 but the /dev/vol_grp/windows is the partition
11:54 kkeithley I don't know the gluster iscsi integration, but I would sorta expect that you would mount the gluster volume (somewhere) and use that for the iscsi backing store.
11:55 kkeithley /dev/vol_grp/windows is the partition that /storage/windows is on?
11:55 kkeithley that 192.168.101.47:/storage/windows is on?
11:55 paul981 /dev/mapper/vol_grp-data 4.0T  195M  3.8T   1% /storage/data
11:55 paul981 /dev/mapper/vol_grp-windows 1008G  200M  957G   1% /storage/windows
11:56 paul981 yup
11:56 paul981 both setup the same on each server
11:56 jith_ joined #gluster
11:57 kkeithley like I said, I don't know the gluster iscsi integration, but the iscsi tgt feels wrong.
11:58 kkeithley I would guess that you should mount the gluster volume, e.g. `mount -t glusterfs 192.168.101.47:windows $somewhere`
11:58 kkeithley and then the iscsi tgt backing store would be $somewhere.
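[Note: a minimal sketch of what kkeithley is suggesting, assuming the tgt daemon with /etc/tgt/targets.conf; the mount point, image file name and IQN below are made up for illustration, not taken from paul98's pastebin.]

    # mount the gluster volume over fuse and create a file on it to act as the LUN
    mkdir -p /mnt/gluster-windows
    mount -t glusterfs 192.168.101.47:/windows /mnt/gluster-windows
    truncate -s 900G /mnt/gluster-windows/iscsi-store.img

    # /etc/tgt/targets.conf -- point the backing store at the file on the fuse mount,
    # not at the raw /dev/vol_grp/windows LV
    <target iqn.2016-06.local.gluster:windows>
        backing-store /mnt/gluster-windows/iscsi-store.img
    </target>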
11:58 paul981 hmm i don't know
11:59 paul981 as the iscsi tgt has been created and on windows you just discover the target
12:01 paul981 cause the backing store path is mapping to the partition
12:01 nehar_ joined #gluster
12:01 paul981 rather than the brick
12:02 paul981 will go have a look at that, thanks kkeithley for the help :)
12:03 robb_nl joined #gluster
12:03 kkeithley correct, you don't want to write to the brick. But you do have to write to the volume somehow for replication to work
12:05 paul981 but the volume would be /storage/windows rather than the partition which is /dev/mapper/vol_grp/data
12:05 paul981 ?
12:06 prasanth joined #gluster
12:06 kkeithley yeah, that's the part I don't understand either. :-/
12:07 paul981 i'll try it, i've nothing to lose as it doesn't currently work and i have no data
12:07 paul981 see if i can change the target to /storage/windows rather than /dev/mapper/vol_grp/data
12:07 paul981 thanks for the info, i'm off to watch england beat wales! :D
12:15 guhcampos joined #gluster
12:19 kotreshhr joined #gluster
12:21 Ulrar So is 3.8 not out for debian yet? Or am I just terrible at figuring out the repo url?
12:21 Ulrar Ha, think I might have found it
12:22 jith_ hi all, i am very new to glusterfs, but i have read the ceph docs and part of the glusterfs docs. for adding nodes to a cluster, in ceph we initially set up passwordless ssh so we can easily add the nodes to the cluster, but in the case of glusterfs there is no such setup, so just a 'gluster peer probe node2' command will be enough to add a new node. how come that is possible? so anyone in the same
12:22 jith_ network can add node2 without node2's permission??? i'm sorry if i am wrong
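[Note: jith_'s question goes unanswered in the log. For reference, pool membership is managed with peer probe as below; to the best of my understanding, a glusterd that already belongs to a trusted pool only accepts probes from members of that pool, so only a fresh, un-pooled node can be probed by an arbitrary host that can reach it on port 24007.]

    # run from a node that is already in (or starting) the trusted pool
    gluster peer probe node2
    gluster peer status    # node2 should now be listed in the trusted storage pool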
12:23 prasanth joined #gluster
12:24 Ulrar Okay yeah I found it, nevermind :)
12:32 Jules- Ulrar: are you upgrading to 3.8 right now? Tell me how it's running!
12:34 Ulrar Jules-: Yep, I'm re-installing my test cluster right now with it, I have a test that revealed a bug almost every time in 3.7.11
12:35 Ulrar I hope it'll run fine on 3.8 :)
12:38 Manikandan joined #gluster
12:39 julim joined #gluster
12:40 Jules- Ulrar: Wasn't it this chown/chmod bug on nfs share?
12:41 Ulrar Jules-: Nop, haven't run into that one for a while. It was I/O errors when running VMs over a sharded volume
12:42 atinm joined #gluster
12:48 gowtham joined #gluster
12:51 haomaiwang joined #gluster
12:53 cornfed78 joined #gluster
12:53 ben453 joined #gluster
12:59 hgowtham joined #gluster
13:20 msvbhat joined #gluster
13:28 Wizek joined #gluster
13:28 haomaiwang joined #gluster
13:29 Wizek_ joined #gluster
13:32 kotreshhr left #gluster
13:32 haomaiwang joined #gluster
13:34 atalur joined #gluster
13:38 atalur_ joined #gluster
13:44 nehar_ joined #gluster
13:47 Jules- a two node glusterfs seems not possible anymore?! i set: cluster.quorum-type: fixed, cluster.quorum-count: 1, cluster.server-quorum-type: none having one node up but get -> gluster volume start nfs-storage -> volume start: nfs-storage: failed: Quorum not met. Volume operation not allowed.
13:47 Jules- why is that?
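[Note: for readers following along, the options Jules- lists are set roughly as below (volume name taken from the log); the final start is the command that fails for him.]

    gluster volume set nfs-storage cluster.quorum-type fixed
    gluster volume set nfs-storage cluster.quorum-count 1
    gluster volume set nfs-storage cluster.server-quorum-type none
    gluster volume start nfs-storage
    # -> volume start: nfs-storage: failed: Quorum not met. Volume operation not allowed.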
13:49 Ulrar Proxmox doesn't seem compatible with 3.8 yet, so I guess I'll wait a while before I can give 3.8 a shot :(
13:49 guhcampos joined #gluster
13:50 muneerse joined #gluster
13:52 Jules- Ulrar: how come?
13:53 Ulrar It times out and acts weird, I guess 3.8 isn't just a bunch of bugfixes, there must be actual change
13:54 Ulrar Either that or 3.8 really really doesn't work, but that doesn't seem likely
13:54 Ulrar Seems fine over fuse, no reason libgfapi wouldn't work I guess
13:56 amye joined #gluster
13:56 Jules- so you can't start your vm's ?
13:56 RameshN joined #gluster
13:57 Ulrar I can't even create them
13:58 Ulrar yeah I just tried, if I add the fuse mountpoint as directory, it works fine
13:58 Ulrar So the problem is proxmox and the libgfapi
13:58 Jules- what a crap. same here.
13:58 Ulrar well, qemu I guess
14:00 lalatenduM joined #gluster
14:01 kkeithley Jules, Ulrar: any smoking guns in the log files?
14:02 Ulrar kkeithley: Yeah, a colleague got something
14:03 Ulrar [2016-06-16 13:14:58.382539] E [glfs-fops.c:806:glfs_io_async_cbk] (-->/usr/lib/x86_64-linux-gnu/glusterfs/3.8.0/xlator/debug/io-stats.so(+0x115d3) [0x7f48037ac5d3] -->/usr/lib/x86_64-linux-gnu/libgfapi.so.0(+0xbdad) [0x7f481a92ddad] -->/usr/lib/x86_64-linux-gnu/libgfapi.so.0(+0xbcd3) [0x7f481a92dcd3] ) 0-gfapi: invalid argument: iovec [Invalid argument]
14:03 glusterbot Ulrar: ('s karma is now -138
14:03 Ulrar Poor (
14:04 Ulrar (we are two here trying to make that work on two different clusters)
14:06 jiffin joined #gluster
14:17 jiffin joined #gluster
14:18 squizzi joined #gluster
14:21 nbalacha joined #gluster
14:39 plarsen joined #gluster
14:42 B21956 joined #gluster
14:47 Jules- why cluster.server-quorum-type: none isn't functional anymore?
15:02 muneerse joined #gluster
15:05 muneerse2 joined #gluster
15:06 hagarth joined #gluster
15:11 kshlm joined #gluster
15:11 muneerse joined #gluster
15:21 muneerse2 joined #gluster
15:22 MrAbaddon joined #gluster
15:24 harish_ joined #gluster
15:34 plarsen joined #gluster
15:34 skoduri joined #gluster
15:38 muneerse joined #gluster
15:41 skylar joined #gluster
15:41 muneerse2 joined #gluster
15:50 plarsen joined #gluster
15:51 muneerse2 joined #gluster
15:53 Manikandan joined #gluster
15:53 plarsen joined #gluster
15:53 plarsen joined #gluster
15:54 jbrooks joined #gluster
15:54 arif-ali joined #gluster
15:54 rafi1 joined #gluster
15:57 muneerse joined #gluster
16:03 muneerse2 joined #gluster
16:15 squizzi_ joined #gluster
16:18 muneerse joined #gluster
16:22 muneerse2 joined #gluster
16:24 tertiary joined #gluster
16:26 Gambit15 joined #gluster
16:32 muneerse joined #gluster
16:40 Lee1092 joined #gluster
16:43 kpease joined #gluster
16:44 muneerse2 joined #gluster
16:59 jri joined #gluster
17:10 jiffin joined #gluster
17:20 ben453 @JoeJulian: it looks like trying to create a volume using 3 pre-populated bricks does not leave gluster in a valid state. Even though all of the gfids are the same for every file, explicitly trying to full heal from that state still fails
17:21 ben453 with the message: Launching heal operation to perform full self heal on volume <volname> has been unsuccessful on bricks that are down. Please check if all brick processes are running.
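[Note: the error ben453 quotes is what a manual full heal prints; the sequence is presumably something like the following, with <volname> as in his message.]

    gluster volume heal <volname> full     # launch the full self-heal that fails for ben453
    gluster volume heal <volname> info     # per-brick list of entries still pending heal
    gluster volume status <volname>        # confirm all brick and self-heal daemon processes are up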
17:33 JoeJulian Odd, I wonder why. Have you looked in the glustershd logs?
17:33 msvbhat joined #gluster
17:38 jlp1 joined #gluster
17:40 jlp1 if i have a volume that is replica 2 with 2 bricks.  is there a way to make it available to both hosts when they are unable to communicate?  the writes to the volume are minimal and can be controlled to prevent split-brain.  i tried cluster.quorum-type fixed and cluster.quorum-count 1, but the volume was still unavailable to the host with the second brick when the network was disconnected.
17:43 rafi joined #gluster
17:46 jlp1 i dont really even care if the volume goes read only which is what i thought would happen.  the volume is just unavailable altogether
17:48 ben453 @JoeJulian I just checked the glustershd log and it wasn't outputting anything when I tried to explicitly do a full heal (but nothing looked weird). I also tried running the daemon in debug mode to see if there were any errors but there weren't any
17:49 Jules- joined #gluster
17:49 Jules- @jlp1: you have to set: cluster.server-quorum-type: none
17:50 Jules- but it seems broken since 3.7.8 or so..
17:51 Jules- cluster.quorum-type fixed and cluster.quorum-count 1 is for write protection.
17:51 jlp1 thanks Jules.  I'll give it a try.  I have 3.7.11, so it likely won't work, but worth a try
17:51 Jules- i tested it today with 3.8 but it still doesn't work. was on 3.7.11 before
17:52 jlp1 so, my default value is off.  do you think none and off are different?
17:53 Jules- good question, probably they changed their syntax. But in the official docs they always named it: none
17:53 jlp1 thanks for the info
17:53 JoeJulian none, off, 0, false are all the same
17:54 Jules- none | server
17:54 jlp1 must just be broken like you already mentioned
17:55 jlp1 i guess i'll have to think of a different way to replicate in this case
17:56 arif-ali joined #gluster
17:56 Jules- for me, glusterfs is currently useless if the function doesn't work like it did earlier.
18:01 JoeJulian Jules-: Do you have a bug report for that?
18:02 Jules- Yes, just opened #1347329
18:02 JoeJulian bug 1347329
18:02 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1347329 high, unspecified, ---, bugs, NEW , a two node glusterfs seems not possible anymore?!
18:02 Jules- very easy to reproduce
18:03 JoeJulian (I'm lazy and want glusterbot to just post me the link)
18:03 Jules- and brought me a week of trouble.
18:03 JoeJulian Did you try cluster.quorum-type none?
18:04 JoeJulian cluster quorum and server quorum are two different things.
18:04 JoeJulian Theoretically you should be able to "gluster volume reset" those settings to get to a state that would work without quorum.
18:05 Jules- no i didn't since cluster.quorum-type is for write protection only.
18:06 Jules- and as i said. it worked since 3.7.8 or so.
18:06 JoeJulian Only if cluster.quorum-reads is true
18:06 JoeJulian My guess would be that cluster.quorum-reads was broken before 3.7.8 maybe?
18:06 Jules- finally i just follow what the docs gave me as info
18:08 Jules- and for my understanding: cluster.server-quorum-type was the one that shuts the bricks offline.
18:08 JoeJulian If I understand your needs, you don't want any quorum interference, yes?
18:09 Jules- correct
18:09 JoeJulian I think the defaults will work best. Testing that theory now.
18:10 Jules- i tested defaults before 3.8 and that didn't work either
18:12 rafi joined #gluster
18:12 kotreshhr joined #gluster
18:13 rafi joined #gluster
18:14 Jules- same effect. just tried
18:16 rafi joined #gluster
18:20 tertiary joined #gluster
18:21 tertiary to remove a brick from a storage pool without data loss, all i need t do is `remove-brick`?
18:22 JoeJulian tertiary: in theory. Always verify.
18:23 tertiary JoeJulian: ok, thanks
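[Note: a sketch of the usual data-preserving remove-brick sequence on a distributed volume; volume and brick names are hypothetical. On a replicated volume the brick is instead removed together with a reduced replica count.]

    gluster volume remove-brick myvol server1:/bricks/b1 start    # begin migrating data off the brick
    gluster volume remove-brick myvol server1:/bricks/b1 status   # wait until it reports "completed"
    gluster volume remove-brick myvol server1:/bricks/b1 commit   # then detach the brick
    # per JoeJulian: always verify the data afterwards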
18:23 JoeJulian Oh! Jules-: I see where the issue is. You're trying to *start* the volume without quorum. I was thinking you were talking about *using* the volume without quorum.
18:25 Jules- that doesn't even matter
18:25 JoeJulian Why not?
18:25 Jules- because it won't come up either without manual intervention
18:26 Jules- i have to start it with force option no matter what switch i set.
18:27 JoeJulian Is the problem *starting* the volume, or *using* the volume?
18:28 JoeJulian If it's starting the volume, I don't think I can help you. That decision was made because changing the parameters of a volume while the management daemons are disconnected causes volume definition split-brain.
18:30 JoeJulian And yes, re-confirmed that *using* a replica 2 volume (both reading and writing) with one replica down does work with the default settings.
18:31 JoeJulian 3.7.11
18:31 Jules- what decision? that volumes won't come up if the other is gone?
18:32 ira_ joined #gluster
18:32 JoeJulian bug 1177132
18:32 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1177132 medium, unspecified, ---, ggarg, CLOSED CURRENTRELEASE, glusterd: when there is loss in quorum then it should block all operation
18:33 Jules- to me it sounds like a regression after this fix
18:33 JoeJulian Hmm, that doesn't look like the complete discussion.
18:34 JoeJulian I know I saw more about it...
18:35 Jules- have you tried to reproduce my issue with the steps i wrote in my bug report?
18:35 JoeJulian So I'm curious - why would you want to start a volume where one server was down?
18:35 shubhendu joined #gluster
18:36 Jules- lets say on boot for example?
18:37 JoeJulian Because here's what would happen if you start volume foo while you can only access server1. 1) gluster volume start foo force 2) server1 comes up 3) server2 comes back but is down 4) gluster volume start foo force (volume is already started)
18:38 JoeJulian Ah. But once the volume is started, it doesn't need to change state.
18:38 JoeJulian Let's see if that applies to your issue... one sec.
18:40 JoeJulian Ok, I see the real issue.
18:44 Jules- the start command is just an example for the issue
18:45 Jules- as i said, the only way i can bring node1 back online while node2 is failing is to use start with force.
18:45 JoeJulian It's a red herring. The real issue for you is that there's no way (that I've found yet) to boot a machine and have the volume accessible if the servers are out of quorum.
18:46 Jules- but i disabled quorum
18:46 JoeJulian That's what I thought too.
18:46 Jules- what's the use of a disable switch if it doesn't apply?
18:47 Jules- it doesn't even work if you set all quorum types to none, off or 0
18:48 JoeJulian Ok, found the workaround. Reset the cluster.quorum-*. Set "cluster.server-quorum-type server" and "cluster.server-quorum-ratio 0"
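[Note: JoeJulian's workaround, spelled out as commands; a sketch only. As far as I know, cluster.server-quorum-ratio is a cluster-wide option and is therefore set on "all" rather than on a single volume.]

    gluster volume reset nfs-storage cluster.quorum-type
    gluster volume reset nfs-storage cluster.quorum-count
    gluster volume set nfs-storage cluster.server-quorum-type server
    gluster volume set all cluster.server-quorum-ratio 0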
18:51 sage__ joined #gluster
18:55 Jules- tried that. that won't bring up the gnfs daemon
18:56 JoeJulian Ah, you're right.
18:56 JoeJulian Do you have more than one volume?
18:57 Jules- just one with nfs enabled
18:57 JoeJulian But I don't have more than one volume... That's irrelevant.
18:57 JoeJulian hmm
18:57 JoeJulian I don't see how it ever gets there. glusterd_is_any_volume_in_server_quorum should return false so does_gd_meet_server_quorum shouldn't even get tested.
19:01 ank joined #gluster
19:03 Jules- can you post that finding in the ticket i opened
19:06 JoeJulian I will. I'm just not quite done digging yet.
19:06 Jules- thanks!
19:19 Jules- the funny thing is: if you kick in start with force on one of the volumes, the nfs server comes up.
19:21 skoduri joined #gluster
19:29 johnmilton joined #gluster
19:30 JoeJulian Found the source of the second half of the problem: bug 948686
19:30 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=948686 unspecified, unspecified, ---, kparthas, CLOSED CURRENTRELEASE, Glusterd updates list of volumes and peers in a race'y manner.
19:30 JoeJulian Oh, wait, no.
19:32 JoeJulian I read that wrong.
19:44 amye joined #gluster
19:46 jlp1 Jules: i set cluster.server-quorum-type server and cluster.server-quorum-ratio 49%, which seems to be working for me
19:47 wnlx joined #gluster
19:47 JoeJulian I tested that and it's true, neither the nfs nor the shd start unless you manually start..force.
19:48 JoeJulian Sure, the self-heal daemon not starting doesn't matter (there's nothing for it to heal to anyway) but the nfs daemon is important.
19:49 ttkg joined #gluster
19:57 Wizek joined #gluster
19:58 hagarth joined #gluster
19:59 Wizek_ joined #gluster
20:11 DV joined #gluster
20:23 rhqq joined #gluster
20:24 rhqq greetings. is it common for a volume with 0/0 writes to have 20k lookup operations, all within 60 seconds?
20:24 rhqq 0/0 read/writes
20:25 rhqq sorry 8k lookup and 11k readdirp
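[Note: the per-fop counts rhqq quotes presumably come from volume profiling, along these lines; "myvol" is a placeholder.]

    gluster volume profile myvol start    # enable profiling (small overhead)
    gluster volume profile myvol info     # cumulative and per-interval fop counts/latencies per brick
    gluster volume top myvol readdir      # another view: which directories generate the readdir load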
20:27 javi404 joined #gluster
20:38 guhcampos joined #gluster
20:44 d0nn1e joined #gluster
20:56 javi404 joined #gluster
21:18 Arrfab joined #gluster
21:43 aphorise joined #gluster
21:57 cloph joined #gluster
22:04 hackman joined #gluster
22:07 DV joined #gluster
22:10 cloph Hi - got a problem with geo-replication: gluster volume geo-replication fileshare slavehost::fileshare status shows the replication as status=Active, in History Crawl, but it seems to be stuck that way for a long time already. However, trying to call pause (or config, for that matter) on the volume only results in errors from peers that are not part of the geo-replication:
22:11 cloph # gluster volume geo-replication fileshare slavehost::fileshare pause
22:11 cloph Staging failed on peer-with-no-georep. Error: Geo-replication session between fileshare and slavehost::fileshare does not exist.
22:11 cloph Staging failed on another-wo-geo-rep. Error: Geo-replication session between fileshare and slavehost::fileshare does not exist.
22:11 cloph geo-replication command failed
22:11 cloph Any hints on what goes wrong here?
22:19 manolopm joined #gluster
22:22 rhqq don't expect to be answered, this channel seems to be dead
22:23 rhqq i've been waiting 2 hours for anyone to give me a hint on my question ;)
22:24 manolopm Hi. I have a big question. I have 2 servers with 2 disks of 2 TB each. I've been thinking of creating a volume with stripe 2 replica 2, making a brick of each disk and using server1:/b1 server2:/b1 server1:/b2 server2:/b2. This seems to work fine: 4TB of space, and if one disk or even one server fails the volume is still there. But I just found disperse volumes, and I understand that disperse 4 redundancy 2 works in the same way. Any s
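[Note: manolopm's striped-replicated layout would be created roughly as below (GlusterFS 3.x syntax). On the disperse side, my understanding is that gluster requires the number of bricks to be greater than 2 x redundancy, so "disperse 4 redundancy 2" on 4 bricks would be refused; disperse 3 redundancy 1 or disperse 6 redundancy 2 would be accepted.]

    # 2 x 2 striped-replicated volume: consecutive brick pairs form the replica sets
    gluster volume create bigvol stripe 2 replica 2 \
        server1:/b1 server2:/b1 server1:/b2 server2:/b2

    # dispersed attempt -- likely rejected with 4 bricks and redundancy 2 (see note above)
    # gluster volume create bigvol disperse 4 redundancy 2 \
    #     server1:/b1 server2:/b1 server1:/b2 server2:/b2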
22:24 manolopm rhqq: What was your question?
22:26 rhqq i've a gluster setup that generates high cpu usage. volume profile says i've 0/0 read/writes, yet ~10k lookup and ~10k readdirp fops per minute
22:27 rhqq and i was wondering whether this is normal; basically after 2 years of uptime the cpu suddenly spiked through the roof for no reason whatsoever
22:28 manolopm I'm really new to gluster, but I'd try checking cpu and disk use with something like dstat; maybe you're swapping, or maybe some process is stuck in D state (that could be a sign of a hard drive failing)
22:47 rhqq yeah, just ensured
22:47 rhqq total 0 iops
22:47 rhqq gluster using up all the cpu cycles
22:58 caitnop joined #gluster
23:27 amye joined #gluster
23:40 paul981 joined #gluster
23:46 rafi joined #gluster
