
IRC log for #gluster, 2016-08-29


All times shown according to UTC.

Time Nick Message
00:11 juhaj joined #gluster
00:30 ahino joined #gluster
00:48 Alghost joined #gluster
00:53 shdeng joined #gluster
01:07 Alghost joined #gluster
01:08 Alghost_ joined #gluster
01:28 juhaj joined #gluster
01:45 Lee1092 joined #gluster
01:48 aj__ joined #gluster
02:19 harish joined #gluster
02:32 hagarth joined #gluster
03:03 Gambit15 joined #gluster
03:04 eightyeight /var/lib/glusterd/hooks/1/start/post/S29CTDBsetup.sh is not editing /etc/samba/smb.conf as the docs say it should be
03:05 eightyeight and /var/lib/glusterd/hooks/1/stop/pre/S29CTDB-teardown.sh is not editing /etc/fstab nor unmounting /gluster/lock as per the docs
03:05 eightyeight in fact, i'm not even sure it's getting executed on `gluster volume stop <volume>'
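
A quick way to check whether those hooks fire at all, assuming the default packaging paths of that era: confirm the scripts are executable and watch the glusterd log while stopping the volume, since glusterd normally records hook runs and failures there. The CTDB hooks also typically only act when the volume name matches the META variable set inside the scripts, which is worth checking too.

    # confirm the hook scripts exist and are executable
    ls -l /var/lib/glusterd/hooks/1/start/post/S29CTDBsetup.sh \
          /var/lib/glusterd/hooks/1/stop/pre/S29CTDB-teardown.sh

    # watch glusterd while stopping the volume (log path may vary by distro)
    tail -f /var/log/glusterfs/etc-glusterfs-glusterd.vol.log &
    gluster volume stop <volume>
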
03:18 magrawal joined #gluster
03:35 kshlm joined #gluster
03:42 kdhananjay joined #gluster
03:46 itisravi joined #gluster
03:58 kdhananjay1 joined #gluster
04:01 riyas joined #gluster
04:05 kramdoss_ joined #gluster
04:06 atinm joined #gluster
04:10 ZachLanich joined #gluster
04:16 kdhananjay joined #gluster
04:17 shubhendu joined #gluster
04:28 Telsin joined #gluster
04:29 shubhendu joined #gluster
04:30 nbalacha joined #gluster
04:33 nishanth joined #gluster
04:36 dnunez joined #gluster
04:38 bkunal joined #gluster
04:42 jiffin joined #gluster
04:43 sanoj joined #gluster
04:53 RameshN joined #gluster
04:54 ndarshan joined #gluster
04:59 karthik__ joined #gluster
05:02 rafi joined #gluster
05:04 ankitraj joined #gluster
05:10 loadtheacc joined #gluster
05:17 RameshN joined #gluster
05:21 kdhananjay joined #gluster
05:22 atalur joined #gluster
05:34 aravindavk joined #gluster
05:35 mhulsman joined #gluster
05:37 arcolife joined #gluster
05:38 raghug joined #gluster
05:39 karthik_ joined #gluster
05:41 aj__ joined #gluster
05:43 skoduri joined #gluster
05:49 ramky joined #gluster
05:52 loadtheacc joined #gluster
05:57 ndarshan joined #gluster
06:02 armyriad joined #gluster
06:03 aspandey joined #gluster
06:07 Saravanakmr joined #gluster
06:10 Bhaskarakiran joined #gluster
06:11 ppai joined #gluster
06:18 prasanth joined #gluster
06:21 karthik_ joined #gluster
06:22 David_Varghese joined #gluster
06:23 karnan joined #gluster
06:24 karnan joined #gluster
06:27 jtux joined #gluster
06:28 rastar joined #gluster
06:29 Muthu_ joined #gluster
06:38 satya4ever joined #gluster
06:42 devyani7 joined #gluster
06:54 devyani7 joined #gluster
06:57 hackman joined #gluster
06:58 ankitraj joined #gluster
07:05 jri joined #gluster
07:09 gem joined #gluster
07:10 ashiq joined #gluster
07:11 aj__ joined #gluster
07:12 k4n0 joined #gluster
07:19 fsimonce joined #gluster
07:22 kotreshhr joined #gluster
07:28 [diablo] joined #gluster
07:28 msvbhat joined #gluster
07:29 sanoj joined #gluster
07:53 Sebbo2 joined #gluster
07:58 ahino joined #gluster
08:02 harish joined #gluster
08:02 mbukatov joined #gluster
08:10 kovshenin joined #gluster
08:10 hgichon joined #gluster
08:10 hgichon_ joined #gluster
08:11 hgichon hi. is the ha translator working in glusterfs 3.7.x?
08:15 aj__ joined #gluster
08:15 hgichon I have 2 network interfaces, one is 10G (10.10.x.x, internal) and the other is 1G (192.168.x.x, external).
08:21 anmol joined #gluster
08:25 Debloper joined #gluster
08:25 itisravi hgichon: you can use AFR for replication/ high-availability.
08:28 hgichon itisravi: thanks for the comment! My gluster setup already uses AFR (2x2). What I want is HA across the two networks --;;
08:29 hgichon My 10G network sometimes hangs up, so I am looking for some other solution.
08:30 itisravi hgichon: ah okay. I don't think the ha code has been worked on for quite a long time. I may be wrong, but it looks like ha was the precursor to AFR.
08:32 ivan_rossi joined #gluster
08:32 itisravi For replication across sites, you could explore geo-rep.
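
For reference, a geo-replication session for cross-site replication is usually set up roughly as below; the volume and host names are placeholders, and passwordless root SSH from the master cluster to the slave host is assumed.

    # one-time key distribution on the master cluster
    gluster system:: execute gsec_create

    # create, start and check the session (<mastervol>, slavehost, <slavevol> are placeholders)
    gluster volume geo-replication <mastervol> slavehost::<slavevol> create push-pem
    gluster volume geo-replication <mastervol> slavehost::<slavevol> start
    gluster volume geo-replication <mastervol> slavehost::<slavevol> status
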
08:34 aspandey_ joined #gluster
08:37 itisravi joined #gluster
08:38 atalur joined #gluster
08:39 pur joined #gluster
08:41 hackman joined #gluster
08:45 David_Varghese joined #gluster
08:46 jkroon joined #gluster
09:02 itisravi_ joined #gluster
09:10 harish joined #gluster
09:11 aj__ joined #gluster
09:13 poornima joined #gluster
09:15 ahino joined #gluster
09:31 harish joined #gluster
09:37 arcolife joined #gluster
09:38 kdhananjay joined #gluster
09:39 Pupeno joined #gluster
09:40 aspandey_ joined #gluster
09:42 rastar joined #gluster
09:43 social joined #gluster
09:46 kdhananjay joined #gluster
09:51 atinm joined #gluster
10:07 karthik_ joined #gluster
10:13 Pupeno joined #gluster
10:42 Lee1092 joined #gluster
10:44 kovsheni_ joined #gluster
10:48 atinm joined #gluster
10:50 social joined #gluster
10:57 harish joined #gluster
11:04 msvbhat joined #gluster
11:13 jtux joined #gluster
11:13 LinkRage joined #gluster
11:14 harish joined #gluster
11:19 Muthu_ joined #gluster
11:20 aj__ joined #gluster
11:20 Philambdo joined #gluster
11:20 atinm joined #gluster
11:27 scc joined #gluster
11:31 kdhananjay joined #gluster
11:36 d0nn1e joined #gluster
11:36 harish joined #gluster
11:44 ankitraj joined #gluster
11:46 ppai joined #gluster
11:51 shyam joined #gluster
12:21 ppai joined #gluster
12:25 aravindavk joined #gluster
12:27 magrawal joined #gluster
12:29 unclemarc joined #gluster
12:42 atinm joined #gluster
12:43 harish joined #gluster
12:46 plarsen joined #gluster
12:46 poornima joined #gluster
12:57 shyam joined #gluster
13:01 aravindavk joined #gluster
13:04 social joined #gluster
13:04 hchiramm joined #gluster
13:08 rwheeler joined #gluster
13:12 Pupeno joined #gluster
13:12 Pupeno joined #gluster
13:15 robb_nl joined #gluster
13:29 ira joined #gluster
13:35 kpease joined #gluster
13:38 kramdoss_ joined #gluster
13:41 Sopoforic joined #gluster
13:43 squizzi joined #gluster
13:44 jiffin1 joined #gluster
13:45 msvbhat joined #gluster
13:50 Klas we are having an issue with replacing a node (a virtual machine; we reinstalled it and re-added the same signature manually). It's a 3-replica volume with an arbiter. While the heal is running, clients that mount the volume only see the files which exist on both "true" servers. Is this intended?
13:50 Klas 3.7.14
13:55 aravindavk joined #gluster
14:01 atalur joined #gluster
14:02 post-factum Klas: didn't get your setup
14:02 Klas post-factum: 2 "real" servers, 1 arbiter
14:02 Klas 1 real server was reinstalled
14:02 jiffin1 joined #gluster
14:03 Klas during the heal, when files are being replicated back to the replaced server, only the files which are consistent across both servers are visible to the clients
14:03 Klas files and directories, btw
14:03 Klas all files are present on arbiter
14:03 Klas (but only metadata files, naturally)
14:04 Klas am I getting warmer ;)?
14:06 post-factum ah
14:06 post-factum yup, that shouldn't happen
14:06 dlambrig joined #gluster
14:07 Klas I have a likely answer (from a discussion with a colleague)
14:08 ashiq_ joined #gluster
14:09 Klas since we removed it without decreasing replica count, the cluster considers them to be "equal" so to speak
14:09 Klas in my setup, I assume the correct way is to set replica to 1, then increase it to 3/1 once I have 3 peers?
14:14 darkster joined #gluster
14:14 skylar joined #gluster
14:15 darkster hello all
14:17 ivan_rossi left #gluster
14:18 post-factum Klas: it is not possible to convert replica 2 into replica 3 arbiter 1 with 3.7 branch
14:19 post-factum Klas: you need 3.8
14:21 samikshan joined #gluster
14:22 ira joined #gluster
14:22 nbalacha joined #gluster
14:24 Klas oh
14:24 Klas hmm
14:25 Klas so what would be the right way to replace a node in 3.7?
14:26 Klas and is it painless to migrate 3.7 to 3.8 (as in, only updating packages)?
14:26 post-factum Klas: I believe moving /var/lib/glusterd should do the trick for replacing the node
14:26 post-factum Klas: it is reported that rolling upgrade from 3.7 to 3.8 works
14:27 post-factum Klas: https://joejulian.name/blog/replacing-a-glusterfs-server-best-practice/
14:27 glusterbot Title: Replacing a GlusterFS Server: Best Practice (at joejulian.name)
14:29 post-factum in case you need to replace node in-place, moving /var/lib/glusterd should work for you (given you keep everything else the same — ip, node name, bricks path)
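
A rough sketch of that in-place replacement, loosely following the post linked above and assuming the reinstalled node keeps the same hostname, IP and brick paths; the key point is that its glusterd UUID must be set back to the one the surviving peers expect.

    # on a surviving peer: note the UUID listed for the failed node
    gluster peer status

    # on the reinstalled node, with glusterd stopped
    systemctl stop glusterd
    sed -i 's/^UUID=.*/UUID=<uuid-from-peer-status>/' /var/lib/glusterd/glusterd.info
    systemctl start glusterd

    # re-probe a surviving peer so the volume definitions sync back, then heal
    gluster peer probe <surviving-peer>
    gluster volume heal <volname> full
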
14:35 atalur joined #gluster
14:39 Klas cool thanks =)
14:41 johnmilton joined #gluster
14:45 Lee1092 joined #gluster
14:49 harish joined #gluster
14:54 tom[] joined #gluster
15:01 hybrid512 joined #gluster
15:05 wushudoin joined #gluster
15:06 dnunez joined #gluster
15:11 harish joined #gluster
15:14 hchiramm joined #gluster
15:14 aj__ joined #gluster
15:17 Smoka joined #gluster
15:18 dlambrig joined #gluster
15:19 Smoka Hi guys
15:20 Smoka have glusterfs with network compression enabled
15:20 Smoka and my write speed is very very slow while it's enabled
15:20 Smoka ~200 kB/s
15:20 Smoka do I need to set some special option?
15:20 Smoka tried lots of different performance.* options but no luck
15:20 Smoka or probably if there's an option to disable the network compression on client side ...
15:21 Smoka does someone have an idea what I can do?
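
For what it is worth, network compression (the cdc translator) is a per-volume option, so the quickest test is to turn it off and re-measure the write speed; as far as I know there is no separate client-side switch, since clients take the setting from the volume graph.

    # check whether compression is enabled on the volume
    gluster volume info <volname> | grep -i compression

    # turn it off and re-test
    gluster volume set <volname> network.compression off
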
15:28 DV__ joined #gluster
15:29 shaunm joined #gluster
15:30 rafi1 joined #gluster
15:30 snaveen joined #gluster
15:31 snaveen I need help with setting up glusterfs on AWS
15:32 snaveen I have it set up but I am running into this error “volume create: gv0: failed: Host 52.52.93.80 is not in 'Peer in Cluster' state"
15:32 cloph snaveen: did you probe the peer?
15:32 gnulnx_ snaveen: did you do 'gluster peer probe 52.52.93.80'?
15:32 gnulnx_ Also, you should set up a private network for this gluster traffic - don't do it over the public interfaces
15:33 snaveen Now I am getting this ubuntu@ip-172-20-0-144:~$ sudo gluster volume create gv0 replica 2  52.52.93.80:/remotedir 52.9.226.193:/remotedir
15:33 snaveen volume create: gv0: failed: Host 52.52.93.80 is not in 'Peer in Cluster' state
15:34 snaveen OK so I thought I should use the public IP
15:34 snaveen Should I use the private IP?
15:34 gnulnx_ yes
15:34 snaveen OK. Let me try that
15:34 gnulnx_ You still need to probe the peer on the public IP address.
15:38 snaveen Why should I probe with both the public and private IP?
15:38 msvbhat joined #gluster
15:45 hackman joined #gluster
15:47 shubhendu joined #gluster
15:50 squizzi joined #gluster
15:50 snaveen @gnulnx~ I was able to create a volume: "volume create: gv0: success: please start the volume to access data"
15:52 snaveen The files aren’t getting synced. How do I troubleshoot?
15:52 gnulnx_ Did you do 'gluster volume start gv0' or whatever you called it?
15:53 gnulnx_ And you don't need to probe both the public and private ip.  In fact I'd do 'gluster peer detach public.ip.addr.ess'
15:53 snaveen I didn’t do start
15:54 gnulnx_ You need to start a volume before you can access it.  In addition, you should probably read up on the docs.  This is all covered.
15:54 snaveen OK. I will do that. Thanks. How do I troubleshoot if it isn’t getting synced?
16:02 bkolden joined #gluster
16:04 mhulsman joined #gluster
16:05 snaveen _gnulnx  I see this
16:05 snaveen sudo gluster volume info
16:05 snaveen with everything looking fine, but it fails to sync
16:06 rafi joined #gluster
16:06 snaveen gnulnx_:  I don’t see files being synced
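
A common cause of files "not syncing" at this point is writing into the brick directories (/remotedir above) directly on the servers; data only replicates when it goes through a glusterfs client mount. A minimal check, reusing the names from the commands above:

    gluster volume start gv0
    gluster volume status gv0

    # mount the volume through the gluster client and write here,
    # not into /remotedir on the servers
    mkdir -p /mnt/gv0
    mount -t glusterfs <private-ip-of-one-server>:/gv0 /mnt/gv0
    cp somefile /mnt/gv0/
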
16:06 cholcombe if you wanted to check whether quotas are enabled on a volume with a bash script, would you just run gluster vol quota <name> enable and parse the output? There isn't a command that simply reports whether quotas are enabled, as far as I know
16:08 cholcombe oh i see.  looks like i can just parse gluster vol info :)
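
A minimal bash sketch of that idea; it assumes the option shows up in the volume info output as "features.quota: on" once quota has been enabled.

    #!/bin/bash
    # usage: ./quota-enabled.sh <volname>
    vol="$1"
    if gluster volume info "$vol" | grep -qi '^features\.quota: *on'; then
        echo "quota is enabled on $vol"
    else
        echo "quota is not enabled on $vol"
    fi
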
16:15 squizzi joined #gluster
16:26 snaveen joined #gluster
16:28 Gambit15 joined #gluster
16:28 ZachLanich joined #gluster
16:32 aj__ joined #gluster
16:35 snaveen joined #gluster
16:47 shaunm joined #gluster
16:48 kotreshhr left #gluster
16:51 snaveen joined #gluster
16:55 karnan joined #gluster
17:27 mhulsman joined #gluster
17:28 derjohn_mob joined #gluster
17:33 skoduri joined #gluster
17:42 snaveen joined #gluster
17:47 rafi joined #gluster
17:50 jiffin joined #gluster
17:57 ben453 joined #gluster
18:05 atinm joined #gluster
18:06 rwheeler joined #gluster
18:11 ZachLanich joined #gluster
18:17 shaunm joined #gluster
18:17 k4n0 joined #gluster
18:23 rastar joined #gluster
18:25 hagarth joined #gluster
18:28 ahino joined #gluster
18:49 kovshenin joined #gluster
18:50 jiffin joined #gluster
19:06 coredumb joined #gluster
19:06 coredumb Hello
19:06 glusterbot coredumb: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
19:06 snehring joined #gluster
19:07 coredumb I just had a big network outage between my two nodes replicated volume
19:07 kpease joined #gluster
19:07 coredumb now upon start of the volume it tells me it was expecting a volume id but found another one
19:07 coredumb for both bricks
19:08 coredumb I mean brick on both nodes have the same id apparently
19:08 coredumb should I try start forcing it ?
19:12 devyani7 joined #gluster
19:13 msvbhat joined #gluster
19:19 coredumb is it possible to change the volume id in .vol file and restart it ?
19:19 coredumb forced start doesn't start either
19:20 JoeJulian Check the mounts. I bet they're mis-mounted.
19:20 JoeJulian Are you mounting by uuid, or by device name?
19:21 devyani7 joined #gluster
19:23 coredumb JoeJulian: by label
19:23 coredumb they are correctly mounted
19:23 coredumb data is accessible where it's supposed to be
19:25 coredumb JoeJulian: any idea ?
19:26 coredumb I got 4 peers with 2 data nodes, but all lost network connectivity
19:27 coredumb seems that after reboot 2 of them were okish
19:27 coredumb had to reboot the 2 others, and since then the volume shows up
19:28 coredumb well it was apparently in started state but unmountable
19:28 coredumb upon stopping and restarting I got the mismatched id
19:32 snaveen joined #gluster
19:34 jkroon joined #gluster
19:36 JoeJulian a mismatched id means the id in the volume info file doesn't match the ,,(extended attribute) on the brick root.
19:36 glusterbot To read the extended attributes on the server: getfattr -m .  -d -e hex {filename}
19:38 coredumb JoeJulian: how could that happen ?
19:38 coredumb the id is the same on both bricks
19:40 coredumb JoeJulian: trusted.glusterfs.volume-id=0xcbc35314cf44429ab938afbaedfbc1fa < got this on both sides
19:40 coredumb should I turn trusted.gfid to 0 ?
19:44 JoeJulian So does that match the volume id under /var/lib/glusterd/vols/$volname ?
19:45 coredumb nope
19:45 JoeJulian If you unmount your brick, is there a volume-id xattr?
19:46 coredumb nope
19:47 JoeJulian I'm not sure then. The only way for the volume id to be different is if you either had different media when creating the volume, or the volume was deleted and recreated without this media.
19:48 JoeJulian The volume-id never changes.
19:48 coredumb ...
19:48 coredumb can I change the volume id in /var/lib/glusterd/vols/$volname ?
19:48 JoeJulian I'd be worried.
19:49 coredumb cause both bricks on both nodes have the same id ...
19:49 JoeJulian If the volume-id doesn't match your volume, then this brick wasn't part of that volume. Who knows what kinds of mismatches you might have with a brick that *was* part of the volume.
19:49 coredumb but I never had any other brick ...
19:50 coredumb how can that id change for no reason ?
19:50 JoeJulian If you only have two bricks and their ids both match, then sure, I guess it might be safe to change the volume id of the info file.
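
For anyone comparing the two by hand: the brick stores the id as a hex xattr while glusterd's info file stores it as a dashed UUID, so the values have to be normalised before comparing (the paths below are the usual defaults).

    # volume-id as recorded on the brick root (hex, no dashes)
    getfattr -n trusted.glusterfs.volume-id -e hex /path/to/brick

    # volume-id glusterd expects for this volume (dashed UUID)
    grep volume-id /var/lib/glusterd/vols/<volname>/info
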
19:50 JoeJulian IT CANNOT
19:50 coredumb but it somehow did :D
19:50 JoeJulian The only way for it to be different is user error.
19:50 jkroon hahaha, JoeJulian that user might be a developer :)
19:51 jkroon how are you?
19:51 JoeJulian If I was any better there'd have to be two of me.
19:53 JoeJulian coredumb: Do you have any configuration management tools that define volumes if they're not there?
19:53 coredumb JoeJulian: not at all
19:58 coredumb JoeJulian: ok it started ...
20:03 coredumb heal doesn't show anything weird
20:03 coredumb but apparently I can't run full heal
20:04 coredumb from one of the nodes
20:04 coredumb the other is launching it fine
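
For reference, the usual commands here; a full heal sometimes only launches successfully from one particular node, and volume status shows whether the self-heal daemon is actually running everywhere.

    gluster volume heal <volname> info     # pending heal entries per brick
    gluster volume heal <volname> full     # trigger a full crawl
    gluster volume status <volname>        # check the Self-heal Daemon is online on every node
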
20:08 shyam joined #gluster
20:21 johnmilton joined #gluster
20:22 cvstealth joined #gluster
20:25 johnmilton joined #gluster
20:33 derjohn_mob joined #gluster
20:35 ZachLanich joined #gluster
20:35 snehring joined #gluster
20:42 snaveen joined #gluster
20:45 snaveen joined #gluster
20:49 ira joined #gluster
20:58 BitByteNybble110 joined #gluster
21:00 coredumb thx JoeJulian for the help
21:53 Pupeno joined #gluster
23:09 crashmag joined #gluster
23:17 masber joined #gluster
23:19 Jacob843 joined #gluster
23:19 plarsen joined #gluster
23:26 ZachLanich joined #gluster
23:34 gnulnx joined #gluster
23:37 Klas joined #gluster
23:38 Sopoforic joined #gluster
23:48 masber joined #gluster
23:53 masuberu joined #gluster
