
IRC log for #gluster, 2016-07-28


All times shown according to UTC.

Time Nick Message
00:05 hchiramm_ joined #gluster
00:14 RameshN joined #gluster
00:24 farhorizon joined #gluster
00:52 daMaestro joined #gluster
00:53 shdeng joined #gluster
00:54 joshin joined #gluster
00:54 joshin joined #gluster
01:03 farhorizon joined #gluster
01:24 hchiramm_ joined #gluster
01:30 Lee1092 joined #gluster
01:34 nishanth joined #gluster
01:43 derjohn_mobi joined #gluster
02:05 hchiramm__ joined #gluster
02:06 newdave joined #gluster
02:22 harish_ joined #gluster
02:25 overclk joined #gluster
03:06 kramdoss_ joined #gluster
03:28 sanoj joined #gluster
03:30 magrawal joined #gluster
03:31 sankarshan joined #gluster
03:33 hagarth joined #gluster
03:41 RameshN joined #gluster
03:46 atinm joined #gluster
03:49 itisravi joined #gluster
03:51 atinm joined #gluster
03:58 shubhendu joined #gluster
04:02 julim joined #gluster
04:05 hchiramm joined #gluster
04:08 raghug joined #gluster
04:12 ramky joined #gluster
04:18 Manikandan joined #gluster
04:27 ppai joined #gluster
04:36 gem joined #gluster
04:38 farhorizon joined #gluster
04:38 poornimag joined #gluster
04:41 kramdoss_ joined #gluster
04:55 nehar joined #gluster
04:59 kramdoss_ joined #gluster
04:59 prasanth joined #gluster
05:00 sanoj joined #gluster
05:01 jiffin joined #gluster
05:01 karthik_ joined #gluster
05:05 aspandey joined #gluster
05:12 ashiq joined #gluster
05:14 Manikandan joined #gluster
05:16 kdhananjay joined #gluster
05:18 aravindavk joined #gluster
05:18 devyani7_ joined #gluster
05:18 ndarshan joined #gluster
05:21 hchiramm joined #gluster
05:23 devyani7_ joined #gluster
05:26 Apeksha joined #gluster
05:26 satya4ever joined #gluster
05:26 aspandey_ joined #gluster
05:30 hgowtham joined #gluster
05:31 Saravanakmr joined #gluster
05:31 karthik__ joined #gluster
05:32 itisravi joined #gluster
05:38 Bhaskarakiran joined #gluster
05:44 ashiq joined #gluster
05:46 Muthu_ joined #gluster
05:46 Manikandan joined #gluster
05:46 nishanth joined #gluster
06:00 mahdi joined #gluster
06:01 pur joined #gluster
06:06 kshlm joined #gluster
06:08 rastar joined #gluster
06:09 rafi joined #gluster
06:14 rafi1 joined #gluster
06:16 hchiramm joined #gluster
06:19 rastar joined #gluster
06:19 kovshenin joined #gluster
06:20 atinm joined #gluster
06:32 sanoj joined #gluster
06:33 rouven joined #gluster
06:34 natgeorg joined #gluster
06:35 satya4ever joined #gluster
06:37 msvbhat joined #gluster
06:38 cholcombe joined #gluster
06:50 karnan joined #gluster
07:03 sanoj joined #gluster
07:04 prasanth joined #gluster
07:06 Wizek joined #gluster
07:08 nbalacha joined #gluster
07:14 Philambdo joined #gluster
07:16 DV joined #gluster
07:17 Philambdo joined #gluster
07:21 gem joined #gluster
07:24 Philambdo joined #gluster
07:24 Philambdo joined #gluster
07:26 nbalacha joined #gluster
07:29 ivan_rossi joined #gluster
07:30 ppai joined #gluster
07:43 aspandey joined #gluster
07:44 raghug joined #gluster
07:44 rafi1 joined #gluster
07:46 itisravi_ joined #gluster
07:46 fsimonce joined #gluster
07:50 Wizek joined #gluster
07:51 sakshi joined #gluster
07:56 derjohn_mobi joined #gluster
07:57 nishanth joined #gluster
07:57 Saravanakmr joined #gluster
07:59 karthik__ joined #gluster
07:59 Manikandan joined #gluster
08:17 karnan joined #gluster
08:25 Siavash joined #gluster
08:41 jbrooks joined #gluster
08:43 harish_ joined #gluster
08:43 Philambdo joined #gluster
08:52 kblin left #gluster
08:53 aspandey joined #gluster
08:54 sanoj joined #gluster
08:58 ppai joined #gluster
08:58 shubhendu joined #gluster
09:02 kotreshhr joined #gluster
09:03 paul98 joined #gluster
09:03 paul98 hey guys, i got encryption working :D
09:05 DV joined #gluster
09:07 paul98 kshlm:
09:08 kshlm paul98, Awesome!
09:08 paul98 although i've got one error
09:08 kshlm Could you update the mail thread with how you got it working?
09:08 paul98 http://pastebin.centos.org/49851/
09:09 paul98 i'm getting that error
09:09 sakshi joined #gluster
09:10 kshlm Are you using network encryption?
09:11 kshlm In that case this client is trying to fetch the volfile using an unencrypted connection and is failing.
09:11 paul98 ah
09:11 kshlm You might need to touch /var/lib/glusterd/secure-access
09:11 paul98 ah of course
09:11 paul98 i didn't do that!
09:11 kshlm s/might//
09:11 glusterbot What kshlm meant to say was: You  need to touch /var/lib/glusterd/secure-access
09:12 paul98 yes :D
09:12 paul98 works
09:12 paul98 i wonder how you could check if something is encrypted
09:15 kdhananjay joined #gluster
09:16 sanoj joined #gluster
09:17 atinm joined #gluster
09:23 kshlm paul98, If you're asking about data encryption, you can verify by looking at the file on the brick.
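The fix kshlm describes (touching `/var/lib/glusterd/secure-access` so the client fetches the volfile over TLS) and paul98's question about verifying encryption can be sketched as shell steps. This is a hedged sketch: the mount point `/mnt/myvol`, server name `server1`, volume name `myvol`, and brick path `/data/brick1/myvol` are all hypothetical placeholders, not from the log.

```shell
# Tell the gluster client to fetch the volfile over TLS
# (glusterd treats the mere presence of this file as the switch).
touch /var/lib/glusterd/secure-access

# Remount the volume so the client picks up the change.
umount /mnt/myvol
mount -t glusterfs server1:/myvol /mnt/myvol

# To check whether data encryption is active, compare the file as
# seen through the mount with the raw copy on a brick, as kshlm
# suggests: the brick-side copy should be unreadable ciphertext.
echo "plaintext test" > /mnt/myvol/check.txt
cat /mnt/myvol/check.txt            # readable through the client
cat /data/brick1/myvol/check.txt    # opaque bytes on the brick
```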
09:24 poornimag joined #gluster
09:26 RameshN joined #gluster
09:35 itisravi joined #gluster
09:38 paul____3 joined #gluster
09:38 paul____3 man i need to get a irc bot some where
09:44 raghug joined #gluster
09:44 aspandey_ joined #gluster
09:48 robb_nl joined #gluster
09:53 poornimag joined #gluster
09:54 itisravi joined #gluster
09:54 ashiq joined #gluster
09:55 ppai joined #gluster
10:01 arcolife joined #gluster
10:05 archit_ joined #gluster
10:06 paul____3 joined #gluster
10:17 msvbhat joined #gluster
10:21 hwcomcn joined #gluster
10:28 ashiq joined #gluster
10:29 nehar joined #gluster
10:30 robb_nl joined #gluster
10:31 ju5t joined #gluster
10:34 David_Varghese joined #gluster
10:44 BitByteNybble110 joined #gluster
10:45 devyani7_ joined #gluster
10:54 itisravi joined #gluster
11:18 aspandey joined #gluster
11:22 karnan joined #gluster
11:27 archit_ joined #gluster
11:34 devyani7_ joined #gluster
11:34 raghug joined #gluster
11:34 arcolife joined #gluster
12:06 ju5t_ joined #gluster
12:10 nehar joined #gluster
12:30 plarsen joined #gluster
12:35 unclemarc joined #gluster
12:43 julim joined #gluster
12:55 jiffin1 joined #gluster
12:56 itisravi joined #gluster
13:11 rwheeler joined #gluster
13:13 pur joined #gluster
13:17 Siavash Hi guys
13:17 ghollies joined #gluster
13:18 Siavash we are running gluster 3.7 as storage backend for oVirt
13:18 Siavash we replaced a brick yesterday and are having issues with oVirt
13:18 Siavash we are not sure if the brick replacement caused the issue
13:19 Siavash but to make sure, we used: volume replace-brick <volume> <old> <new> commit force
13:20 ghollies Hello, I was running a test that used iptables to drop all traffic to/from a brick in a replica 3 volume. At the end of it, with no iptables rules present, volume status shows all bricks online, but the client logs continually say remote operation failed - transport endpoint not connected. New writes go to only 2 bricks, and heal info seems to hang indefinitely
13:21 ghollies any ideas how to detect this partially connected state other than tailing the logs? or how to prevent it from getting there if/when a partition happens (not every partition causes this)
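Beyond tailing the client log, a few standard probes can surface the partially connected state ghollies describes. A hedged sketch; the volume name `repl3` is hypothetical, and note that in his case `heal info` itself was hanging, so the CLI checks may not always respond:

```shell
# 1. Ask glusterd which clients each brick currently sees;
#    a brick missing a client entry suggests a stale connection.
gluster volume status repl3 clients

# 2. Check pending heals; a healthy replica 3 reports 0 entries
#    per brick.
gluster volume heal repl3 info

# 3. From the client side, count established brick connections.
#    Bricks listen on ports 49152 and up; 24007 is only glusterd,
#    so seeing 24007 alone does not prove the bricks are reachable.
netstat -nat | awk '$6 == "ESTABLISHED" && $5 ~ /:4915[0-9]/'
```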
13:34 pur joined #gluster
13:35 ppai joined #gluster
13:36 jiffin joined #gluster
13:43 nehar joined #gluster
13:43 arcolife joined #gluster
13:50 itisravi joined #gluster
13:53 hagarth joined #gluster
13:54 nbalacha joined #gluster
14:03 dnunez joined #gluster
14:03 ju5t joined #gluster
14:06 kotreshhr left #gluster
14:08 Bhaskarakiran joined #gluster
14:09 wushudoin joined #gluster
14:15 atinm joined #gluster
14:20 farhorizon joined #gluster
14:21 Gnomethrower joined #gluster
14:23 joshin left #gluster
14:28 bowhunter joined #gluster
14:35 harish_ joined #gluster
14:39 nbalacha joined #gluster
14:41 jiffin joined #gluster
14:45 jiffin1 joined #gluster
14:47 hagarth joined #gluster
14:54 Wizek_ joined #gluster
15:00 farhoriz_ joined #gluster
15:07 karnan joined #gluster
15:10 ivan_rossi left #gluster
15:12 kdhananjay joined #gluster
15:21 farhorizon joined #gluster
15:23 magrawal joined #gluster
15:23 hchiramm joined #gluster
15:25 farhoriz_ joined #gluster
15:26 farhori__ joined #gluster
15:28 prasanth joined #gluster
15:30 bkolden joined #gluster
15:34 Bhaskarakiran_ joined #gluster
15:40 level7 joined #gluster
15:53 Siavash_ joined #gluster
15:54 shubhendu joined #gluster
15:59 derjohn_mobi joined #gluster
16:00 farhorizon joined #gluster
16:13 ashiq joined #gluster
16:14 JesperA joined #gluster
16:16 kpease joined #gluster
16:17 kpease joined #gluster
16:21 skylar joined #gluster
16:29 kotreshhr joined #gluster
16:33 karnan joined #gluster
16:37 hagarth joined #gluster
16:37 rwheeler joined #gluster
16:41 msvbhat joined #gluster
17:07 jwd joined #gluster
17:07 mpiet_cloud joined #gluster
17:08 mpiet_cloud joined #gluster
17:09 mpiet_cloud joined #gluster
17:09 jolo_ joined #gluster
17:10 jolo_ hi, i have simple 2 node (replica 2) gluster env
17:10 jolo_ On monday site B disconnected
17:11 jolo_ today site A machine had memory issues
17:11 jolo_ as glusterfs daemon have consumed a lot of mem
17:11 jolo_ 256g 661m 3936 S  0.0  1.0   1057:57 glusterfs
17:11 jolo_ 82.2g 210m 1308 S  0.0  0.3  24640:49 glusterfsd
17:12 jolo_ is it safe to kill glusterfs/glusterfsd ?   I'm concerned about data corruption on the brick data folder
17:16 JoeJulian ghollies: Are you using ssl encryption?
17:16 JoeJulian Siavash_: If you have a replicated volume, that should have been correct.
17:17 JoeJulian jolo_: If you have a replicated volume, killing glusterfsd on any one brick at a time is safe.
17:18 JoeJulian @processes
17:18 glusterbot JoeJulian: The GlusterFS core uses three process names: glusterd (management daemon, one per server); glusterfsd (brick export daemon, one per brick); glusterfs (FUSE client, one per client mount point; also NFS daemon, one per server). There are also two auxiliary processes: gsyncd (for geo-replication) and glustershd (for automatic self-heal).
17:20 jolo_ JoeJulian: the data inside brick site A seems still to be fine.  So you mean, killing (-9) of glusterfs/glusterfsd will not trigger any final write/fsyncs/... on any file in the brick folder causing some later data corruption ?
17:21 Siavash_ JoeJulian: thanks, the problem was brick directory permissions. Not sure if the upgrade from 3.7.11(gluster.org package) to 3.7.13(centos.org package) caused this.
17:21 post-factum jolo_: generally, one doesn't want to kill glusterfsd with -9
17:21 Siavash_ I have another cluster with similar packages which I will test the upgrade and confirm the root cause of the issue.
17:24 ghollies JoeJulian: not using ssl explicitly anywhere, using ipsec between the servers.
17:25 jolo_ post-factum:  yeah. I thought -9, as then there is less chance that there are some unwanted writes
17:26 JoeJulian jolo_: I would avoid -9 if I could. That won't close the TCP connections and the client will have to wait for ping-timeout before they'll realize the server is gone.
17:26 karnan joined #gluster
17:26 ghollies JoeJulian: as a note, killing the client that had the partial connection and re-mounting it solves the problem, (client on another machine who was not doing writes during the partition was fine also)
17:28 squizzi_ joined #gluster
17:30 jolo_ JoeJulian:  client and server are running on the same machine (site A)  and only write to "their" folders -- no other writes from the server on site B
17:31 jolo_ JoeJulian:  it's basically a simple HA (site-2-site) setup,  certain services run only on site A and use FS as state store.  In the event of a failover, services will run on site B and data/files in FS should be there and accessible on site B then
17:33 jolo_ JoeJulian:  so kill -9  should be fine (client and server is on the same machine , site B already crashed on Mon)
17:37 JoeJulian ghollies: What version are you using?
17:38 JoeJulian jolo_: So you're using georeplication?
17:39 JoeJulian Also, regardless of what application you're killing, always kill -15 unless that isn't working. Only then kill -9.
17:39 JoeJulian imho
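JoeJulian's advice can be sketched as a small shell fragment: SIGTERM first, so the brick process closes its TCP connections and clients fail over immediately instead of waiting out network.ping-timeout; SIGKILL only as a last resort. The `pgrep` pattern `brick1` is a hypothetical brick-path match, not from the log:

```shell
# Find the brick export daemon for one brick (pattern is illustrative).
BRICK_PID=$(pgrep -f 'glusterfsd.*brick1')

kill -15 "$BRICK_PID"            # graceful shutdown first
sleep 10
if kill -0 "$BRICK_PID" 2>/dev/null; then
    kill -9 "$BRICK_PID"         # only if SIGTERM didn't work
fi
```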
17:40 ppai joined #gluster
17:42 jolo_ JoeJulian:  no, just two ec2 instances in two different AZs
17:46 jolo_ JoeJulian:  any idea, why glusterfs(d) went up to whopping 82/250 GB ?  can it be that mem grows while the replica (site B) is down (not reachable) ??   btw, we are using TLS ...
17:57 JoeJulian jolo_: So you have two ec2 instances, no replication, and one is a failover? How's it supposed to have data?
17:59 raghu` joined #gluster
18:00 shubhendu_ joined #gluster
18:03 jolo_ JoeJulian: no, sorry , from the glusterfs perspective 2 replicas : site a server + site b server;   from an overall application perspective it's a kind  of a failover
18:08 bowhunter joined #gluster
18:12 shubhendu_ joined #gluster
18:14 cholcombe joined #gluster
18:19 ghollies JoeJulian:  3.8.1 but we've seen it once on 3.7.12 also
18:20 arcolife joined #gluster
18:33 ahino joined #gluster
18:52 robb_nl joined #gluster
18:56 hchiramm joined #gluster
18:58 bwerthmann joined #gluster
19:28 bwerthmann joined #gluster
19:53 ben453 joined #gluster
19:55 ben453 Is there a way to detect that a client has disconnected from a brick, other than checking the client log?
19:56 hchiramm joined #gluster
20:01 JoeJulian netstat?
20:12 ben453 Yeah I suppose I could use netstat and see what is connecting to the management daemon port
20:12 ben453 So there's no "gluster supported/recommended" way
20:13 post-factum gluster volume status XXX clients?
20:14 ben453 I'm running this from a server with just the gluster client
20:25 ben453 I tried "netstat -nat | grep 'ESTABLISHED' | grep 24007" on a server running only the gluster client. I can see it connected to the server that I used to mount gluster, but I don't see it connecting to the other servers with bricks. I think I'll just grep through the client log to see if the client has disconnected from any of the bricks (and hasn't reconnected yet)
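The log-grep approach ben453 settles on can be sketched as follows. This is a hedged sketch: the log path assumes a standard `/mnt/myvol` mount, and the exact connect/disconnect message wording varies between gluster versions, so the patterns may need adjusting. It keeps only the most recent event per brick translator (`myvol-client-N`) and reports bricks whose latest event is a disconnect:

```shell
LOG=/var/log/glusterfs/mnt-myvol.log

grep -E 'Connected to|disconnected from' "$LOG" \
  | awk '{ for (i = 1; i <= NF; i++)          # find the client-N token
             if ($i ~ /client-[0-9]+/) key = $i;
           state[key] = $0 }                   # remember last event per brick
         END { for (k in state)
                 if (state[k] ~ /disconnected/) print "DOWN:", k }'
```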
20:25 farhoriz_ joined #gluster
20:29 farhorizon joined #gluster
20:34 robb_nl joined #gluster
20:38 andresmoya joined #gluster
20:41 andresmoya Hi, I have a brick in a replicated volume whose filesystem has become corrupt. The OS and gluster install are in good shape, just the mount points for the bricks. I want to wipe out the filesystem, re-create it, and then have the data sync from one of the good nodes. Are there instructions to do this anywhere?
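A hedged sketch for andresmoya's scenario. Names are hypothetical (volume `myvol`, corrupt brick `server2:/data/brick1/myvol`, device `/dev/sdb1`); note that `reset-brick` only exists in gluster 3.9+, while on the 3.7/3.8 releases current at the time of this log the usual route was `replace-brick` to a freshly created path:

```shell
# 1. Take the brick offline and rebuild the filesystem under it.
gluster volume reset-brick myvol server2:/data/brick1/myvol start
mkfs.xfs -f /dev/sdb1 && mount /dev/sdb1 /data/brick1
mkdir -p /data/brick1/myvol

# 2. Bring the now-empty brick back at the same path;
#    "commit force" is needed because the path is being reused.
gluster volume reset-brick myvol server2:/data/brick1/myvol \
        server2:/data/brick1/myvol commit force

# 3. Trigger self-heal from the good replica and watch progress.
gluster volume heal myvol full
gluster volume heal myvol info
```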
20:43 LenoVO joined #gluster
20:48 jbrooks joined #gluster
20:52 bwerthmann joined #gluster
20:55 Guest73193 do you know where I can find more gluster and esx documentation about performance?
20:55 hchiramm joined #gluster
20:56 jlrgraham joined #gluster
21:02 Guest73193 joined #gluster
21:04 Guest73193 left #gluster
21:04 Guest73193 joined #gluster
21:04 Guest73193 left #gluster
21:05 Lenovo joined #gluster
21:25 om joined #gluster
21:28 squizzi_ joined #gluster
21:58 hchiramm joined #gluster
21:59 muneerse2 joined #gluster
22:03 Vaelatern joined #gluster
22:15 newdave joined #gluster
22:33 David_Varghese joined #gluster
22:39 hchiramm_ joined #gluster
22:40 mpiet_cloud joined #gluster
23:00 wadeholler joined #gluster
23:06 kukulogy joined #gluster
23:54 wadeholler joined #gluster
23:56 hchiramm_ joined #gluster
