IRC log for #gluster, 2015-02-16


All times shown according to UTC.

Time Nick Message
00:10 MugginsM joined #gluster
00:56 MugginsM joined #gluster
01:28 CyrilPeponnet joined #gluster
01:29 CyrilPeponnet joined #gluster
01:39 CP|AFK joined #gluster
01:50 suliba_ joined #gluster
01:54 DV joined #gluster
02:21 davetoo anybody here snarfing gluster logs into Splunk?
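A minimal sketch of one way to do that, assuming a default /opt/splunk install; the sourcetype name is a placeholder:

    /opt/splunk/bin/splunk add monitor /var/log/glusterfs -sourcetype glusterfs

Splunk then tails everything under /var/log/glusterfs, brick and glusterd logs included.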
02:22 MugginsM joined #gluster
02:29 bala joined #gluster
02:31 DV joined #gluster
02:59 bharata-rao joined #gluster
03:06 kshlm joined #gluster
03:24 kdhananjay joined #gluster
03:34 kumar joined #gluster
03:46 itisravi joined #gluster
03:46 lalatenduM joined #gluster
04:08 shubhendu joined #gluster
04:12 harish joined #gluster
04:13 atinmu joined #gluster
04:21 RameshN joined #gluster
04:28 kumar joined #gluster
04:28 schandra joined #gluster
04:29 spandit joined #gluster
04:30 gem joined #gluster
04:31 hagarth joined #gluster
04:35 ppai joined #gluster
04:37 anoopcs joined #gluster
04:39 jiffin joined #gluster
04:39 rafi joined #gluster
04:39 gem joined #gluster
04:56 ndarshan joined #gluster
05:02 rjoseph|afk joined #gluster
05:14 nbalacha joined #gluster
05:15 rjoseph|afk joined #gluster
05:16 Manikandan joined #gluster
05:16 Manikandan_ joined #gluster
05:17 prasanth_ joined #gluster
05:21 meghanam joined #gluster
05:22 overclk joined #gluster
05:30 anil joined #gluster
05:47 Manikandan joined #gluster
05:51 ramteid joined #gluster
05:54 javi404 joined #gluster
05:57 soumya_ joined #gluster
06:00 gem joined #gluster
06:08 sprachgenerator joined #gluster
06:13 maveric_amitc_ joined #gluster
06:16 nbalacha joined #gluster
06:18 TvL2386 joined #gluster
06:21 overclk joined #gluster
06:21 soumya_ joined #gluster
06:23 anrao joined #gluster
06:23 atinmu joined #gluster
06:25 hagarth joined #gluster
06:25 gem joined #gluster
06:40 raghu joined #gluster
06:45 kanagaraj joined #gluster
06:51 nangthang joined #gluster
07:02 karnan joined #gluster
07:02 karnan anna, what is the alert you showed me last time?
07:02 atalur joined #gluster
07:04 overclk joined #gluster
07:04 atinmu joined #gluster
07:05 hagarth joined #gluster
07:21 nbalacha joined #gluster
07:23 nangthang joined #gluster
07:31 badone joined #gluster
07:34 jtux joined #gluster
07:34 bala joined #gluster
07:40 lalatenduM joined #gluster
07:53 gem joined #gluster
08:02 lalatenduM joined #gluster
08:08 tessier joined #gluster
08:08 Philambdo joined #gluster
08:11 deniszh joined #gluster
08:14 nbalacha joined #gluster
08:15 [Enrico] joined #gluster
08:17 liquidat joined #gluster
08:24 edualbus joined #gluster
08:24 _polto_ joined #gluster
08:27 soumya_ joined #gluster
08:31 hagarth joined #gluster
08:32 meghanam_ joined #gluster
08:34 fsimonce joined #gluster
08:40 ricky-ti1 joined #gluster
08:45 [Enrico] joined #gluster
08:48 rjoseph|afk joined #gluster
08:49 ricky-ticky1 joined #gluster
08:50 Slashman joined #gluster
08:53 deepakcs joined #gluster
08:56 gildub joined #gluster
08:58 Norky joined #gluster
08:59 Manikandan joined #gluster
09:08 kovshenin joined #gluster
09:17 shaunm joined #gluster
09:17 social joined #gluster
09:24 Fetch_ joined #gluster
09:25 Manikandan joined #gluster
09:25 RameshN joined #gluster
09:40 gem joined #gluster
09:47 harish joined #gluster
09:51 kanagaraj joined #gluster
10:03 meghanam_ joined #gluster
10:12 rjoseph|afk joined #gluster
10:13 nbalacha joined #gluster
10:15 Pupeno joined #gluster
10:15 Slashman_ joined #gluster
10:15 ThatGraemeGuy joined #gluster
10:15 gem joined #gluster
10:22 azar joined #gluster
10:26 eychenz joined #gluster
10:27 atalur_ joined #gluster
10:28 ira joined #gluster
10:31 tanuck joined #gluster
10:33 shaunm joined #gluster
10:34 azar Is there a bug on appending data in encryption xlator?
10:35 nishanth joined #gluster
10:36 RameshN joined #gluster
10:41 ppai joined #gluster
10:44 azar left #gluster
10:46 Norky joined #gluster
10:53 bala joined #gluster
10:57 Slashman_ joined #gluster
11:15 coredump joined #gluster
11:21 ira joined #gluster
11:38 glusterbot News from newglusterbugs: [Bug 1191176] Since 3.6.2: failed to get the 'volume file' from server <https://bugzilla.redhat.com/show_bug.cgi?id=1191176>
11:38 glusterbot News from newglusterbugs: [Bug 1192971] Disperse volume: 1x(4+2) config doesn't sustain 2 brick failures <https://bugzilla.redhat.com/show_bug.cgi?id=1192971>
11:39 hagarth joined #gluster
11:41 sprachgenerator joined #gluster
11:54 jkroon joined #gluster
11:56 jkroon [2015-02-16 11:56:14.168300] W [socket.c:611:__socket_rwv] 0-management: readv on /var/run/96d70501e04d9eb1c0c7503041c4fd16.socket failed (Invalid argument)
11:56 jkroon how can I figure out which process is logging that?
11:57 jkroon there seem to be references to an old logical volume which I'm unable to track down.
11:57 jkroon the log file is named etc-glusterfs-glusterd.vol.log - but there is no LV with a similar name.
11:58 bala joined #gluster
12:03 jkroon hmm, ok, so the filename just represents the configuration location.
12:08 glusterbot News from newglusterbugs: [Bug 1193022] Disperse volume: Delete operation failed on some of the bricks <https://bugzilla.redhat.com/show_bug.cgi?id=1193022>
12:10 itisravi_ joined #gluster
12:10 Norky joined #gluster
12:12 jkroon /var/run/96d70501e04d9eb1c0c7503041c4fd16.socket exists, but I can't find any backing process for it ... ?
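One way to chase this down with stock tools (the socket path is the one from jkroon's log):

    lsof /var/run/96d70501e04d9eb1c0c7503041c4fd16.socket    # is any process holding it open?
    ss -xp | grep 96d70501                                   # bound unix sockets with owning PIDs

If both come back empty, the socket file is stale: glusterd and the brick/NFS/self-heal daemons create these under /var/run and do not always remove them on shutdown, and glusterd polling a dead socket would produce exactly this kind of readv warning.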
12:19 elico joined #gluster
12:22 mbukatov joined #gluster
12:22 Slashman_ joined #gluster
12:37 kbyrne joined #gluster
12:39 pkoro joined #gluster
12:44 ppai joined #gluster
12:52 bennyturns joined #gluster
12:52 rjoseph|afk joined #gluster
13:02 Slash__ joined #gluster
13:07 LebedevRI joined #gluster
13:12 anoopcs joined #gluster
13:16 B21956 joined #gluster
13:17 shaunm joined #gluster
13:22 Slashman joined #gluster
13:33 harish joined #gluster
13:39 sadbox joined #gluster
13:55 shubhendu joined #gluster
13:59 tdasilva joined #gluster
14:00 _polto_ joined #gluster
14:05 overclk joined #gluster
14:05 hagarth joined #gluster
14:11 raghu joined #gluster
14:22 T3 joined #gluster
14:23 T3 joined #gluster
14:30 georgeh-LT2 joined #gluster
14:31 dgandhi joined #gluster
14:46 plarsen joined #gluster
14:49 bennyturns joined #gluster
14:50 ildefonso joined #gluster
15:00 RameshN joined #gluster
15:01 wushudoin joined #gluster
15:07 mbukatov joined #gluster
15:09 genial joined #gluster
15:16 virusuy joined #gluster
15:21 RameshN joined #gluster
15:28 T3 joined #gluster
15:37 kovshenin joined #gluster
15:39 kovshenin joined #gluster
15:40 kovshenin joined #gluster
15:41 kovshenin joined #gluster
15:44 hagarth joined #gluster
15:47 haomaiwa_ joined #gluster
15:49 Folken_ uptime
15:55 lmickh joined #gluster
15:55 bala joined #gluster
15:59 wkf joined #gluster
16:01 jbrooks joined #gluster
16:01 kovshenin joined #gluster
16:03 kovshenin joined #gluster
16:03 jbrooks joined #gluster
16:08 nbalacha joined #gluster
16:10 shubhendu joined #gluster
16:14 coredump joined #gluster
16:14 _polto_ joined #gluster
16:15 davetoo joined #gluster
16:16 shaunm joined #gluster
16:22 dbruhn joined #gluster
16:23 CyrilPeponnet joined #gluster
16:24 CyrilPeponnet hey gluster community
16:24 CyrilPeponnet can someone take a look at http://www.gluster.org/pipermail/gluster-users/2015-February/020671.html
16:24 CyrilPeponnet and give me a hint ?
16:29 dbruhn CyrilPeponnet, did you backup your volume files before you deleted them? Why did you have to delete them?
16:29 CyrilPeponnet only the /var/lib directory
16:30 CyrilPeponnet because I messed something up and all peers were rejecting each other
16:30 CyrilPeponnet (so I followed the gluster wiki page, but since then I can't pass options I used to set)
16:31 dbruhn What version are your clients at?
16:31 cornusammonis joined #gluster
16:32 CyrilPeponnet 3.5.2
16:34 CyrilPeponnet (at most)
16:34 gem joined #gluster
16:35 CyrilPeponnet clients didn't change before the outage and after
16:35 CyrilPeponnet only one node is refusing the params and I can't figure out why
16:35 andreask left #gluster
16:38 CyrilPeponnet Is there a way to list all clients connected to a dedicated vol using gluster fuse?
16:38 CyrilPeponnet (on a specific node)
16:39 hagarth joined #gluster
16:42 CyrilPeponnet dbruhn looks I have some 3.4 also
16:43 dbruhn hmm, ok. are all of your clients rejecting?
16:43 CyrilPeponnet all my clients are connecting fine as far as I can see
16:44 dbruhn Also, I believe the operating version in the config file references something other than the version of the client, I think it references a protocol change. I could be wrong on that though.
16:46 CyrilPeponnet hmm
16:46 CyrilPeponnet to be sure I understand this well
16:46 dbruhn what is the output of "gluster volume status"
16:46 CyrilPeponnet when it complains about op-version, it means I have clients connected that are too old, so it will refuse to bump the op-version
16:47 CyrilPeponnet right
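For reference, the cluster-wide op-version glusterd is running at is recorded in its state file; a quick check, assuming the standard state directory:

    grep operating-version /var/lib/glusterd/glusterd.info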
16:47 CyrilPeponnet Lot of volumes :)
16:47 T3 joined #gluster
16:48 dbruhn well, for the volume in question
16:48 dbruhn are they all showing online
16:48 CyrilPeponnet online, nfs server enabled on 3 nodes
16:48 dbruhn and "gluster peer status" is showing good?
16:48 CyrilPeponnet yep
16:50 dbruhn I guess I would start with getting your versions straightened out between your clients and servers
16:50 CyrilPeponnet vol status myvol clients helps
16:50 CyrilPeponnet it lists all clients
16:51 CyrilPeponnet but with ~1200 clients it will take some time
16:52 dbruhn not sure why you are getting different results from different servers
16:52 CyrilPeponnet because on the failing vol I have mostly glusterfs fuse clients
16:52 CyrilPeponnet on others it's nfs
16:53 CyrilPeponnet so I guess that when mounting over nfs, client op-version is not relevant
16:53 dbruhn nfs is essentially putting the nfs server in front of the fuse client
16:54 CyrilPeponnet yep but I don't need glusterfs-fuse on the client to mount the volume
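The two mount flavours being contrasted, sketched with placeholder names (gluster's built-in NFS server speaks NFSv3 only):

    mount -t glusterfs server1:/myvol /mnt/myvol        # native client: needs glusterfs-fuse, op-version matters
    mount -t nfs -o vers=3 server1:/myvol /mnt/myvol    # plain NFSv3: no gluster packages on the client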
16:54 CyrilPeponnet could be great if we had something like gluster vol myvol client-version :p
16:55 CyrilPeponnet (+status)
16:55 dbruhn haha yep
16:56 CyrilPeponnet how can I know which parameters are supported per op-version / release?
16:56 CyrilPeponnet let's say for this vol I try to set the quick-read param
16:56 CyrilPeponnet somehow some clients are not quick-read capable
16:57 dbruhn I honestly am just getting back into using gluster again, and am a little out of the loop.
16:57 CyrilPeponnet :p
16:58 dbruhn Do you have ssh access to the machines through something like unusable?
16:58 dbruhn ansible
16:58 CyrilPeponnet for most of them
16:58 CyrilPeponnet salt / puppet
16:58 davetoo haha
16:58 dbruhn could you get a version report from the clients directly then?
16:59 davetoo ansible autocorrect to unsable
16:59 davetoo unusable
16:59 dbruhn davetoo, lol, yep
16:59 CyrilPeponnet all my nodes are not listed
16:59 CyrilPeponnet clients
16:59 CyrilPeponnet because I have a bunch of stateless hypervisors
16:59 CyrilPeponnet but I will follow this lead
17:01 dbruhn I am more just thinking of taking a list of your known gluster clients and using salt / puppet to get all of the version info into a file; you could then easily remove all the stuff that's at the right version and be left with a list of clients that need to be upgraded.
17:01 dbruhn Probably the long way around, but easier than manually checking each one.
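dbruhn's idea as a one-liner, assuming the clients run salt minions (glusterfs --version prints the installed client version):

    salt '*' cmd.run 'glusterfs --version | head -n1'

Piping the output through sort | uniq -c makes the stragglers obvious.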
17:01 CyrilPeponnet yes got it. 3.4.0 and some 3.5.2
17:03 CyrilPeponnet and one 3.4.2
17:03 cornus_ammonis joined #gluster
17:03 CyrilPeponnet but what is funny is that it worked before with the same clients
17:04 RameshN joined #gluster
17:04 dbruhn That is weird, what confuses me more is that you are not getting the same result across all of your gluster servers
17:04 dbruhn it's almost as if not all of the clients are able to connect to all of the gluster servers
17:04 CyrilPeponnet I have the same result across my nodes
17:05 CyrilPeponnet it's only on a different volume
17:05 davetoo I'm going to find myself charged with managing two or three Gluster clusters real soon now (co-worker is leaving) but I've only been learning about it for a couple of days.  He's set up the latest cluster with 3.6.2 (on CentOS7)
17:06 davetoo I'm waiting for this thread to finish (so I don't interrupt your train of thought) to ask about snapshots and bricks
17:06 dbruhn CyrilPeponnet, sorry I miss-read your link
17:06 CyrilPeponnet I think the client lead is the right one; on the other volume, where parameters work, I have only 3.5.2 clients
17:06 dbruhn davetoo, ask away
17:06 cornusammonis joined #gluster
17:07 CyrilPeponnet I will try to update glusterfs-fuse...
17:07 davetoo two nodes of a 14-node cluster (14x2 distrib-replic) are emitting these log messages:
17:07 davetoo 0-management: Brick archvol-01-14:/run/gluster/snaps/402d099e85614b6e82b451bc8dfb4dfa/brick28/data has disconnected from glusterd
17:07 CyrilPeponnet davetoo, if you have any questions: we are running 3 nodes in production with one other node in geo-replication
17:07 dbruhn davetoo, just to make things clearer, reference gluster server and client.
17:07 dbruhn Node is too ambiguous
17:07 davetoo servers
17:08 davetoo clients are all nfs at this point
17:08 CyrilPeponnet nodes belong to a cluster, clusters are made of servers :p
17:08 CyrilPeponnet ok dbruhn thanks for helping. I think I have a good lead to follow now.
17:08 davetoo does the snapshot mechanism make some kind of temporary brick?
17:10 dbruhn davetoo, I wouldn't want to lead you down the wrong path. I haven't used the snapshot feature at all yet. So I will leave that to someone else.
17:11 davetoo 'k
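For what it's worth: volume snapshots in 3.6 are built on LVM thin snapshots, and each snapshot's bricks are mounted under /run/gluster/snaps/<snap-id>/ and served by their own brick processes, which is what the path in davetoo's log line points at. So yes, snapshots create real, if internal, bricks; they can be inspected with:

    gluster snapshot list
    gluster snapshot status    # bricks, PIDs and volume groups per snapshot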
17:11 rotbeard joined #gluster
17:12 cornus_ammonis joined #gluster
17:16 T3 $ uptime
17:16 T3 09:16:07 up 11 days,  3:28,  1 user,  load average: 74.53, 73.68, 73.27
17:16 T3 =P
17:16 RameshN joined #gluster
17:17 davetoo sweet
17:28 nage joined #gluster
17:31 coredump joined #gluster
17:31 virusuy joined #gluster
17:31 genial joined #gluster
17:31 elico joined #gluster
17:31 social joined #gluster
17:31 javi404 joined #gluster
17:31 RobertLaptop joined #gluster
17:31 sac`away` joined #gluster
17:31 T0aD joined #gluster
17:31 TheSov joined #gluster
17:31 basso joined #gluster
17:31 Guest52518 joined #gluster
17:31 joshin joined #gluster
17:31 Dave2 joined #gluster
17:31 lanning joined #gluster
17:31 DJClean joined #gluster
17:31 khanku joined #gluster
17:31 yoavz joined #gluster
17:31 Bardack joined #gluster
17:31 nixpanic joined #gluster
17:31 eryc_ joined #gluster
17:31 michatotol joined #gluster
17:31 bfoster joined #gluster
17:31 ckotil joined #gluster
17:32 yoavz joined #gluster
17:33 RameshN joined #gluster
17:48 CyrilPeponnet dbruhn so the point is centos7 provides 3.4.0 for glusterfs-client
17:49 dbruhn ahh, yeah, you will want to use the community provided ones.
17:49 CyrilPeponnet which is too old I think for servers running 3.5.2
17:49 CyrilPeponnet yep
17:49 CyrilPeponnet better to bump to 3.6? or should I stick with the gluster server version
17:50 dbruhn I honestly don't know, I think the versions are interoperable, but I would test it first and make sure it doesn't give you any headaches.
17:53 dbruhn There were some headaches with versions around 3.3/3.4 when they first made the client/servers be able to have mismatched versions, I think they were resolved by 3.4.2, but I honestly was stuck on 3.3.x with my old systems so I never bothered to test any of it.
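A sketch of moving a CentOS 7 client onto the community packages; the exact layout on download.gluster.org changes per release, so treat the URL as an assumption to verify:

    wget -O /etc/yum.repos.d/glusterfs-epel.repo \
        http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/glusterfs-epel.repo
    yum install glusterfs-fuse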
17:55 _polto_ joined #gluster
18:02 siel joined #gluster
18:05 jobewan joined #gluster
18:07 jbrooks joined #gluster
18:09 jbrooks joined #gluster
18:16 lalatenduM joined #gluster
18:31 anrao joined #gluster
18:40 glusterbot News from newglusterbugs: [Bug 1193174] flock does not observe group membership <https://bugzilla.redhat.com/show_bug.cgi?id=1193174>
18:40 cyberbootje joined #gluster
19:11 deniszh joined #gluster
19:31 dbruhn joined #gluster
20:04 _polto_ joined #gluster
20:16 ttkg joined #gluster
20:21 m0zes joined #gluster
20:24 deniszh1 joined #gluster
20:29 deniszh joined #gluster
20:36 kovshenin joined #gluster
20:37 T3 joined #gluster
20:42 spiette joined #gluster
20:44 MugginsM joined #gluster
20:44 verdurin joined #gluster
20:45 spiette Hi! I want to add two bricks to my two-brick replica 2 setup. If I do "volume add-brick myvol replica 3 x.x.x.x:/path/to/brick", the load gets really high and the actual transfer rate on disk really low. Networking on the new brick shows higher output than input.
20:45 spiette can I throttle that?
20:47 spiette or can I scp the content first? Will I gain something?
20:49 spiette My goal is to replace the current 2 nodes with 2 new ones
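The load spiette describes is almost certainly self-heal populating the new replica. Around 3.5/3.6 there is no true heal bandwidth throttle, but a couple of AFR options can damp it; a sketch, with the volume name taken from his command:

    gluster volume add-brick myvol replica 3 x.x.x.x:/path/to/brick
    gluster volume set myvol cluster.background-self-heal-count 1    # fewer concurrent background heals
    gluster volume set myvol cluster.data-self-heal-algorithm diff   # checksum and ship changed blocks only

Pre-seeding the brick with scp is generally discouraged: files written straight to a brick lack gluster's gfid xattrs and tend to create more heal work, not less.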
21:00 DV joined #gluster
21:03 tetreis joined #gluster
21:06 _polto_ joined #gluster
21:35 _polto_ joined #gluster
21:52 B21956 joined #gluster
21:55 daMaestro joined #gluster
21:56 jobewan joined #gluster
22:00 doekia joined #gluster
22:04 badone joined #gluster
22:11 jobewan joined #gluster
22:23 dgandhi joined #gluster
22:29 MugginsM joined #gluster
22:30 DV joined #gluster
22:40 glusterbot News from newglusterbugs: [Bug 1193225] Architecture link broken <https://bugzilla.redhat.com/show_bug.cgi?id=1193225>
22:42 gildub joined #gluster
22:51 _polto_ joined #gluster
22:52 rwheeler joined #gluster
22:55 Guest_ joined #gluster
23:04 MugginsM joined #gluster
23:06 elico joined #gluster
23:10 plarsen joined #gluster
23:18 Guest_ joined #gluster
23:34 bennyturns joined #gluster
23:36 owlbot joined #gluster
