
IRC log for #gluster, 2014-05-29


All times shown according to UTC.

Time Nick Message
00:00 Matthaeus joined #gluster
00:05 sjm left #gluster
00:09 plarsen joined #gluster
00:21 ilbot3 joined #gluster
00:21 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
00:21 gdubreui joined #gluster
00:31 ceddybu kmai007:
00:31 ceddybu oops, did you have to manually remount the gluster volumes ?
00:31 kmai007 i did not
00:32 kmai007 i was trying to analyze what was the best course of action
00:32 kmai007 and waited it out
00:32 kmai007 and no more flutter
00:32 kmai007 from 09:50 - 12:01
00:32 sputnik13 joined #gluster
00:32 ceddybu everything came up without intervention ?
00:33 kmai007 it was up
00:33 kmai007 but alerts stopped by noon
00:33 kmai007 it appeared the volumes were slow?  i know on the clients 'df' was not responsive/quick
00:34 kmai007 no intervention, correct
00:34 ceddybu I'm with JoeJulian on blaming the network, disconnect times were all the same across all nodes
00:35 kmai007 yep i am starting to feel that too, but coincidentally, i made a vol file change thru the CLI
00:36 kmai007 i'm wondering if the change disconnected the RPC connection and it never established a good one....
00:36 kmai007 due to network congestion?
00:36 kmai007 so far fetched
00:39 ceddybu you started writing a huge file when the problem started?
00:41 kmai007 i couldn't say, its always writing,
00:41 kmai007 or reading
00:41 kmai007 static content/dynamic content
00:41 kmai007 i checked with my change management group and nothing appeared overly bloated
00:43 ceddybu anybody know if a 10GbE connection is always recommended or does it just depend on workload/filesize?
00:44 Ark depends on how many TBs you have to move ceddybu
00:45 jbd1 ceddybu: what Ark said.  I run my 9-node GlusterFS distributed-replicated setup on 1GbE and it's okay, but the cluster can easily saturate the pipe of any client trying to read
00:45 jbd1 ceddybu: so I wind up having to distribute read traffic to a bunch of proxy servers
00:46 jbd1 ceddybu: as soon as I can, I'll upgrade to 10GbE for storage-- it's just pricey
00:46 ceddybu the NIC ?
00:46 jbd1 ceddybu: yes, the client NIC is 1GbE too
00:47 ceddybu what about using bonded interfaces, 4 x 1GbE = 4GbE? Or does it not work that way...
00:48 jbd1 ceddybu: so with a Gluster FUSE mount, the client asks all the replicas in the cluster for a particular file.  They all respond but the client only listens to the one that responds first.  So in my prod environment, I max out the internal NIC on the proxy server to send ~ 650 Mbit to the customer
00:49 jbd1 ceddybu: if you use NFS, the client will ask the GlusterFS server you have mounted for the data, and if it isn't local, that server will request the data from the server which has the data, then forward it to your client.
00:49 jbd1 ceddybu: the NFS scenario, therefore, requires more bandwidth between the GlusterFS bricks than the FUSE scenario (which tends to exceed the client's bandwidth first).
00:50 ceddybu i see, planning on using fuse mounts
00:50 jbd1 ceddybu: it all depends on where you want the pain points to be :)
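A minimal sketch of the two mount styles jbd1 is contrasting above; the hostname and volume name are placeholders, not taken from this log:

    # FUSE/native mount: the client connects to every brick itself, so a read
    # is answered by whichever replica responds first
    mount -t glusterfs server1:/myvol /mnt/gluster

    # NFS mount via Gluster's built-in NFSv3 server: the mounted server fetches
    # the data from the brick that holds it and forwards it to the client
    mount -t nfs -o vers=3 server1:/myvol /mnt/gluster-nfs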
00:50 gtobon What does "One or more nodes do not support the required op version." mean?
00:51 gtobon I'm using Gluster 3.5
00:51 jbd1 gtobon: op version is the protocol version (introduced in 3.4) for communicating between glusterfs nodes.
00:51 jbd1 gtobon: is your whole cluster on 3.5?
00:51 gtobon How can I fix this error?
00:52 gtobon I have all my clients in 3.5
00:52 gtobon I migrated from 3.3 to 3.5
00:52 gtobon Yes all my cluster is in 3.5
00:53 jbd1 gtobon: you might be seeing something related to http://docs.puppetlabs.com/guides/templating.html
00:53 Gugge joined #gluster
00:53 glusterbot Title: Using Puppet Templates — Documentation — Puppet Labs (at docs.puppetlabs.com)
00:53 jbd1 argh, wrong link
00:53 jbd1 sorry
00:53 jbd1 https://bugzilla.redhat.com/show_bug.cgi?id=1090298
00:53 glusterbot Bug 1090298: high, unspecified, 3.4.4, ravishankar, CLOSED WONTFIX, Addition of new server after upgrade from 3.3 results in peer rejected
00:54 gtobon Ta, I'll have a look
00:54 jbd1 gtobon: basically, the op_version may not be in the info file, and it should be
00:54 jbd1 gtobon: and you can make it so by running any gluster volume set operation
00:55 gtobon That solution did not work for me
00:58 gtobon gluster volume set gv0_shares brick-log-level INFO
00:58 gtobon volume set: success
00:58 gtobon [STG1]root@nfs1.stg1.whispir.net:~/.ssh $ gluster system:: execute gsec_create
00:58 gtobon One or more nodes do not support the required op version.
00:59 yinyin joined #gluster
01:02 jbd1 gtobon: it sounds like maybe your nodes are not all actually running 3.5?  Sometimes the glusterfsd process doesn't actually get restarted when you install the new package
01:03 jbd1 gtobon: or, perhaps a node still doesn't have op_version in the info file.  I would recommend checking every brick
01:05 Ulrar_ left #gluster
01:06 gtobon Ok. let me check all the gluster versions
01:10 ceddybu jbd1: is that supposed to be in the .vol file ?
01:13 jbd1 ceddybu: op_version is stored in /var/lib/glusterd/vols/<vol name>/info
01:14 jbd1 iirc, op_version 1 is for glusterfs 3.3, op_version 2 is for 3.4.  Not sure about 3.5
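A quick way to check the on-disk op-version jbd1 is referring to, run on each node; the volume glob and peer hostnames below are placeholders:

    # show the op-version recorded for every volume on this node
    grep -H 'op-version' /var/lib/glusterd/vols/*/info

    # repeat on the other peers, e.g.
    for h in node2 node3; do ssh "$h" "grep -H 'op-version' /var/lib/glusterd/vols/*/info"; done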
01:15 gtobon type=2
01:15 gtobon count=2
01:15 gtobon status=1
01:15 gtobon sub_count=2
01:15 gtobon stripe_count=1
01:15 gtobon replica_count=2
01:16 gtobon version=5
01:16 gtobon transport-type=0
01:16 gtobon volume-id=996e4ed2-5772-421d-892f-45910163dac5
01:16 gtobon username=93881081-7f2c-4e4d-ac70-a4ce166fb55b
01:16 gtobon password=1d72f6dd-c765-4076-bcc9-cdf3416b053a
01:16 gtobon op-version=1
01:16 gtobon client-op-version=1
01:16 gtobon diagnostics.brick-log-level=INFO
01:16 gtobon server.statedump-path=/tmp/
01:16 gtobon brick-0=10.1.163.162:-shares
01:16 jbd1 @paste
01:16 glusterbot jbd1: For RPM based distros you can yum install fpaste, for debian and ubuntu it's pastebinit. Then you can easily pipe command output to [f] paste [binit] and it'll give you a URL.
01:16 gtobon brick-1=10.1.163.163:-shares
01:16 chirino joined #gluster
01:16 yinyin_ joined #gluster
01:16 gtobon @paste
01:16 glusterbot gtobon: For RPM based distros you can yum install fpaste, for debian and ubuntu it's pastebinit. Then you can easily pipe command output to [f] paste [binit] and it'll give you a URL.
01:16 jbd1 gtobon: you see op-version is 1 there, and it should be 2
01:17 gtobon in all my installations i have the same problem
01:18 gtobon Dev, Stg and prod
01:19 jbd1 gtobon: I would recommend gluster volume stop on all volumes, manually editing op-version to 3 and client-op-version to 3 in all info files on all nodes, then gluster volume start all volumes
01:20 jbd1 gtobon: I have to go, but good luck with it.
01:20 gtobon Ta
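A sketch of the manual fix jbd1 outlines above, assuming a two-node setup and the volume name from gtobon's paste; the target value (3 here, as jbd1 suggests) depends on the installed release, so keep backups of the info files:

    gluster volume stop gv0_shares
    for h in nfs1 nfs2; do
        ssh "$h" "sed -i.bak -e 's/^op-version=.*/op-version=3/' \
                             -e 's/^client-op-version=.*/client-op-version=3/' \
                             /var/lib/glusterd/vols/gv0_shares/info"
    done
    gluster volume start gv0_shares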
01:24 gdubreui joined #gluster
01:29 bala joined #gluster
01:35 baojg joined #gluster
01:35 marmalodak I've created the simplest possible gluster volume
01:35 marmalodak I would like to nfs mount it
01:35 marmalodak mount -t glusterfs works well
01:36 marmalodak This is the script I wrote to create the volume for myself
01:36 marmalodak http://fpaste.org/105552/
01:36 glusterbot Title: #105552 Fedora Project Pastebin (at fpaste.org)
01:36 marmalodak I'm beginning to think it's time for me to file a bug
01:36 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
01:37 marmalodak that script I pasted does everything that I understand needs to be done for this to work
01:37 marmalodak am I missing anything?
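For reference, NFS-mounting a Gluster 3.x volume normally needs rpcbind running on the server and an explicit NFSv3 request from the client, since Gluster's built-in NFS server only speaks v3; host and volume names here are placeholders:

    systemctl start rpcbind                                         # on the gluster server, before glusterd
    mount -t nfs -o vers=3,mountproto=tcp server1:/myvol /mnt/nfs   # on the client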
01:51 haomaiwa_ joined #gluster
02:11 MugginsO joined #gluster
02:28 XpineX joined #gluster
02:34 bharata-rao joined #gluster
02:38 hagarth joined #gluster
02:41 sjm joined #gluster
03:06 rjoseph joined #gluster
03:14 ceddybu joined #gluster
03:18 sjm joined #gluster
03:18 kshlm joined #gluster
03:26 marmalodak https://bugzilla.redhat.com/show_bug.cgi?id=1102460
03:26 glusterbot Bug 1102460: unspecified, unspecified, ---, kparthas, NEW , 0-rpc-service: Could not register with portmap
03:32 nbalachandran joined #gluster
03:37 Ark joined #gluster
03:41 shubhendu_ joined #gluster
03:42 vpshastry joined #gluster
03:44 ceddybu1 joined #gluster
03:46 itisravi joined #gluster
03:52 glusterbot New news from newglusterbugs: [Bug 1102460] 0-rpc-service: Could not register with portmap <https://bugzilla.redhat.com/show_bug.cgi?id=1102460>
04:00 sjm left #gluster
04:07 baojg joined #gluster
04:10 ppai joined #gluster
04:15 sputnik13 joined #gluster
04:18 psharma joined #gluster
04:30 gdubreui joined #gluster
04:34 kanagaraj joined #gluster
04:35 spandit joined #gluster
04:36 kdhananjay joined #gluster
04:41 harish joined #gluster
04:51 bala joined #gluster
04:52 davinder13 joined #gluster
04:54 bharata-rao joined #gluster
04:55 dusmant joined #gluster
05:07 davinder15 joined #gluster
05:08 lalatenduM joined #gluster
05:11 meghanam joined #gluster
05:11 meghanam_ joined #gluster
05:11 davinder15 joined #gluster
05:12 kumar joined #gluster
05:16 eshy joined #gluster
05:19 vpshastry joined #gluster
05:19 vpshastry left #gluster
05:21 RameshN joined #gluster
05:25 haomaiwang joined #gluster
05:33 nishanth joined #gluster
05:39 hagarth joined #gluster
05:43 haomaiwa_ joined #gluster
05:46 rastar joined #gluster
05:47 ndarshan joined #gluster
05:47 haomai___ joined #gluster
05:48 nshaikh joined #gluster
05:53 rjoseph joined #gluster
05:54 XpineX_ joined #gluster
05:57 dusmant joined #gluster
06:13 ricky-ti1 joined #gluster
06:18 vimal joined #gluster
06:37 rjoseph joined #gluster
06:38 dusmant joined #gluster
06:39 hagarth joined #gluster
06:52 bharata-rao joined #gluster
07:00 Pupeno_ joined #gluster
07:01 karimb joined #gluster
07:06 haomaiwa_ joined #gluster
07:09 haomai___ joined #gluster
07:20 keytab joined #gluster
07:41 Gugge joined #gluster
07:48 ramteid joined #gluster
07:48 fsimonce joined #gluster
07:48 ProT-0-TypE joined #gluster
07:56 ppai joined #gluster
07:57 ramteid joined #gluster
07:59 Gugge joined #gluster
08:08 ngoswami joined #gluster
08:14 DV joined #gluster
08:20 hchiramm_ joined #gluster
08:28 baojg joined #gluster
08:33 ppai joined #gluster
08:42 karimb joined #gluster
08:49 raghu joined #gluster
08:55 rastar joined #gluster
09:03 RameshN joined #gluster
09:05 hchiramm_ joined #gluster
09:06 ctria joined #gluster
09:11 jmarley joined #gluster
09:22 kshlm joined #gluster
09:23 hagarth joined #gluster
09:26 spandit joined #gluster
09:31 ppai joined #gluster
09:34 hchiramm_ joined #gluster
09:44 harish joined #gluster
09:48 [o__o] joined #gluster
09:52 [o__o] joined #gluster
09:55 glusterbot New news from newglusterbugs: [Bug 1095595] Stick to IANA standard while allocating brick ports <https://bugzilla.redhat.com/show_bug.cgi?id=1095595>
10:01 kshlm joined #gluster
10:06 ppai joined #gluster
10:10 suliba joined #gluster
10:19 qdk_ joined #gluster
10:21 kkeithley1 joined #gluster
10:21 ndarshan joined #gluster
10:22 dusmant joined #gluster
10:24 andreask joined #gluster
10:26 shubhendu_ joined #gluster
10:27 andreask joined #gluster
10:29 hchiramm_ joined #gluster
10:29 harish joined #gluster
10:44 harish joined #gluster
10:59 Gugge joined #gluster
11:02 karnan joined #gluster
11:02 dusmant joined #gluster
11:11 ngoswami joined #gluster
11:18 shubhendu_ joined #gluster
11:20 deepakcs joined #gluster
11:24 gdubreui joined #gluster
11:31 StarBeast joined #gluster
11:36 hchiramm_ joined #gluster
11:43 ppai joined #gluster
11:53 sjm joined #gluster
11:57 deepakcs_ joined #gluster
12:02 marbu joined #gluster
12:04 haomaiwa_ joined #gluster
12:08 karnan joined #gluster
12:11 itisravi joined #gluster
12:17 jag3773 joined #gluster
12:21 haomaiw__ joined #gluster
12:28 baojg joined #gluster
12:28 kdhananjay joined #gluster
12:28 edward1 joined #gluster
12:35 [o__o] joined #gluster
12:36 jag3773 joined #gluster
12:37 mbukatov joined #gluster
12:42 chirino joined #gluster
12:47 ngoswami joined #gluster
12:47 sroy joined #gluster
12:50 RameshN joined #gluster
12:54 ProT-0-TypE joined #gluster
12:54 sputnik13 joined #gluster
12:55 jag3773 joined #gluster
12:58 bennyturns joined #gluster
13:03 DV__ joined #gluster
13:05 japuzzo joined #gluster
13:07 DV joined #gluster
13:08 primechuck joined #gluster
13:11 rwheeler joined #gluster
13:12 cvdyoung joined #gluster
13:20 sroy joined #gluster
13:21 rotbeard joined #gluster
13:25 gdubreui joined #gluster
13:26 mjsmith2 joined #gluster
13:29 mortuar joined #gluster
13:33 [o__o] joined #gluster
13:39 sputnik13 joined #gluster
13:52 hagarth joined #gluster
13:52 jag3773 joined #gluster
13:53 hchiramm_ joined #gluster
13:54 theron joined #gluster
13:57 haomaiwa_ joined #gluster
13:58 vincent_1dk joined #gluster
14:06 haomaiw__ joined #gluster
14:07 haomai___ joined #gluster
14:09 davinder15 joined #gluster
14:15 jag3773 joined #gluster
14:16 teemupo left #gluster
14:22 diegows joined #gluster
14:26 wushudoin joined #gluster
14:38 eshy joined #gluster
14:39 ndk joined #gluster
14:40 recidive joined #gluster
14:53 plarsen joined #gluster
14:53 baojg joined #gluster
15:12 coredump joined #gluster
15:26 tdasilva joined #gluster
15:28 tjikkun joined #gluster
15:28 tjikkun joined #gluster
15:37 MacWinner joined #gluster
15:40 bennyturns joined #gluster
15:42 edward1 joined #gluster
15:47 edward1 joined #gluster
15:48 edward1 joined #gluster
15:49 edward1 joined #gluster
15:49 edward1 joined #gluster
15:52 zaitcev joined #gluster
15:56 sprachgenerator joined #gluster
15:56 glusterbot New news from newglusterbugs: [Bug 1094328] poor fio rand read performance <https://bugzilla.redhat.com/show_bug.cgi?id=1094328> || [Bug 1091677] Issues reported by Cppcheck static analysis tool <https://bugzilla.redhat.com/show_bug.cgi?id=1091677>
16:03 firemanxbr joined #gluster
16:05 sputnik13 joined #gluster
16:08 ProT-0-TypE joined #gluster
16:08 sprachgenerator joined #gluster
16:09 vpshastry joined #gluster
16:16 Mo__ joined #gluster
16:19 nshaikh joined #gluster
16:27 theron_ joined #gluster
16:28 vpshastry left #gluster
16:31 tjikkun joined #gluster
16:48 primechuck joined #gluster
16:57 ira joined #gluster
17:01 nishanth joined #gluster
17:01 sputnik13 joined #gluster
17:02 mjsmith2 joined #gluster
17:03 siel joined #gluster
17:04 daMaestro joined #gluster
17:11 sjusthome joined #gluster
17:19 lpabon joined #gluster
17:25 Matthaeus joined #gluster
17:27 glusterbot New news from newglusterbugs: [Bug 1100251] With glusterfs update to the latest version the existing hook scripts are not saved as rpm save. <https://bugzilla.redhat.com/show_bug.cgi?id=1100251>
17:35 primechuck joined #gluster
17:36 siel joined #gluster
17:36 siel joined #gluster
17:49 daMaestro joined #gluster
17:51 Matthaeus joined #gluster
17:53 primechu_ joined #gluster
18:09 davinder6 joined #gluster
18:12 jag3773 joined #gluster
18:16 davinder6 joined #gluster
18:49 jbd1 joined #gluster
19:03 XpineX_ joined #gluster
19:13 XpineX_ joined #gluster
19:22 mjsmith2 joined #gluster
19:24 mjsmith2_ joined #gluster
19:32 systemonkey joined #gluster
19:38 cvdyoung left #gluster
19:53 Matthaeus joined #gluster
19:57 Matthaeus joined #gluster
20:04 jbrooks joined #gluster
20:07 _dist joined #gluster
20:13 Sunghost joined #gluster
20:16 _dist JoeJulian: are you around? I had to take a node down yesterday, which is why I was asking about the VM thing again. I was hoping you knew an easy way to check if files are "healthy". I'm afraid to take my other node down now (performing upgrades that require a reboot on both of them)
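For context, the usual way to check heal state before taking another node down is roughly this (the volume name is a placeholder):

    gluster volume heal myvol info                # entries still pending heal
    gluster volume heal myvol info split-brain    # entries in split-brain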
20:16 Sunghost Hello, i have to back up data from a crashed cluster node - my question is: can i copy the files from the vol dir or out of the .glusterfs dir?
20:17 coredump joined #gluster
20:24 rwheeler joined #gluster
20:38 _dist I'm starting to consider that the healing issues may be related to running ZFS, and how its default xattr setup works. I'm curious if others who have had my VM healing issue were also running their gluster on ZFS.
20:46 _dist ah, looks like sa vs "default" (file-based) xattrs in ZFS may only be a performance issue. However, I'm not giving up on it being related. When I have some time I'll do some tests. Either way I did not realize (until now) that ZFS default behaviour is to store xattr data as separate files unless you specify otherwise.
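The ZFS setting _dist is describing can be inspected and changed per dataset; a minimal sketch with a placeholder dataset name (the change only affects newly written xattrs):

    zfs get xattr tank/gluster-brick      # 'on' means directory-based xattr files, the default
    zfs set xattr=sa tank/gluster-brick   # store xattrs in the dnode instead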
21:10 marmalodak Has anyone looked at https://bugzilla.redhat.com/show_bug.cgi?id=1102460 ?
21:10 glusterbot Bug 1102460: unspecified, unspecified, ---, kparthas, NEW , 0-rpc-service: Could not register with portmap
21:10 qdk_ joined #gluster
21:17 DanishMan joined #gluster
21:17 ry joined #gluster
21:21 * lanning wonders if marmalodak has remembered to run rpcbind...
21:23 lanning oh, script verifies via systemd...
21:32 marmalodak lanning: is there anything else I need to do?
21:32 lanning selinux?
21:33 marmalodak disabled
21:33 lanning hmmm
21:33 marmalodak my script is also at http://fpaste.org/105552/ with slightly better formatting
21:33 glusterbot Title: #105552 Fedora Project Pastebin (at fpaste.org)
21:33 marmalodak feedback welcome
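A few quick checks for the portmap registration lanning is hinting at (the volume name is a placeholder):

    systemctl status rpcbind
    rpcinfo -p | grep -E 'portmapper|mountd|nfs'
    gluster volume status myvol nfs       # shows whether Gluster's NFS server is online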
21:43 ry joined #gluster
21:46 mortuar joined #gluster
21:49 chirino joined #gluster
21:58 ctria joined #gluster
22:14 sjm joined #gluster
22:26 sjm left #gluster
22:35 ctria joined #gluster
22:39 MugginsO joined #gluster
22:44 Sunghost joined #gluster
22:48 bennyturns joined #gluster
22:50 Sunghost Hello, i have a problem with glusterfs 3.5 in a distributed fs. My vol is full and i want to move files via Midnight Commander to an extra disk; the files are copied but not deleted from the volume. The brick log shows lots of failures because the disk/volume is full.
23:07 eshy joined #gluster
23:15 kmai007 from the brick logs, can somebody tell me what the values at the tail end of the message mean?
23:15 kmai007 [2014-05-28 15:39:00.940103] I [server.c:762:server_rpc_notify] omhq1140: disconnecting connectionfrom vx1c6a-1432-2014/02/27-14:50:30:500073-prodstatic-client-0-0
23:17 kmai007 if it's a timestamp, it is quite old
23:17 kmai007 ok i get it, it matches the time stamp of the client glusterfs process
23:37 bennyturns joined #gluster
23:38 plarsen joined #gluster
23:41 sjm joined #gluster
23:42 tdasilva left #gluster
23:50 ira joined #gluster
23:55 theron joined #gluster
23:58 gdubreui joined #gluster
