
IRC log for #gluster, 2015-05-11


All times shown according to UTC.

Time Nick Message
00:06 joshin joined #gluster
00:34 plarsen joined #gluster
00:49 prg3 joined #gluster
00:51 gildub joined #gluster
01:04 lkoranda joined #gluster
01:05 julim joined #gluster
01:17 lkoranda joined #gluster
01:30 Twistedgrim joined #gluster
01:37 pdrakeweb joined #gluster
01:53 lyang0 joined #gluster
01:56 harish joined #gluster
02:06 nangthang joined #gluster
02:24 rbazen ll
02:27 wushudoin joined #gluster
02:35 badone_ joined #gluster
02:55 bharata-rao joined #gluster
03:03 hagarth joined #gluster
03:04 bene2 joined #gluster
03:06 lexi2 joined #gluster
03:13 kripper joined #gluster
03:13 kripper ping
03:13 glusterbot kripper: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
03:22 Kins joined #gluster
03:26 MrAbaddon joined #gluster
03:27 gem joined #gluster
03:45 karnan joined #gluster
03:47 lexi2 joined #gluster
03:49 TheSeven joined #gluster
03:50 itisravi joined #gluster
03:50 kumar joined #gluster
03:58 nbalacha joined #gluster
04:04 meghanam joined #gluster
04:08 ppai joined #gluster
04:08 shubhendu_ joined #gluster
04:14 ndarshan joined #gluster
04:18 kanagaraj joined #gluster
04:31 sakshi joined #gluster
04:50 ramteid joined #gluster
04:50 RameshN joined #gluster
04:58 deepakcs joined #gluster
04:59 jiffin joined #gluster
05:03 pppp joined #gluster
05:07 lalatenduM joined #gluster
05:07 Apeksha joined #gluster
05:08 anil joined #gluster
05:08 schandra joined #gluster
05:09 gem joined #gluster
05:17 Bhaskarakiran joined #gluster
05:19 nishanth joined #gluster
05:21 nangthang joined #gluster
05:23 spandit joined #gluster
05:23 dusmant joined #gluster
05:26 Anjana joined #gluster
05:28 ramteid joined #gluster
05:30 R0ok_ joined #gluster
05:30 lalatenduM__ joined #gluster
05:33 Philambdo joined #gluster
05:34 lalatenduM__ joined #gluster
05:46 lalatenduM__ joined #gluster
05:47 nsoffer joined #gluster
05:49 lalatenduM joined #gluster
05:50 rafi joined #gluster
05:54 ashiq joined #gluster
05:55 maveric_amitc_ joined #gluster
06:06 mjrosenb joined #gluster
06:07 * mjrosenb is getting State: Peer Rejected (Connected)
06:07 mjrosenb and the one or two things I found on gluster.org did not seem to help :-/
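For readers landing here with the same symptom: "Peer Rejected (Connected)" usually means the rejected node's volume configuration no longer matches the rest of the trusted pool (a checksum mismatch). A minimal sketch of the recovery procedure described in the gluster.org documentation of that era, assuming default paths and a sysvinit system; back up /var/lib/glusterd before touching anything:

    # On the rejected node only:
    service glusterd stop
    cd /var/lib/glusterd
    # Keep glusterd.info (this node's UUID); clear the rest of the state.
    find . -mindepth 1 -maxdepth 1 ! -name glusterd.info -exec rm -rf {} +
    service glusterd start
    gluster peer probe <healthy-peer>   # re-sync volume config from the pool
    service glusterd restart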
06:11 rgustafs joined #gluster
06:18 Manikandan joined #gluster
06:18 Manikandan_ joined #gluster
06:24 gem joined #gluster
06:27 mkzero joined #gluster
06:27 jtux joined #gluster
06:36 nsoffer joined #gluster
06:37 rafi mjrosenb: Can you elaborate ?
06:40 DV joined #gluster
06:43 [Enrico] joined #gluster
06:43 meghanam joined #gluster
06:43 Bhaskarakiran joined #gluster
06:46 hgowtham joined #gluster
06:51 ndarshan joined #gluster
07:08 dusmant joined #gluster
07:12 deniszh joined #gluster
07:12 atalur joined #gluster
07:14 meghanam joined #gluster
07:16 mbukatov joined #gluster
07:19 atalur joined #gluster
07:27 DV joined #gluster
07:30 glusterbot News from newglusterbugs: [Bug 1220270] nfs-ganesha: Rename fails while exectuing Cthon general category test <https://bugzilla.redhat.com/show_bug.cgi?id=1220270>
07:31 ppai joined #gluster
07:36 Anjana joined #gluster
07:37 Bhaskarakiran joined #gluster
07:37 Slashman joined #gluster
07:38 purpleidea joined #gluster
07:38 purpleidea joined #gluster
07:45 [Enrico] joined #gluster
07:48 LebedevRI joined #gluster
08:00 kovshenin joined #gluster
08:04 _shaps_ joined #gluster
08:06 dusmant joined #gluster
08:13 spiekey joined #gluster
08:14 _shaps_ joined #gluster
08:21 ninkotech joined #gluster
08:23 jiffin cd -
08:27 _shaps_ left #gluster
08:38 ghenry joined #gluster
08:46 ndarshan joined #gluster
08:47 shubhendu_ joined #gluster
08:50 dusmant joined #gluster
08:54 gildub joined #gluster
09:07 ctria joined #gluster
09:14 Bhaskarakiran joined #gluster
09:31 glusterbot News from newglusterbugs: [Bug 1210404] BVT; Selinux throws AVC errors while running DHT automation on Rhel6.6 <https://bugzilla.redhat.com/show_bug.cgi?id=1210404>
09:35 Norky joined #gluster
09:36 harish joined #gluster
09:52 Anjana joined #gluster
10:01 glusterbot News from newglusterbugs: [Bug 1206461] sparse file self heal fail under xfs version 2 with speculative preallocation feature on <https://bugzilla.redhat.com/show_bug.cgi?id=1206461>
10:04 jcastill1 joined #gluster
10:04 ale84ms joined #gluster
10:06 ale84ms Hello, I'm sorry for the repost, but last time I did not have much time to wait and I had to log off before I received a proper answer... I have a problem with gluster. We deployed two servers running gluster, and we wrote a very large amount of small files to it, organized in directories. We have about 10000K directories right now. Write operations are triggered by a client (glusterfs version 3.4.4), and it
10:06 ale84ms writes only to a subset of the directories, depending on the nature of incoming data. The problem is that after a while, memory consumption goes up to 57GB. I tried the command "echo 3 > /proc/sys/vm/drop_caches" but it does not help. Is this a known issue with the 3.4.x versions of gluster? If so, has it been solved in newer versions?
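A note on that command: it needs the shell redirection ("echo 3 > /proc/sys/vm/drop_caches"), and it only drops kernel page, dentry, and inode caches. If the 57GB is resident memory of the glusterfs client process itself, dropping kernel caches cannot reclaim it; a statedump is the usual way to inspect what the process is holding. A sketch, assuming root and a single client mount:

    # Drop kernel caches; the '>' redirection is required:
    sync
    echo 3 > /proc/sys/vm/drop_caches

    # If the glusterfs client process itself grew, trigger a statedump
    # (written under /var/run/gluster by default) and inspect it:
    kill -USR1 $(pgrep -o -x glusterfs)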
10:12 ninkotech joined #gluster
10:12 ninkotech_ joined #gluster
10:18 ndarshan joined #gluster
10:18 shubhendu_ joined #gluster
10:20 karnan joined #gluster
10:22 jcastillo joined #gluster
10:22 ninkotech joined #gluster
10:22 ninkotech_ joined #gluster
10:24 soumya joined #gluster
10:25 ninkotech joined #gluster
10:25 ninkotech_ joined #gluster
10:33 ninkotech joined #gluster
10:33 ninkotech_ joined #gluster
10:42 Sjors joined #gluster
11:01 glusterbot News from newglusterbugs: [Bug 1220329] DHT Rebalance : Misleading log messages for linkfiles <https://bugzilla.redhat.com/show_bug.cgi?id=1220329>
11:01 glusterbot News from newglusterbugs: [Bug 1220332] dHT rebalance: Dict_copy log messages when running rebalance on a dist-rep volume <https://bugzilla.redhat.com/show_bug.cgi?id=1220332>
11:01 glusterbot News from newglusterbugs: [Bug 1215571] Data Tiering: add tiering set options to volume set help (cluster.tier-demote-frequency and cluster.tier-promote-frequency) <https://bugzilla.redhat.com/show_bug.cgi?id=1215571>
11:03 glusterbot News from resolvedglusterbugs: [Bug 1153569] client connection establishment takes more time for rdma only volume <https://bugzilla.redhat.com/show_bug.cgi?id=1153569>
11:04 ira joined #gluster
11:25 ndarshan joined #gluster
11:29 firemanxbr joined #gluster
11:31 glusterbot News from newglusterbugs: [Bug 1220338] unable to start the volume with the latest beta1 rpms <https://bugzilla.redhat.com/show_bug.cgi?id=1220338>
11:31 glusterbot News from newglusterbugs: [Bug 1220340] Cannot start gluster volume <https://bugzilla.redhat.com/show_bug.cgi?id=1220340>
11:33 glusterbot News from resolvedglusterbugs: [Bug 1146492] mount hangs for rdma type transport if the network is busy. <https://bugzilla.redhat.com/show_bug.cgi?id=1146492>
11:33 glusterbot News from resolvedglusterbugs: [Bug 1171142] RDMA: iozone fails with fwrite: Input/output error when write-behind translator is enabled <https://bugzilla.redhat.com/show_bug.cgi?id=1171142>
11:33 glusterbot News from resolvedglusterbugs: [Bug 1206744] current glusterfs fails to build on Ubuntu Precise: 'RDMA_OPTION_ID_REUSEADDR' undeclared <https://bugzilla.redhat.com/show_bug.cgi?id=1206744>
11:48 vimal joined #gluster
12:00 rafi1 joined #gluster
12:01 glusterbot News from newglusterbugs: [Bug 1220347] Read operation on a file which is in split-brain condition is successful <https://bugzilla.redhat.com/show_bug.cgi?id=1220347>
12:01 glusterbot News from newglusterbugs: [Bug 1220348] Client hung up on listing the files on a perticular directory <https://bugzilla.redhat.com/show_bug.cgi?id=1220348>
12:02 meghanam joined #gluster
12:03 glusterbot News from resolvedglusterbugs: [Bug 1220340] Cannot start gluster volume <https://bugzilla.redhat.com/show_bug.cgi?id=1220340>
12:04 anrao joined #gluster
12:04 ppai joined #gluster
12:07 dusmant joined #gluster
12:15 itisravi_ joined #gluster
12:15 gem joined #gluster
12:16 aaronott joined #gluster
12:22 nangthang joined #gluster
12:24 ppai joined #gluster
12:27 rafi joined #gluster
12:28 rafi joined #gluster
12:32 Prilly joined #gluster
12:41 plarsen joined #gluster
12:42 stickyboy joined #gluster
12:42 haomaiwang joined #gluster
12:49 rafi1 joined #gluster
12:55 jiffin1 joined #gluster
13:02 glusterbot News from newglusterbugs: [Bug 1220381] unable to start the volume with the latest beta1 rpms <https://bugzilla.redhat.com/show_bug.cgi?id=1220381>
13:05 haomaiwang joined #gluster
13:09 rgustafs joined #gluster
13:11 nsoffer joined #gluster
13:14 bennyturns joined #gluster
13:15 nishanth joined #gluster
13:16 ale84ms I have a problem with gluster. We deployed two servers running gluster, and we wrote a very large amount of small files to it, organized in directories. We have about 10000K directories right now. Write operations are triggered by a client (glusterfs version 3.4.4), and it writes only to a subset of the directories, depending on the nature of incoming data. The problem is that after a while, memory
13:16 ale84ms consumption goes up to 57GB. I tried the command "echo 3 > /proc/sys/vm/drop_caches" but it does not help. Is this a known issue with the 3.4.x versions of gluster? If so, has it been solved in newer versions?
13:16 dusmant joined #gluster
13:19 theron joined #gluster
13:20 hagarth joined #gluster
13:27 Gill joined #gluster
13:29 hamiller joined #gluster
13:30 bennyturns joined #gluster
13:33 georgeh-LT2 joined #gluster
13:36 jiffin joined #gluster
13:39 harish joined #gluster
13:42 nishanth joined #gluster
13:42 julim joined #gluster
13:43 mjrosenb joined #gluster
13:44 dusmant joined #gluster
13:44 * mjrosenb guesses nobody replied, since it is not even 0700 on the west coast
13:46 Apeksha joined #gluster
13:47 dgandhi joined #gluster
13:50 mike25de EU is not sleeping :)
13:52 mjrosenb good point.
13:54 mjrosenb rafi1: I have two bricks memoryalpha and memorybeta, currently, they don't seem to be talking to each other.
13:54 badone_ joined #gluster
13:54 rafi1 mjrosenb: how many nodes do you have ?
13:59 mjrosenb how is a node different from a brick? (I suspect the answer is 2)
13:59 Supermathie joined #gluster
14:13 MrAbaddon joined #gluster
14:16 rafi joined #gluster
14:16 jobewan joined #gluster
14:21 jobewan joined #gluster
14:22 jobewan joined #gluster
14:22 jobewan joined #gluster
14:25 premera joined #gluster
14:28 hgowtham joined #gluster
14:33 jiku joined #gluster
14:34 bene2 joined #gluster
14:34 badone_ joined #gluster
14:35 dblack joined #gluster
14:38 deepakcs joined #gluster
14:41 nbalacha joined #gluster
14:41 meghanam joined #gluster
14:42 lexi2 joined #gluster
14:42 hgowtham joined #gluster
14:48 meghanam joined #gluster
14:58 theron joined #gluster
15:00 ale84ms exit
15:08 cyberbootje joined #gluster
15:16 halfinhalfout joined #gluster
15:19 shubhendu_ joined #gluster
15:22 Danishman joined #gluster
15:23 lpabon joined #gluster
15:24 Gill joined #gluster
15:31 hgowtham joined #gluster
15:32 Manikandan joined #gluster
15:33 vovcia mjrosenb: a brick is a directory on a node, You can have many bricks on one node
15:33 vovcia mjrosenb: bricks together combine into a gluster volume
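In other words: servers (nodes) form a trusted pool, each node exports one or more bricks (directories), and a volume is assembled from those bricks. A minimal sketch with hypothetical hostnames and brick paths:

    # Run from server1; "server2", "myvol" and the paths are made up.
    gluster peer probe server2
    gluster volume create myvol replica 2 \
        server1:/export/brick1 server2:/export/brick1
    gluster volume start myvol
    gluster volume info myvol    # two nodes, two bricks, one volume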
15:42 ghenry joined #gluster
15:44 kripper joined #gluster
15:47 ProT-0-TypE joined #gluster
15:48 dblack joined #gluster
15:51 vimal joined #gluster
15:54 hchiramm_home joined #gluster
16:05 cholcombe joined #gluster
16:07 gem joined #gluster
16:20 halfinhalfout1 joined #gluster
16:28 ekman- joined #gluster
16:28 bene2 joined #gluster
16:30 mibby joined #gluster
16:30 edong23 joined #gluster
16:32 JonathanS joined #gluster
16:36 hchiramm_home joined #gluster
16:38 mjrosenb ahh, then two nodes, two bricks one volume.
16:54 nishanth joined #gluster
17:00 kumar joined #gluster
17:09 Rapture_ joined #gluster
17:22 haomaiwang joined #gluster
17:23 JonathanD joined #gluster
17:23 spiekey joined #gluster
17:29 hchiramm joined #gluster
17:33 Philambdo joined #gluster
17:35 hchiramm joined #gluster
17:56 mjrosenb what is peer probe supposed to do?
17:57 halfinhalfout joined #gluster
18:02 halfinhalfout joined #gluster
18:19 rwheeler joined #gluster
18:23 mjrosenb Hostname: memorybeta Uuid: 00000000-0000-0000-0000-000000000000
18:23 mjrosenb well, that seems... odd.
18:25 eclectic_ joined #gluster
18:26 natgeorg joined #gluster
18:26 ckotil_ joined #gluster
18:26 ghenry_ joined #gluster
18:26 mrErikss1n joined #gluster
18:26 nhayashi_ joined #gluster
18:26 Intensity joined #gluster
18:26 Intensity joined #gluster
18:26 jcastillo joined #gluster
18:26 sac joined #gluster
18:26 eljrax joined #gluster
18:27 twx joined #gluster
18:27 NuxRo joined #gluster
18:27 kripper joined #gluster
18:27 cyberbootje joined #gluster
18:27 maveric_amitc_ joined #gluster
18:27 atrius joined #gluster
18:27 edwardm61 joined #gluster
18:27 CyrilPeponnet joined #gluster
18:27 jbrooks joined #gluster
18:27 kkeithley joined #gluster
18:27 T0aD joined #gluster
18:27 dastar_ joined #gluster
18:27 scuttlemonkey joined #gluster
18:27 tuxcrafter joined #gluster
18:28 frakt joined #gluster
18:28 jackdpeterson joined #gluster
18:28 RobertLaptop joined #gluster
18:28 vovcia mjrosenb: peer probe connects probed node to cluster
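That is, the probe is issued from a node already in the trusted pool, naming the node to be added; afterwards each side should list the other with a real (non-zero) UUID. A sketch using the hostnames from this discussion:

    # From a node already in the pool (e.g. memoryalpha):
    gluster peer probe memorybeta
    # Verify on both nodes; each peer should show a proper UUID and
    # "State: Peer in Cluster (Connected)":
    gluster peer status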
18:29 michatotol_ joined #gluster
18:32 capri joined #gluster
18:33 jiffin joined #gluster
18:33 anrao joined #gluster
18:35 deniszh joined #gluster
18:40 jiffin joined #gluster
18:50 mjrosenb ok, I figured out the issue with peer probe not working
18:50 mjrosenb now I can't get the volume information onto the misbehaving node :-(
18:52 mjrosenb also, it looks like this node's uuid has been 0 for several years now :-(
18:53 mjrosenb vovcia: so from a node that is in a cluster, I should probe a node that is not in the cluster?
18:53 vovcia mjrosenb: yes
18:53 mjrosenb c.c, that didn't work, so I did it the other way around
18:54 mjrosenb now they see each other, but as I said a few minutes ago, no volume information there
18:56 mjrosenb [2015-05-11 14:43:58.672997] I [glusterd-handler.c:411:glusterd_friend_find] 0-glusterd: Unable to find peer by uuid
18:56 mjrosenb that may have to do with my uuid of 0.
19:02 mjrosenb ooh, that machine thinks it has a different uuid?
19:03 * mjrosenb wonders how peer probe got the wrong uuid
19:03 * mjrosenb wonders how safe it is to manually copy the uuid over
19:03 mjrosenb along with the rest of the state
19:04 kdhananjay joined #gluster
19:05 halfinhalfout1 joined #gluster
19:08 mjrosenb wtf? I just replaced the uuid= line of /var/lib/glusterd/peers/memorybeta with the correct uuid, and started glusterd
19:08 mjrosenb but it is still claiming that the uuid is 0.
19:12 vovcia if You can, just reset all configuration
19:12 raddessi joined #gluster
19:12 vovcia peer probe should be performed from inside the cluster, or from the first node, never the other way
19:12 vovcia because You will end up with 2 conflicting gluster clusters
19:16 ckotil joined #gluster
19:18 mjrosenb ok, so how do I reset it?
19:19 mjrosenb presumably, I need to run peer detach old_node from the node I accidentally ran peer probe old_node from?
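That is the usual way to undo a probe. A sketch; note that a detach is refused while the peer still hosts bricks of a volume, and a "force" variant exists for peers that are unreachable or hold stale state:

    # Run from the node the accidental probe was issued from:
    gluster peer detach old_node
    # Only if the clean detach is refused and the state is known stale:
    gluster peer detach old_node force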
19:21 vovcia dunno))
19:25 mjrosenb uhhh, wtf?
19:26 mjrosenb ok, so peer detach looks like it did the right thing
19:26 spiekey joined #gluster
19:26 mjrosenb then peer probe in the other direction left it in a strange state
19:27 mjrosenb now gluster volume info on the node that I correctly ran peer probe from says no volumes
19:27 mjrosenb but volume info from the "new" node gives all of the information
19:27 vovcia so You did in wrong direction
19:28 vovcia add peer from node with volume
19:28 vovcia why don't You just uninstall/remove all data and start from scratch?
19:28 mjrosenb "remove all data"
19:29 mjrosenb that sounds like a spectacularly painful idea.
19:30 vovcia oh this is Your production actually?
19:31 mjrosenb yeah.  it's been running for like 5 years now
19:31 mjrosenb and every once in a while something like this happens, where I need to re-initialize one of the nodes.
19:32 mjrosenb but it is a different story every time
19:35 mjrosenb ok, the format of /var/lib/glusterd/peers just chaged noticably
19:35 mjrosenb noticeably?
19:35 mjrosenb it used to have the hostname as the filename
19:36 mjrosenb now, one has an ip address, and the other has a uuid
19:36 mjrosenb and both machines seem to have reasonable data in /var/lib/glusterd/vols/*/info
19:39 redbeard joined #gluster
19:42 lpabon joined #gluster
19:47 jiffin joined #gluster
19:50 MrAbaddon joined #gluster
20:03 spiekey joined #gluster
20:06 mjrosenb ok, as far as I can tell, memoryalpha seems to think that memorybeta's uuid is 0, so whenever memorybeta asks memoryalpha to do anything, memoryalpha never responds.
20:12 Slashman joined #gluster
20:18 halfinhalfout joined #gluster
20:21 Gill joined #gluster
20:42 Gill_ joined #gluster
20:45 mjrosenb glusterd.c:95:glusterd_uuid_init] 0-glusterd: retrieved UUID: 00000000-0000-0000-0000-000000000000
20:45 mjrosenb oh, so it *thinks* its uuid is 0.
20:51 hamiller joined #gluster
21:01 haomaiw__ joined #gluster
21:03 ninkotech__ joined #gluster
21:06 mjrosenb aaargh, WHERE ON EARTH is this information being stored?
21:12 Telsin /var/lib/glusterd/glusterd.info usually
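That file holds the node's own identity as a single key=value line, and, as this log goes on to suggest, a malformed value can surface as the all-zeros UUID after a failed parse. A hypothetical example (the UUID shown is made up):

    # /var/lib/glusterd/glusterd.info -- the UUID must be five hyphen-
    # separated hex groups (8-4-4-4-12), 36 characters in total:
    UUID=de305d54-75b4-431b-adb2-eb6b9e546014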
21:12 soumya joined #gluster
21:17 mjrosenb ok, I have /no/ clue how I am consistently getting 0.
21:18 mjrosenb since that file certainly has data in it.
21:18 mjrosenb Telsin: also, that isn't where a given node stores information about its peers, right?
21:19 chirino_m joined #gluster
21:19 Telsin yeah, in /var/lib/glusterd/peers/
21:20 Telsin check read perms or selinux issues?
21:20 mjrosenb certainly not selinux issues.
21:21 mjrosenb so, i found the peer file with uuid=0000000..., and edited it manually to have the correct uuid in it
21:21 mjrosenb but when I start the brick afterwards, it still thinks its uuid is 0
21:22 mjrosenb actually, I wonder if the uuid I've been setting it to is actually invalid
21:22 Telsin glusterd gets that from the peer when it probes it
21:22 Telsin so maybe it's rewriting it on you?
21:22 Telsin this a problem for the whole node, or only one brick?
21:23 mjrosenb everyone seems to agree that memorybeta's uuid is 00000...
21:24 mjrosenb ahh, and memorybeta's glusterd.info is 1 digit longer than memoryalpha's
21:24 mjrosenb so, I'm guessing parsing it with an extra character causes some sort of exception, and it translates the invalid id to 000000...
21:25 Telsin sounds likely
21:26 mjrosenb and there isn't anything like a checksum in the uuid?
21:26 mjrosenb just needs to have the right number of digits, and the -'es in the correct spots?
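There is indeed no checksum in a standard UUID: the textual form is just five hyphen-separated hex groups (8-4-4-4-12), plus version/variant bits that, to my knowledge, the parser does not enforce. A quick format check, assuming the default file location:

    # Succeeds only if the UUID has the 8-4-4-4-12 hex layout:
    grep -E '^UUID=[0-9a-fA-F]{8}(-[0-9a-fA-F]{4}){3}-[0-9a-fA-F]{12}$' \
        /var/lib/glusterd/glusterd.info && echo "format looks valid"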
21:26 chirino joined #gluster
21:27 Telsin looks like a uuid to me, so probably, but don't know for sure
21:28 mjrosenb [2015-05-11 21:27:24.566959] I [glusterd.c:95:glusterd_uuid_init] 0-glusterd: retrieved UUID: 14feb846...
21:28 mjrosenb Huzzah!
21:28 mjrosenb now to find out how many of my problems were being caused by /that/
21:31 mjrosenb wow, that indeed fixed all of my problems!
21:32 Telsin nice
21:32 * mjrosenb now wonders how painful it will be to upgrade from 3.3.0
21:35 gildub joined #gluster
21:41 shaunm joined #gluster
21:42 soumya joined #gluster
22:18 Rapture joined #gluster
22:20 plarsen joined #gluster
22:21 CaptainHarold joined #gluster
22:38 DV joined #gluster
22:46 Gill joined #gluster
22:50 diegows joined #gluster
23:17 prg3 joined #gluster
23:24 verdurin joined #gluster
23:29 o5k joined #gluster
23:30 plarsen joined #gluster
