IRC log for #gluster, 2017-03-28


All times shown according to UTC.

Time Nick Message
00:00 quest`` makes sense
00:01 JoeJulian heh, installing dpkg on Arch Linux.
00:02 quest`` JoeJulian: welcome to my world =)
00:02 * quest`` is an Arch user
00:02 quest`` but I am doing all this in ubuntu VM's
00:02 quest`` PKGBUILD is so freaking nice
00:02 JoeJulian Oh, nice. I've been doing Arch for a couple of years now.
00:02 JoeJulian yep
00:02 quest`` I forget that other distros are in packaging hell
00:03 quest`` yeah globals.c is blowing up hard
00:03 quest`` globals.c:79:28: error: 'GF_UPCALL_FLAGS_MAXVALUE' undeclared here (not in a function)
00:03 quest`` const char *gf_upcall_list[GF_UPCALL_FLAGS_MAXVALUE] = {
00:03 quest`` basically on every const
00:04 JoeJulian defined in glusterfs-fops.x
00:05 quest`` yep I see that
00:05 quest`` but make sure didn't =P
00:05 quest`` hehe
00:06 JoeJulian must be related to xdrgen
00:07 quest`` build process is still just ./autogen.sh; ./configure; make; sudo make install #right?
00:07 quest`` at least according to the readme =)
00:08 JoeJulian Looks that way. I wonder if xdrgen needs a newer autogen or something.
00:08 jockek joined #gluster
00:09 quest`` it did generate a lot of files...
00:10 JoeJulian Can you make sense of this? https://irclog.perlgeek.de/gluster-dev/2015-07-29#i_10973173
00:10 glusterbot Title: IRC log for #gluster-dev, 2015-07-29 (at irclog.perlgeek.de)
00:11 quest`` yeah, I got it creating the .c and .h files
00:11 quest`` I see them as untracked files
00:11 quest`` but I don't see them included correctly
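
The undeclared constant comes from an enum in glusterfs-fops.x, which the build is supposed to turn into generated .c/.h files before globals.c compiles. A rough recovery sketch for a checkout where those generated files are stale or only half-written, assuming the usual rpc/xdr/src layout (the path is an assumption, and git clean is destructive to anything untracked):

    cd glusterfs
    git clean -dfx                  # WARNING: removes every untracked/generated file in the tree
    ./autogen.sh
    ./configure
    make -C rpc/xdr/src             # regenerate the headers derived from glusterfs-fops.x first
    grep -R "GF_UPCALL_FLAGS_MAXVALUE" rpc/xdr/src | head   # the constant should now appear in a generated header
    make && sudo make install
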
00:33 bwerthmann joined #gluster
00:41 pdrakeweb joined #gluster
00:56 bwerthmann joined #gluster
01:16 moneylotion joined #gluster
01:17 shdeng joined #gluster
01:24 plarsen joined #gluster
01:25 baber joined #gluster
01:27 Somedream joined #gluster
01:27 edong23 joined #gluster
01:37 sadbox joined #gluster
01:48 ilbot3 joined #gluster
01:48 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:14 prasanth joined #gluster
02:20 vinurs joined #gluster
02:23 ashiq joined #gluster
03:09 Gambit15 joined #gluster
03:39 magrawal joined #gluster
03:45 atinm joined #gluster
03:46 riyas joined #gluster
03:47 binwiederhier joined #gluster
03:47 itisravi joined #gluster
03:48 binwiederhier does anyone know if the small file improvements described here are in the latest stable 3.8? http://blog.gluster.org/2016/10/gluster-tiering-and-small-file-performance/
03:48 glusterbot Title: Gluster tiering and small file performance | Gluster Community Website (at blog.gluster.org)
03:51 binwiederhier i think i answered my own question; it's in 3.9 as per https://github.com/gluster/glusterfs/blob/v3.9.0/xlators/performance/md-cache/src/md-cache.c#L2761
03:51 glusterbot Title: glusterfs/md-cache.c at v3.9.0 · gluster/glusterfs · GitHub (at github.com)
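
For questions like this, a git checkout answers "which release first contains this change" directly. A minimal sketch, where the commit hash and the grep pattern are placeholders rather than values from the log, and v3.8.10 is just an example 3.8.x tag:

    git clone https://github.com/gluster/glusterfs && cd glusterfs
    git tag --contains <commit-sha> | sort -V      # every release tag that already includes the commit
    git grep -n "<pattern>" v3.8.10 -- xlators/performance/md-cache/   # or grep a specific tag for the code itself
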
03:53 nishanth joined #gluster
03:54 CmndrSp0ck joined #gluster
03:54 Telsin joined #gluster
03:56 dominicpg joined #gluster
03:57 sanoj joined #gluster
03:58 msvbhat joined #gluster
04:00 kramdoss_ joined #gluster
04:02 gyadav joined #gluster
04:06 binwiederhier left #gluster
04:13 karthik_us joined #gluster
04:18 Shu6h3ndu joined #gluster
04:27 jkroon joined #gluster
04:31 skoduri joined #gluster
04:33 buvanesh_kumar joined #gluster
04:35 sanoj joined #gluster
04:47 ppai joined #gluster
04:51 apandey joined #gluster
04:51 skumar joined #gluster
04:52 kramdoss_ joined #gluster
04:54 kdhananjay joined #gluster
04:55 poornima joined #gluster
04:58 jiffin joined #gluster
05:13 vinurs joined #gluster
05:21 kramdoss_ joined #gluster
05:26 ndarshan joined #gluster
05:30 hgowtham joined #gluster
05:43 kramdoss_ joined #gluster
05:44 amarts :O
05:45 apandey joined #gluster
05:59 vinurs joined #gluster
06:02 Saravanakmr joined #gluster
06:04 Karan joined #gluster
06:04 kramdoss_ joined #gluster
06:09 ppai joined #gluster
06:13 Philambdo joined #gluster
06:15 XpineX joined #gluster
06:31 ankush joined #gluster
06:31 jtux joined #gluster
06:32 jtux left #gluster
06:37 msvbhat joined #gluster
06:41 sona joined #gluster
06:47 jtux joined #gluster
06:47 jtux left #gluster
06:48 mpingu joined #gluster
06:52 Saravanakmr joined #gluster
06:56 kramdoss_ joined #gluster
07:05 anbehl joined #gluster
07:09 ju5t joined #gluster
07:13 ghenry joined #gluster
07:13 ghenry joined #gluster
07:18 ashiq joined #gluster
07:23 jkroon joined #gluster
07:31 mbukatov joined #gluster
07:32 p7mo joined #gluster
07:36 Seth_Karlo joined #gluster
07:39 vinurs joined #gluster
07:40 kblin hey folks
07:40 kblin I've got a gluster setup where one of my servers is spinning glusterfsds at high CPU usage
07:41 kblin I've got a large volume heal info list, so I assume that's the reason
07:42 kblin now, if I shut down that particular server's gluster daemons, the "gluster volume heal <volume> info" command suddenly runs way faster
07:42 kblin like half a minute instead of half an hour
07:43 kblin if that particular node's glusterfsds are running, the heal list doesn't seem to get smaller either
07:43 kblin or at least not at a rate that's
07:44 kblin remotely acceptable
07:44 flying joined #gluster
07:45 kblin now, I'm also running the system-installed gluster which is 3.5.1, so it's ancient, and I'd like to use the time that all my stuff is down to upgrade to 3.10
07:46 kblin seeing how waiting two weeks for the heal list to get cut down isn't really acceptable, but I guess upgrading while the heal list still has that many entries is also a bad idea, right?
07:47 kblin so I guess my best move is to back up the data, nuke the glusterfs setup, and start from scratch after the upgrade?
07:53 ppai joined #gluster
07:55 kblin nice, I can make gluster volume heal VOL info grind to a halt by starting glusterfsd on the bad node
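
A few rough ways to watch the pending-heal backlog without waiting on the full heal info listing; the volume name is a placeholder, and the statistics subcommand may not exist on a release as old as 3.5.1:

    gluster volume heal <volume> statistics heal-count   # per-brick pending counts, if the subcommand exists
    gluster volume heal <volume> info | grep -c '^/'     # crude count of path entries in the listing
    gluster volume heal <volume> info split-brain        # entries that will never heal on their own
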
08:37 kdhananjay1 joined #gluster
08:40 kdhananjay joined #gluster
08:43 msvbhat joined #gluster
08:47 karthik_us|lunch joined #gluster
08:49 subscope joined #gluster
08:49 ppai joined #gluster
08:58 Seth_Karlo joined #gluster
09:05 shwethahp joined #gluster
09:05 Seth_Karlo joined #gluster
09:16 jwd joined #gluster
09:28 hgowtham joined #gluster
09:30 kdhananjay joined #gluster
09:31 kramdoss_ joined #gluster
09:40 rastar joined #gluster
09:43 MrAbaddon joined #gluster
09:44 rafi joined #gluster
09:48 nishanth joined #gluster
10:04 kdhananjay joined #gluster
10:08 kdhananjay joined #gluster
10:12 jtux joined #gluster
10:20 zakharovvi[m] joined #gluster
10:33 kkeithley @later ask quest what was the outcome of your trusty build?
10:51 rafi1 joined #gluster
10:54 rafi joined #gluster
11:00 kdhananjay joined #gluster
11:01 bfoster joined #gluster
11:02 itisravi kblin: do you at least see afr_log_self_heal_completion_status messages in glustershd.log of the nodes?
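
A quick check of whether the self-heal daemon is logging completions at all, assuming the default log location; the function name is the one asked about above and shows up in the bracketed source location of each log line:

    grep -c 'afr_log_self_heal_completion_status' /var/log/glusterfs/glustershd.log
    tail -n 20 /var/log/glusterfs/glustershd.log
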
11:08 kkeithley Gluster Community Bug Triage in 50 minutes in #gluster-meeting
11:30 kramdoss_ joined #gluster
11:31 kkeithley @later ask JoeJulian to kick glusterbot
11:31 kkeithley @help
11:31 glusterbot kkeithley: (help [<plugin>] [<command>]) -- This command gives a useful description of what <command> does. <plugin> is only necessary if the command is in more than one plugin. You may also want to use the 'list' command to list all available plugins and commands.
11:31 kkeithley @help later
11:31 glusterbot kkeithley: Error: There is no command "later". However, "Later" is the name of a loaded plugin, and you may be able to find its provided commands using 'list Later'.
11:31 kkeithley @list Later
11:31 glusterbot kkeithley: notes, remove, tell, and undo
11:32 kkeithley @later tell quest  what was the outcome of your trusty build?
11:32 glusterbot kkeithley: The operation succeeded.
11:44 legreffier joined #gluster
11:45 skumar_ joined #gluster
11:47 shyam joined #gluster
11:54 kkeithley Gluster Community Bug Triage in 5 minutes in #gluster-meeting
11:55 skumar joined #gluster
12:03 skoduri joined #gluster
12:18 kblin itisravi: uhm, let me check
12:20 kramdoss_ joined #gluster
12:21 kblin itisravi: not recently, but I've taken glusterfsd down on the misbehaving machine
12:21 kblin itisravi: there's some disk IO issue going on there that seems unrelated to gluster
12:22 kblin at least I can reproduce the machine locking up if I generate other IO load
12:22 itisravi glusterfsd meaning the brick process or self-heal daemon process? Both are glusterfsds.
12:22 kblin all of them
12:22 itisravi kblin: I was just trying to see if heals were actually happening even if it was slow.
12:23 kblin well, they are happening, as far as I can tell
12:23 kblin I ran a btrfs scrub overnight, and that came back fine
12:23 itisravi oh that will lead to more heals if I/O is happening from clients.
12:23 kblin no, the cluster is down, no clients have it mounted
12:23 itisravi ah ok
12:24 itisravi then better bring back the gluster processes and let the heals complete.
12:24 kblin but the heal list has ~33k entries, and in the 30 minutes it took me to bike to work, the self-heal completed around 30 heals
12:24 itisravi For these 30 heals are they data/ metadata or entry heals?
12:25 itisravi the glustershd log should say that.
12:30 unclemarc joined #gluster
12:31 kblin ok, let me see if I can find log entries for the 30 minutes I ran stuff this morning
12:31 Seth_Karlo joined #gluster
12:34 itisravi If it is data selfheals,  then setting cluster.data-self-heal-algorithm to 'full' instead of the default 'diff' might help bring down the cpu usage.
12:34 kblin itisravi: so for the time I ran it this morning, it only has a bunch of "Conflicting entries for .." and timeout errors
12:34 kblin which is not surprising, as the affected machine grinds to a halt if there's heavy IO hitting the disk
12:35 kdhananjay joined #gluster
12:35 kblin I'm still stumped as to why _that_ is happening. Both the RAID controller and SMART think the drives are OK
12:35 kblin and btrfs scrub is happy with the file system
12:36 msvbhat joined #gluster
12:36 itisravi There might be a gfid split-brain
12:37 kblin but even with a dd if=/dev/zero of=/vol/test.img bs=1G count=100 I can produce the same lock-up
12:37 kblin where /vol is the filesystem my bricks live on
12:38 itisravi kblin: Do you see " gfid differs" or "filetype differs"  alongside the conflicting entries message?
12:38 itisravi oh.
12:38 kblin so I'm suspecting that my problem really is that that node can't keep up with the writes that are happening
12:38 itisravi mhm.
12:39 kblin dd writes to a raid 5 SAS with like 12 MB/s
12:39 kblin so something clearly is at odds there
12:40 kblin unfortunately, I don't have easy access to the hardware, so if there really is a hardware issue, I'll have to do without that machine for some time
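
One way to separate the disk problem from gluster is to repeat the write test with direct I/O, which bypasses the page cache, and watch device latency while it runs. The mount point and file name are placeholders; this assumes GNU dd and the sysstat package:

    dd if=/dev/zero of=/vol/ddtest.img bs=1M count=4096 oflag=direct conv=fsync
    iostat -xm 2 5                  # run in another shell; watch await and %util for the RAID device
    rm -f /vol/ddtest.img
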
12:43 * itisravi has to go now.
12:43 kblin thanks for the help
12:43 kblin much appreciated
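
The tuning suggested above as a command, plus a rough way to classify completed heals by type from the self-heal daemon log; the volume name is a placeholder and the exact log wording differs between releases:

    gluster volume set <volume> cluster.data-self-heal-algorithm full
    grep -oE 'Completed (data|metadata|entry) selfheal' /var/log/glusterfs/glustershd.log | sort | uniq -c
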
13:08 Humble joined #gluster
13:15 raghu joined #gluster
13:20 gyadav joined #gluster
13:30 Gambit15 JoeJulian> Gambit15: I would zero out another (probably the arbiter).
13:31 Gambit15 It was the arbiter that I zeroed out in the first place
13:32 Gambit15 No luck by the way. Over 12 hours later & the file is still in the arbiter's heal list.
13:33 Humble joined #gluster
13:34 Gambit15 I'm going to checksum both of the images on the replica pair now, rsync them if there're any differences, and then reset the file attributes on all bricks.
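
A sketch of what comparing the replica copies and inspecting their replication metadata on the bricks might look like; the brick paths and file name are placeholders, and editing trusted.afr.* xattrs by hand is risky. On a new enough release the CLI can resolve a split-brain directly instead:

    md5sum /bricks/data1/images/vm.img                     # run on each data brick and compare
    getfattr -d -m . -e hex /bricks/data1/images/vm.img    # trusted.afr.* pending counters and trusted.gfid
    gluster volume heal <volume> split-brain source-brick <host>:<brick-path> <file-on-volume>
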
13:38 sanoj joined #gluster
13:38 susant joined #gluster
13:44 ira joined #gluster
13:46 skylar joined #gluster
14:01 msvbhat joined #gluster
14:05 p7mo joined #gluster
14:10 mbukatov joined #gluster
14:12 ws2k3 joined #gluster
14:21 annettec joined #gluster
14:28 sonal joined #gluster
14:44 rwheeler joined #gluster
14:49 ivan_rossi joined #gluster
14:49 ivan_rossi left #gluster
14:50 farhorizon joined #gluster
14:53 oajs joined #gluster
15:00 jiffin joined #gluster
15:02 rastar joined #gluster
15:03 vbellur joined #gluster
15:10 wushudoin joined #gluster
15:15 jtux left #gluster
15:17 msvbhat joined #gluster
15:17 riyas joined #gluster
15:22 hybrid512 joined #gluster
15:30 ankush joined #gluster
15:42 susant joined #gluster
15:47 vbellur joined #gluster
15:47 alvinstarr joined #gluster
15:52 major seems like it should be possible to write tools to diagnose these sorts of issues...
16:00 MyWay left #gluster
16:06 raghu joined #gluster
16:08 major JoeJulian, you know that the description of the meetup still claims the 15th right?
16:13 Wizek_ joined #gluster
16:19 major So damn twitchy at not being able to work on this code ..
16:29 Gambit15 joined #gluster
16:29 susant joined #gluster
17:05 kpease joined #gluster
17:14 rafi joined #gluster
17:16 Seth_Karlo joined #gluster
17:18 jwd joined #gluster
17:19 nishanth joined #gluster
17:20 davidj left #gluster
17:20 social joined #gluster
17:27 programmerq joined #gluster
17:28 Seth_Karlo joined #gluster
17:29 cliluw joined #gluster
17:36 rafi joined #gluster
17:44 baber joined #gluster
17:44 sonal joined #gluster
17:45 sona joined #gluster
17:56 cliluw joined #gluster
17:58 kpease joined #gluster
17:59 rafi joined #gluster
18:01 arpu joined #gluster
18:05 major JoeJulian, passed the link to the meet-up off to some others.
18:06 kpease joined #gluster
18:18 major JoeJulian, the building isn't going to spontaneously catch fire right?
18:22 * Gambit15 lights a match
18:22 major heh
18:25 rastar joined #gluster
18:39 sonal left #gluster
18:42 arpu joined #gluster
18:53 alvinstarr I have a striped/replicated volume that I am trying to geo-replicate to a non-striped volume. A lot of my files are 0-sized.
18:53 alvinstarr Do the master and the geo-replication target need to have the same geometry?
19:35 thatgraemeguy joined #gluster
19:36 TBlaar joined #gluster
19:39 farhorizon joined #gluster
19:43 jiffin1 joined #gluster
19:44 bluenemo joined #gluster
19:48 baber joined #gluster
19:49 thatgraemeguy joined #gluster
19:54 TBlaar joined #gluster
20:04 kpease joined #gluster
20:10 burn joined #gluster
20:30 baber joined #gluster
20:53 Gambit15 joined #gluster
20:57 baber joined #gluster
21:05 vbellur joined #gluster
21:20 baber joined #gluster
21:26 nathwill joined #gluster
21:30 rafi joined #gluster
22:00 Klas joined #gluster
22:02 raghu joined #gluster
22:25 vbellur joined #gluster
22:38 major JoeJulian, can you see SWAT from your end of the street?
22:51 Seth_Karlo joined #gluster
23:21 arpu joined #gluster
23:32 BitByteNybble110 joined #gluster
23:33 BitByteNybble110 Hey all - Have a three-node cluster running 3.10 with a three-way replica volume between all three nodes. Restarted one node for maintenance, checked "gluster volume heal replica-vol info" on that node, and the endpoints of the bricks for the other two peers will bounce between saying "Endpoint not connected" and giving a list of healing files
23:34 BitByteNybble110 The other two nodes that were not restarted always show all node endpoints as connected when asking for heal information about the same volume
23:34 BitByteNybble110 Is this normal?
23:37 BitByteNybble110 CPU utilization is higher on the healing node as I would expect, I've just never observed this behavior and wonder if it's something I should be worried about
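
Some commands for confirming that the restarted node has re-established its connections to the other bricks and self-heal daemons; the volume name is taken from the question above, and output details vary by release:

    gluster peer status
    gluster volume status replica-vol          # brick, self-heal daemon and port status as this node sees it
    gluster volume status replica-vol shd      # self-heal daemon processes only, if the subcommand is available
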
23:39 Seth_Karlo joined #gluster
23:44 bwerthmann joined #gluster
23:58 ahino joined #gluster
