IRC log for #gluster, 2016-01-08


All times shown according to UTC.

Time Nick Message
00:18 haomaiwa_ joined #gluster
00:26 Champi joined #gluster
00:36 ron-slc joined #gluster
00:38 shyam joined #gluster
00:53 plarsen joined #gluster
00:54 zhangjn joined #gluster
00:55 zhangjn joined #gluster
00:58 hagarth joined #gluster
01:12 haomaiwa_ joined #gluster
01:14 EinstCrazy joined #gluster
01:15 zhangjn joined #gluster
01:34 Lee1092 joined #gluster
01:34 DV__ joined #gluster
01:38 ahino joined #gluster
01:38 Javezim joined #gluster
01:48 kdhananjay joined #gluster
02:02 amye joined #gluster
02:08 volga629_ joined #gluster
02:08 volga629_ Hello everyone, I see in the log something like this:
02:09 volga629_ http://fpaste.org/308417/52218976/
02:09 glusterbot Title: #308417 Fedora Project Pastebin (at fpaste.org)
02:09 volga629_ And I can't find what provides it
02:10 haomaiwa_ joined #gluster
02:11 hagarth joined #gluster
02:13 nangthang joined #gluster
02:14 harish_ joined #gluster
02:18 volga629_ I have /usr/lib64/glusterfs/3.7.6/rpc-transport/socket.so
02:18 volga629_ and gluster looks in /usr/lib64/glusterfs/3.7.6/xlator/rpc-transport/socket.so
02:18 volga629_ is this the same library ?
02:19 nangthang joined #gluster
02:22 volga629_ nothing provides xlator/rpc-transport/socket.so, so I assume it's the wrong directory
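
A quick way to confirm what ships the transport library and where it actually lives, a minimal sketch assuming an RPM-based install; the 3.7.6 paths are the ones quoted above:

    # Which package owns the library that does exist?
    rpm -qf /usr/lib64/glusterfs/3.7.6/rpc-transport/socket.so

    # List every transport library glusterfs installed
    ls /usr/lib64/glusterfs/3.7.6/rpc-transport/
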
02:23 zhangjn joined #gluster
02:23 kotreshhr joined #gluster
02:23 nage joined #gluster
02:25 harish joined #gluster
02:25 nangthang joined #gluster
02:30 shortdudey123 joined #gluster
02:33 volga629_ I see bug report https://bugzilla.redhat.com/show_bug.cgi?id=1283833, when will it be released?
02:33 glusterbot Bug 1283833: medium, unspecified, ---, amukherj, MODIFIED , Warning messages seen in glusterd logs in executing gluster volume set help
02:36 JoeJulian volga629_: I see that 3.7.7 hasn't been tagged yet, so it should be any day now.
02:37 volga629_ ok
02:37 volga629_ thanks
02:38 volga629_ I found really good monitoring scripts written in python
02:38 volga629_ need to see right now how to integrate them into check_mk
02:39 volga629_ as a classic nagios check through nrpe
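
For context, wiring a script into nagios over NRPE usually looks like the sketch below; the plugin path, check name, volume, and host are placeholders, not a specific gluster plugin:

    # On the gluster node, in /etc/nagios/nrpe.cfg:
    command[check_gluster]=/usr/lib64/nagios/plugins/check_gluster.py --volume myvol

    # From the monitoring server, test the check through nrpe:
    /usr/lib64/nagios/plugins/check_nrpe -H gluster-node1 -c check_gluster
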
02:39 zhangjn_ joined #gluster
02:41 sankarshan_away joined #gluster
02:48 ilbot3 joined #gluster
02:48 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:50 sankarshan_ joined #gluster
02:53 zhangjn joined #gluster
03:01 haomaiwa_ joined #gluster
03:17 zhangjn_ joined #gluster
03:18 zhangjn joined #gluster
03:21 shubhendu joined #gluster
03:23 ashiq joined #gluster
03:46 atinm joined #gluster
03:53 kanagaraj joined #gluster
03:57 kotreshhr left #gluster
04:00 itisravi joined #gluster
04:01 haomaiwang joined #gluster
04:02 dusmant joined #gluster
04:02 bharata-rao joined #gluster
04:07 vmallika joined #gluster
04:10 spalai joined #gluster
04:16 amye joined #gluster
04:17 nbalacha joined #gluster
04:17 nehar joined #gluster
04:19 sakshi joined #gluster
04:25 Manikandan joined #gluster
04:29 kshlm joined #gluster
04:31 RameshN joined #gluster
04:40 kotreshhr joined #gluster
04:47 ppai joined #gluster
04:51 jiffin joined #gluster
04:56 calavera joined #gluster
05:01 haomaiwang joined #gluster
05:01 pppp joined #gluster
05:08 gem joined #gluster
05:18 ndarshan joined #gluster
05:21 frozengeek joined #gluster
05:22 aravindavk joined #gluster
05:31 Apeksha joined #gluster
05:41 arcolife joined #gluster
05:43 hgowtham joined #gluster
05:52 zhangjn joined #gluster
05:53 vimal joined #gluster
05:56 poornimag joined #gluster
05:58 zhangjn_ joined #gluster
05:59 overclk joined #gluster
06:01 haomaiwa_ joined #gluster
06:03 anil joined #gluster
06:03 zhangjn joined #gluster
06:05 Bhaskarakiran joined #gluster
06:07 kdhananjay joined #gluster
06:12 atalur joined #gluster
06:12 rafi joined #gluster
06:26 karnan joined #gluster
06:27 spalai joined #gluster
06:28 vimal joined #gluster
06:31 skoduri joined #gluster
06:33 spalai joined #gluster
06:36 spalai left #gluster
06:49 ppai joined #gluster
06:55 deepakcs joined #gluster
07:01 haomaiwa_ joined #gluster
07:02 spalai joined #gluster
07:08 SOLDIERz joined #gluster
07:10 dusmant joined #gluster
07:12 itisravi joined #gluster
07:12 nangthang joined #gluster
07:14 ashiq joined #gluster
07:16 kanagaraj joined #gluster
07:19 ramky joined #gluster
07:20 mhulsman joined #gluster
07:21 mhulsman joined #gluster
07:25 mhulsman joined #gluster
07:25 jtux joined #gluster
07:30 spalai joined #gluster
07:30 unforgiven512 joined #gluster
07:39 frozengeek joined #gluster
07:43 mobaer joined #gluster
07:49 EinstCrazy joined #gluster
07:51 ashiq joined #gluster
07:52 kanagaraj joined #gluster
07:53 Humble joined #gluster
08:02 haomaiwa_ joined #gluster
08:04 zhangjn_ joined #gluster
08:06 Slashman joined #gluster
08:07 jwd joined #gluster
08:09 dusmant joined #gluster
08:10 [Enrico] joined #gluster
08:16 frozengeek joined #gluster
08:17 fsimonce joined #gluster
08:25 haomaiwa_ joined #gluster
08:28 auzty joined #gluster
08:30 nangthang joined #gluster
08:33 kdhananjay joined #gluster
08:39 arcolife joined #gluster
08:39 Saravana_ joined #gluster
08:40 ivan_rossi joined #gluster
08:42 nangthang joined #gluster
08:42 F2Knight_ joined #gluster
08:43 zhangjn joined #gluster
08:48 shubhendu joined #gluster
08:48 ivan_rossi left #gluster
08:49 vmallika joined #gluster
08:51 kovshenin joined #gluster
08:53 haomaiwang joined #gluster
08:54 ron-slc joined #gluster
08:58 guest23223 joined #gluster
09:04 kshlm joined #gluster
09:04 rafi1 joined #gluster
09:05 haomaiwang joined #gluster
09:08 Slashman joined #gluster
09:18 jeek joined #gluster
09:19 jeek "gluster volume heal $volumename info|grep entries" is reporting 4192 over and over on one of the nodes; it looks like it might have hung. Any idea where I can find info on troubleshooting this?
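
One way to watch whether the count is actually moving, a sketch assuming the volume is named myvol:

    # Print the per-brick pending-heal counts once a minute; a number
    # that never changes suggests self-heal has stalled, not finished
    while true; do
        date
        gluster volume heal myvol info | grep 'Number of entries'
        sleep 60
    done
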
09:20 shubhendu joined #gluster
09:29 ramky joined #gluster
09:35 itisravi jeek: what version of gluster is this?
09:37 kshlm joined #gluster
09:39 jeek glusterfs 3.6.1 built on Nov 12 2014 19:02:19
09:42 kdhananjay joined #gluster
09:47 ira joined #gluster
09:50 Chinorro joined #gluster
09:50 haomaiwa_ joined #gluster
09:52 jeek It was initially up around 4800. It started dropping pretty quickly, then froze at 4192. It's still on 4192.
09:52 jeek I think the issue is related to a directory that seems to be in the brick on one of the four servers, but doesn't appear inside the mount on any of them.
09:55 itisravi jeek: does `gluster v heal volname info split-brain` show up anything? You could check if the gfid of that directory is the same on all the bricks.
09:59 jeek Splitbrain has 0 entries on all four servers.
09:59 jeek The directory itself does appear inside of the brick on all four servers, but does not appear through the mount.
10:02 haomaiwa_ joined #gluster
10:02 itisravi can you do a `getfattr -d -m . -e hex /brick{1..4}/path/to/directory |grep gfid` on all bricks and see if it is the same?
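
Spelled out, the comparison looks roughly like this; hostnames and brick paths are examples:

    # trusted.gfid must be byte-identical on every brick of the replica
    for host in server1 server2 server3 server4; do
        ssh $host "getfattr -h -d -m trusted.gfid -e hex /brick/path/to/directory"
    done
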
10:03 jeek It is the same.
10:04 itisravi If you know the directory name, you can also do a stat of the directory from the mount, giving the full path. Does that give any error?
10:05 jeek No such file or directory.
10:05 itisravi ouch
10:05 jeek On all four servers.
10:05 itisravi what does the mount log say for this operation?
10:06 atalur joined #gluster
10:06 jeek Doesn't seem to mention it. Also, as far as I can tell, the directory shouldn't exist.
10:07 itisravi why should it not exist?
10:07 jeek It was created and then deleted.
10:07 itisravi oh
10:08 MessedUpHare joined #gluster
10:08 itisravi is it a 2x2 volume?
10:08 jeek Yes.
10:10 rafi joined #gluster
10:17 rafi1 joined #gluster
10:19 arcolife joined #gluster
10:23 MessedUpHare joined #gluster
10:30 rafi joined #gluster
10:32 shubhendu joined #gluster
10:38 emitor joined #gluster
10:39 harish_ joined #gluster
10:40 ChrisNBlum joined #gluster
10:45 d0nn1e joined #gluster
10:54 mhulsman joined #gluster
10:56 jwd joined #gluster
11:01 mhulsman joined #gluster
11:01 haomaiwa_ joined #gluster
11:02 jeek itisravi: Does that matter?
11:05 itisravi jeek: It shouldn't, I was wondering if it is a replica 3 volume.
11:06 itisravi If the 4192 entries were files that were present in this directory and the directory was deleted, then they would be stale entries.
11:06 itisravi But heal info has the logic to purge stale entries when it is run the next time.
11:07 itisravi Since the entries seem to remain, they are perhaps not stale ones.
11:07 jeek The number of entries on the four servers are 0/0/4192/1
11:08 itisravi Does heal info list the names of the files or the gfids?
11:09 jeek A mix of the two.
11:09 jeek Where it lists a name, it always starts with /cache/
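
For the entries that show up as bare gfids, each one maps to a hard link (or, for directories, a symlink) under the brick's .glusterfs tree, so it can be resolved to a real path; a sketch with an example gfid and brick path:

    # A gfid maps to <brick>/.glusterfs/<first 2 hex>/<next 2 hex>/<gfid>
    gfid=6ba7b810-9dad-11d1-80b4-00c04fd430c8
    ls -l /brick/.glusterfs/${gfid:0:2}/${gfid:2:2}/$gfid

    # For regular files, recover the real path via the shared inode number
    inode=$(stat -c %i /brick/.glusterfs/${gfid:0:2}/${gfid:2:2}/$gfid)
    find /brick -inum $inode -not -path '*/.glusterfs/*'
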
11:14 karnan joined #gluster
11:14 itisravi IS /cache/ the directory that was deleted?
11:16 zhangjn joined #gluster
11:17 zhangjn joined #gluster
11:17 zhangjn joined #gluster
11:20 Bhaskarakiran joined #gluster
11:22 itisravi If not, just pick up /cache/some_file and check the afr extended attributes on both bricks (getfattr ...)
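
That check would look something like this on each brick (the brick path and file name are examples); non-zero trusted.afr.* values mean pending operations against the other replica:

    # Dump the AFR changelog xattrs for one of the listed files
    getfattr -d -m trusted.afr -e hex /brick/cache/some_file
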
11:23 vmallika joined #gluster
11:30 jeek Yes, cache is the directory that was deleted.
11:31 rafi1 joined #gluster
11:34 Bhaskarakiran joined #gluster
11:38 itisravi jeek: okay, so they might be stale entries. Can you run heal info on any of the nodes and see if there are errors/warnings in /var/log/glusterfs/glfsheal-volname.log on that node?
11:39 jeek That file exists for all volumes except the one I am having issues with.
11:40 itisravi you mean all bricks?
11:42 itisravi jeek: that is strange, it should be created when you run heal info for any volume for the first time
11:45 jeek It's back up to 1331, after getting down to 650.
11:46 itisravi jeek: 4192 to 650 means heals were happening.
11:47 itisravi jeek: if it is increasing, writes from clients are failing on some of the bricks.
11:47 itisravi possibly network disconnects.
11:48 jeek Is there a setting to not write if a physical volume is X% full?
11:52 itisravi you can explore the min-free-disk option or volume quotas.
11:54 jeek Quotas are off.
11:54 jeek Where can I check min-free-disk?
11:54 atalur joined #gluster
11:58 zhangjn_ joined #gluster
11:59 R0ok_ jeek: cluster.min-free-disk volume option
11:59 R0ok_ jeek: https://gluster.readthedocs.org/en/latest/Administrator%20Guide/Managing%20Volumes/
11:59 glusterbot Title: Managing Volumes - Gluster Docs (at gluster.readthedocs.org)
11:59 zhangjn joined #gluster
12:00 vmallika joined #gluster
12:05 jeek I've read the page, not seeing a way to get the current value.
12:07 itisravi Use gluster vol get
12:07 itisravi default is 10%
12:07 itisravi I'm not sure if the command is there in 3.6 though.
12:07 jeek Unrecognized word.
12:08 itisravi If default values are overridden using volume set, it will be displayed in volinfo output.
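
Putting those together, a sketch for inspecting and setting the option; myvol is a placeholder, and as noted above `volume get` may not exist on 3.6:

    # Any overridden value shows up in volume info
    gluster volume info myvol | grep -i min-free-disk

    # On releases that have it:
    gluster volume get myvol cluster.min-free-disk

    # Reserve 10% free space per brick (accepts a percentage or a size)
    gluster volume set myvol cluster.min-free-disk 10%
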
12:11 nottc joined #gluster
12:13 jeek OK, so with the physical volume 95% full, that's probably the cause of some of these issues.
12:14 d-fence joined #gluster
12:18 EinstCrazy joined #gluster
12:23 kovshenin joined #gluster
12:25 jeek Is it possible to just delete all files that say they need to be healed?
12:26 RameshN joined #gluster
12:32 ppai joined #gluster
12:40 spalai left #gluster
12:42 R0ok_ jeek: on the brick or on the volume mount point ?
12:45 DV joined #gluster
12:49 EinstCrazy joined #gluster
12:51 argonius joined #gluster
12:53 argonius hi *
12:53 argonius i am running a distributed-replicated gluster volume
12:53 argonius but now, running gluster volume status VG01, I can see different Free Disk Space on every node
12:53 EinstCrazy joined #gluster
12:54 argonius and wondering a bit
12:54 argonius http://pastebin.com/p5b29482
12:54 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
12:54 argonius http://fpaste.org/308568/
12:54 glusterbot Title: #308568 Fedora Project Pastebin (at fpaste.org)
12:56 MessedUpHare_ joined #gluster
12:56 inodb joined #gluster
13:00 jeek R0ok: Either would be fine.
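
If anyone attempts this, a heavily caveated sketch: deleting through the fuse mount (never directly on a brick) keeps the replicas consistent, and this only catches entries heal info reports by name, not bare gfids. The volume and mount names are examples:

    # DESTRUCTIVE: removes every named file that heal info lists
    gluster volume heal myvol info | grep '^/' | sort -u | while read f; do
        rm -f "/mnt/myvol$f"
    done
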
13:00 MessedUpHare joined #gluster
13:06 mhulsman joined #gluster
13:14 onebree left #gluster
13:15 onebree joined #gluster
13:16 onebree JoeJulian: I am running gluster 3.5.3, and CentOS 7. My client is not in a container. I am running the mount command on my local machine, and I want the bricks to be on other servers we have in our network (dev0 - dev2).
13:17 onebree I will need to check what dmesg is -- never heard of it before. (Kind of new to Linux stuff -- I just know the basics)
13:17 rafi joined #gluster
13:25 chirino joined #gluster
13:26 plarsen joined #gluster
13:28 dlambrig left #gluster
13:31 ndevos onebree: "dmesg" is a command you can execute, it'll print the recent kernel log
13:31 ndevos onebree: most of it would also be available in /var/log/messages (or on newer distributions "journalctl")
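
In other words, either of these works, on CentOS 7 included:

    # Last 50 kernel messages
    dmesg | tail -n 50

    # The same via the journal on systemd distributions
    journalctl -k -n 50
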
13:39 Manikandan joined #gluster
13:49 onebree ndevos: Does the output of dmesg contain sensitive information? My output is 800+ lines, so I do not want to miss anything (like IPs, hostnames, etc.) as I skim through it.
13:50 kdhananjay joined #gluster
13:51 ndevos onebree: it might
13:53 onebree Okay. Let me read through it, then I will tag you and JoeJ to read it.
13:54 mobaer joined #gluster
13:54 onebree I mean, I can submit my ruby code, but I would rather (and was advised to) test locally first, before having my boss test it and waste time on errors I could have caught myself
13:54 d0nn1e joined #gluster
14:03 onebree ndevos / JoeJulian: my dmesg logs. https://gist.github.com/onebree/dfa295f302a422d5452d
14:03 glusterbot Title: dmesg.log · GitHub (at gist.github.com)
14:06 mhulsman1 joined #gluster
14:22 bennyturns joined #gluster
14:25 unclemarc joined #gluster
14:32 onebree Great news! Boss helped me out -- I needed to edit resolv.conf, and am now able to mount :-)
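
For anyone hitting the same thing: if the brick hostnames (dev0-dev2 here) don't resolve, the fuse mount fails before it ever reaches gluster. A sketch; the nameserver, search domain, volume, and mount point are placeholders:

    # /etc/resolv.conf
    search example.internal
    nameserver 10.0.0.53

    # Once dev0 resolves, the usual mount works:
    mount -t glusterfs dev0:/myvol /mnt/myvol
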
14:36 harold joined #gluster
14:39 kotreshhr joined #gluster
14:44 onebree Thanks for all your help, ndevos, JoeJulian, and everyone else!
14:44 onebree left #gluster
14:51 haomaiwang joined #gluster
14:53 mhulsman joined #gluster
14:55 nangthang joined #gluster
15:00 nbalacha joined #gluster
15:02 haomaiwa_ joined #gluster
15:13 shyam joined #gluster
15:15 mobaer joined #gluster
15:16 jiffin joined #gluster
15:18 adamaN joined #gluster
15:23 nangthang joined #gluster
15:31 bowhunter joined #gluster
15:31 hagarth joined #gluster
15:32 jiffin1 joined #gluster
15:33 coredump joined #gluster
15:35 skoduri joined #gluster
15:38 shyam joined #gluster
15:51 farhorizon joined #gluster
15:52 zhangjn joined #gluster
15:52 kshlm joined #gluster
15:55 natgeorg joined #gluster
15:55 rafi joined #gluster
15:55 zhangjn joined #gluster
16:00 neofob joined #gluster
16:00 shyam joined #gluster
16:01 mobaer joined #gluster
16:01 haomaiwa_ joined #gluster
16:04 atalur joined #gluster
16:04 farhorizon joined #gluster
16:05 Bhaskarakiran joined #gluster
16:05 chirino joined #gluster
16:06 kshlm joined #gluster
16:30 squizzi joined #gluster
16:35 rafi1 joined #gluster
16:35 jiffin1 joined #gluster
16:36 chirino joined #gluster
16:37 coredump joined #gluster
16:40 rafi joined #gluster
16:48 inodb_ joined #gluster
16:52 atalur joined #gluster
16:53 shubhendu joined #gluster
17:01 haomaiwa_ joined #gluster
17:03 RedW joined #gluster
17:05 calavera joined #gluster
17:08 Manikandan joined #gluster
17:09 dblack joined #gluster
17:17 bowhunter joined #gluster
17:22 Humble joined #gluster
17:22 shyam joined #gluster
17:27 jwaibel joined #gluster
17:27 PatNarciso joined #gluster
17:31 rotbeard joined #gluster
17:33 siel joined #gluster
17:43 Rapture joined #gluster
17:43 bit4man joined #gluster
17:45 F2Knight joined #gluster
17:59 amye joined #gluster
18:01 haomaiwa_ joined #gluster
18:03 JesperA_ joined #gluster
18:19 shaunm joined #gluster
18:21 atalur joined #gluster
18:29 mhulsman joined #gluster
18:46 Manikandan joined #gluster
18:47 neofob joined #gluster
18:48 rabhat joined #gluster
18:54 neofob left #gluster
19:01 haomaiwa_ joined #gluster
19:13 harold joined #gluster
19:23 chirino joined #gluster
19:48 dgbaley joined #gluster
19:48 dgbaley I just enabled a usage limit on a fairly large volume. The listing doesn't show that much is used. Is this going to update itself eventually or can I force it?
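
For reference, the usage a limit reports can be inspected directly, and the accounting catches up as the tree gets crawled; a sketch with placeholder names, where the stat crawl is only a nudge, not a documented guarantee:

    # Show configured limits and the usage gluster currently believes
    gluster volume quota myvol list

    # Walking the tree through a mount triggers lookups that can
    # refresh the accounting
    find /mnt/myvol -exec stat {} + > /dev/null
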
19:58 unclemarc_ joined #gluster
20:01 haomaiwa_ joined #gluster
20:08 sloop joined #gluster
20:09 kotreshhr left #gluster
20:10 kotreshhr joined #gluster
20:12 kotreshhr left #gluster
20:14 unclemarc joined #gluster
20:17 neofob joined #gluster
20:19 shaunm joined #gluster
20:57 mhulsman joined #gluster
21:01 haomaiwa_ joined #gluster
21:45 ron-slc joined #gluster
21:56 mhulsman joined #gluster
22:01 haomaiwa_ joined #gluster
23:01 haomaiwa_ joined #gluster
23:13 pdrakeweb joined #gluster
23:25 hagarth joined #gluster
23:42 jwd joined #gluster
