
IRC log for #gluster, 2015-05-08


All times are shown in UTC.

Time Nick Message
00:04 MrAbaddon joined #gluster
00:07 Prilly joined #gluster
00:10 trav-sj left #gluster
00:13 jvandewege joined #gluster
00:15 ninkotech__ joined #gluster
00:20 rjoseph|afk joined #gluster
00:31 Prilly joined #gluster
00:36 Prilly joined #gluster
00:45 swebb joined #gluster
01:12 kdhananjay joined #gluster
01:16 meghanam joined #gluster
01:21 PaulCuzner joined #gluster
01:21 nangthang joined #gluster
01:23 soumya joined #gluster
01:25 thangnn_ joined #gluster
01:48 ilbot3 joined #gluster
01:48 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:57 kdhananjay joined #gluster
02:03 harish_ joined #gluster
02:05 Supermathie joined #gluster
02:22 DV_ joined #gluster
02:44 DV_ joined #gluster
02:54 soumya joined #gluster
02:56 rjoseph|afk joined #gluster
03:03 shubhendu joined #gluster
03:06 kdhananjay joined #gluster
03:07 DV_ joined #gluster
03:13 overclk joined #gluster
03:28 shubhendu joined #gluster
03:29 atinmu joined #gluster
03:35 itisravi joined #gluster
03:46 sakshi joined #gluster
03:47 nishanth joined #gluster
03:53 [7] joined #gluster
03:57 meghanam joined #gluster
03:59 meghanam joined #gluster
03:59 RameshN joined #gluster
04:00 kanagaraj joined #gluster
04:09 nbalacha joined #gluster
04:17 vimal joined #gluster
04:20 RameshN joined #gluster
04:37 poornimag joined #gluster
04:41 lexi2 joined #gluster
04:43 deepakcs joined #gluster
04:44 glusterbot News from newglusterbugs: [Bug 1219732] brick-op failure for glusterd command should log error message in cmd_history.log <https://bugzilla.redhat.com/show_bug.cgi?id=1219732>
04:47 gem joined #gluster
04:48 ppai joined #gluster
04:54 shubhendu joined #gluster
04:56 soumya joined #gluster
04:56 julim joined #gluster
04:59 karnan joined #gluster
05:00 anil joined #gluster
05:07 Apeksha joined #gluster
05:09 shubhendu_ joined #gluster
05:11 ndarshan joined #gluster
05:13 sripathi joined #gluster
05:14 lalatenduM joined #gluster
05:15 raghu joined #gluster
05:16 kshlm joined #gluster
05:17 bharata-rao joined #gluster
05:18 spandit joined #gluster
05:19 Debloper joined #gluster
05:19 Manikandan joined #gluster
05:20 gem_ joined #gluster
05:21 ashiq joined #gluster
05:31 hagarth joined #gluster
05:34 bharata_ joined #gluster
05:36 rafi joined #gluster
05:37 Telsin joined #gluster
05:39 spiekey joined #gluster
05:42 schandra joined #gluster
05:43 maveric_amitc_ joined #gluster
05:46 pppp joined #gluster
05:49 kdhananjay joined #gluster
05:54 jiffin joined #gluster
05:54 vimal joined #gluster
05:55 atalur joined #gluster
05:56 kumar joined #gluster
05:59 schandra joined #gluster
06:07 saurabh_ joined #gluster
06:15 glusterbot News from newglusterbugs: [Bug 1219744] [SNAPSHOT]: activate and deactivate doesn't do a handshake when a glusterd comes  back <https://bugzilla.redhat.com/show_bug.cgi?id=1219744>
06:15 jtux joined #gluster
06:16 anrao joined #gluster
06:27 autoditac joined #gluster
06:31 meghanam joined #gluster
06:31 hagarth joined #gluster
06:32 Philambdo joined #gluster
06:33 DV_ joined #gluster
06:33 Debloper joined #gluster
06:39 ashiq joined #gluster
06:39 kotreshhr joined #gluster
06:48 Iodun joined #gluster
06:55 SOLDIERz joined #gluster
06:57 soumya joined #gluster
06:58 nangthang joined #gluster
07:08 [Enrico] joined #gluster
07:11 atinmu :1
07:26 rafi joined #gluster
07:29 rafi1 joined #gluster
07:42 LebedevRI joined #gluster
07:44 soumya joined #gluster
07:44 ctria joined #gluster
07:47 fsimonce joined #gluster
08:02 hagarth joined #gluster
08:07 DV joined #gluster
08:10 _shaps_ joined #gluster
08:11 crashmag joined #gluster
08:16 al joined #gluster
08:23 aravindavk joined #gluster
08:24 MrAbaddon joined #gluster
08:45 sakshi joined #gluster
08:50 dusmant joined #gluster
08:53 soumya joined #gluster
08:54 spalai joined #gluster
08:56 Norky joined #gluster
08:56 deniszh joined #gluster
08:56 kdhananjay joined #gluster
08:56 anrao joined #gluster
09:06 poornimag joined #gluster
09:08 ws2k3 joined #gluster
09:28 meghanam joined #gluster
09:30 kovshenin joined #gluster
09:32 ashiq joined #gluster
09:34 hgowtham joined #gluster
09:36 nsoffer joined #gluster
09:36 sdb_ joined #gluster
09:44 hgowtham joined #gluster
09:45 glusterbot News from newglusterbugs: [Bug 1219787] package glupy as a subpackage under gluster namespace. <https://bugzilla.redhat.com/show_bug.cgi?id=1219787>
09:52 autoditac joined #gluster
09:54 shubhendu__ joined #gluster
09:54 kotreshhr joined #gluster
10:01 Manikandan joined #gluster
10:02 kovsheni_ joined #gluster
10:06 ira joined #gluster
10:07 kshlm joined #gluster
10:07 ira joined #gluster
10:08 kovshenin joined #gluster
10:08 atinmu joined #gluster
10:08 shubhendu_ joined #gluster
10:08 MrAbaddon joined #gluster
10:09 dusmant joined #gluster
10:09 nishanth joined #gluster
10:09 hagarth joined #gluster
10:10 atalur joined #gluster
10:11 poornimag joined #gluster
10:15 kovshenin joined #gluster
10:20 kovshenin joined #gluster
10:21 eljrax Is glupy still a thing? All I can find dates back to 2013
10:23 hagarth eljrax: yes, that is supported
10:23 kovshenin joined #gluster
10:24 eljrax Ok, grand. Good to know, thanks!
10:26 rafi joined #gluster
10:31 kovshenin joined #gluster
10:35 nsoffer joined #gluster
10:38 nishanth joined #gluster
10:38 nbalacha joined #gluster
10:38 rafi joined #gluster
10:49 stickyboy joined #gluster
10:51 kovshenin joined #gluster
10:57 kovshenin joined #gluster
11:00 kovsheni_ joined #gluster
11:00 meghanam joined #gluster
11:02 harish_ joined #gluster
11:03 kovshen__ joined #gluster
11:06 kovshenin joined #gluster
11:07 hgowtham joined #gluster
11:08 atalur joined #gluster
11:12 poornimag joined #gluster
11:14 autoditac joined #gluster
11:15 kovshenin joined #gluster
11:21 soumya joined #gluster
11:22 bene2 joined #gluster
11:24 kovshenin joined #gluster
11:26 atinmu joined #gluster
11:26 DV joined #gluster
11:31 kotreshhr joined #gluster
11:32 ppai joined #gluster
11:41 jcastill1 joined #gluster
11:42 rafi1 joined #gluster
11:44 sabansal_ joined #gluster
11:44 schandra joined #gluster
11:45 Arminder joined #gluster
11:46 hagarth joined #gluster
11:46 glusterbot News from newglusterbugs: [Bug 1219823] [georep]: Creating geo-rep session kills all the brick process <https://bugzilla.redhat.com/show_bug.cgi?id=1219823>
11:46 jcastillo joined #gluster
11:48 anrao joined #gluster
11:51 jmarley joined #gluster
11:55 overclk joined #gluster
11:58 rafi joined #gluster
11:59 morse joined #gluster
12:00 rafi1 joined #gluster
12:12 kovsheni_ joined #gluster
12:17 kovshenin joined #gluster
12:18 hagarth joined #gluster
12:19 itisravi joined #gluster
12:29 kovsheni_ joined #gluster
12:30 anrao joined #gluster
12:31 kovshenin joined #gluster
12:33 jcastill1 joined #gluster
12:38 jcastillo joined #gluster
12:38 pdrakeweb joined #gluster
12:39 kovsheni_ joined #gluster
12:40 rafi joined #gluster
12:42 kovshenin joined #gluster
12:45 kovshenin joined #gluster
12:46 spalai left #gluster
12:46 kovshenin joined #gluster
12:53 bennyturns joined #gluster
12:54 kovshenin joined #gluster
12:55 firemanxbr joined #gluster
12:56 nbalacha joined #gluster
12:57 kovshenin joined #gluster
12:57 rafi1 joined #gluster
12:59 kovshenin joined #gluster
13:00 ppai joined #gluster
13:00 anrao joined #gluster
13:01 kovshenin joined #gluster
13:06 soumya joined #gluster
13:07 overclk joined #gluster
13:09 kovshenin joined #gluster
13:10 gem joined #gluster
13:10 hchiramm joined #gluster
13:11 kovsheni_ joined #gluster
13:15 kovshenin joined #gluster
13:16 glusterbot News from newglusterbugs: [Bug 1219848] Directories are missing on the mount point after attaching tier to distribute replicate volume. <https://bugzilla.redhat.com/show_bug.cgi?id=1219848>
13:16 glusterbot News from newglusterbugs: [Bug 1219842] [RFE] Data Tiering:Need a way from CLI to identify hot and cold tier bricks easily <https://bugzilla.redhat.com/show_bug.cgi?id=1219842>
13:16 glusterbot News from newglusterbugs: [Bug 1219843] [RFE] Data Tiering:Need a way from CLI to identify hot and cold tier bricks easily <https://bugzilla.redhat.com/show_bug.cgi?id=1219843>
13:16 glusterbot News from newglusterbugs: [Bug 1219845] tiering: cksum mismach for tiered volume. <https://bugzilla.redhat.com/show_bug.cgi?id=1219845>
13:16 glusterbot News from newglusterbugs: [Bug 1219846] Data Tiering: glusterd(management) communication issues seen on tiering setup <https://bugzilla.redhat.com/show_bug.cgi?id=1219846>
13:18 nbalacha joined #gluster
13:19 kovsheni_ joined #gluster
13:26 theron joined #gluster
13:27 georgeh-LT2 joined #gluster
13:27 anrao joined #gluster
13:29 Bardack joined #gluster
13:29 kovshenin joined #gluster
13:31 Bardack joined #gluster
13:34 dgandhi joined #gluster
13:37 anrao_ ndevos ++
13:37 anrao_ ndevos++
13:37 glusterbot anrao_: ndevos's karma is now 14
13:37 hamiller joined #gluster
13:38 ndevos anrao++
13:38 glusterbot ndevos: anrao's karma is now 1
13:39 squizzi joined #gluster
13:43 kovsheni_ joined #gluster
13:46 glusterbot News from newglusterbugs: [Bug 1219850] Data Tiering: attaching a tier with non supported replica count crashes glusterd on local host <https://bugzilla.redhat.com/show_bug.cgi?id=1219850>
13:51 plarsen joined #gluster
13:54 atalur joined #gluster
13:59 lpabon joined #gluster
14:00 kovshenin joined #gluster
14:08 ghenry joined #gluster
14:09 shaunm joined #gluster
14:12 eljrax I just brought another server up, and went from 1x2 to 1x3, but the syncing of the brick is dead slow. Like 300 kb/s. Is this intentional to save resources, and we rely on on-demand healing?
14:17 kovshenin joined #gluster
14:20 wushudoin joined #gluster
14:24 kovshenin joined #gluster
14:27 kovshenin joined #gluster
14:31 rafi joined #gluster
14:31 soumya joined #gluster
14:32 kovsheni_ joined #gluster
14:34 jobewan joined #gluster
14:35 kovshenin joined #gluster
14:36 squizzi joined #gluster
14:39 kovshenin joined #gluster
14:41 kovshenin joined #gluster
14:47 jobewan joined #gluster
14:51 kovshenin joined #gluster
14:56 meghanam joined #gluster
14:59 kovsheni_ joined #gluster
15:07 georgeh-LT2 joined #gluster
15:10 Arminder joined #gluster
15:10 diphen joined #gluster
15:11 Arminder joined #gluster
15:11 kovshenin joined #gluster
15:12 Arminder joined #gluster
15:13 Arminder joined #gluster
15:14 diphen is it possible to deploy glusterfs in a hub-and-spoke replica topology, and all bricks mounted on a client as a single mount/directory structure?
15:16 Arminder joined #gluster
15:17 Arminder joined #gluster
15:17 diphen e.g., spoke{1,2}:/data/spoke{1,2} -> hub:/data/spoke{1,2} -> client:/data
15:18 Arminder joined #gluster
15:19 diphen problem is, i need the root of both spokes to be in client:/data, and now have spoke{1,2} as different mount points on the client
15:19 diphen s/now/not
15:20 atalur joined #gluster
15:23 hagarth joined #gluster
15:23 kdhananjay joined #gluster
15:24 diphen or, possibly have two spokes do one-way replication to a single volume/root on the hub?
15:25 kovsheni_ joined #gluster
15:30 stickyboy joined #gluster
15:38 Pupeno joined #gluster
15:41 kdhananjay left #gluster
15:43 lalatenduM joined #gluster
15:44 cholcombe joined #gluster
15:44 soumya joined #gluster
15:45 nbalacha joined #gluster
15:46 doekia joined #gluster
15:55 Arminder joined #gluster
16:00 Pupeno joined #gluster
16:01 jobewan joined #gluster
16:09 kotreshhr left #gluster
16:35 ale joined #gluster
16:36 ale Hello
16:36 glusterbot ale: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
16:38 Rapture joined #gluster
16:38 DV joined #gluster
16:39 poornimag joined #gluster
16:41 ale I have a problem with gluster. We deployed two servers running gluster, and we wrote a very large amount of small files in it, organized in directories. We have about 10000K directories right now. Write operations are triggered by a client (glusterfs version 3.4.4), and it writes only in a subset of the directories, depending on the nature of incoming data. The problem is that after a while, memory
16:42 ale consumption goes up to 25GB. I tried the command "echo 3 > /proc/sys/vm/drop_caches" but it does not help. Is this a known issue with the 3.4.x version of gluster? If so, has it been solved in newer versions?
16:43 bennyturns ale, 3.4 is pretty old there could be a mem leak or other problem.  Are you running into swap?
16:43 bennyturns any thing in messages about OOMs?
16:43 atinmu joined #gluster
16:44 ale bennyturns, thanks for the reply. Luckily I'm not into swap, since the client is running on a server with enough memory. It doesn't look like there is any message about OOM
16:44 bennyturns ale, running into swap, ie look in top and seee if swap is used, check the /var/log/messages file for errors about out of memory
16:45 ale No error in logs, I confirm it! I just re-checked it...
16:48 bennyturns ale, until you start hitting swap you should be fine, I would monitor.  I don't know the 3.4x versions well enough to know if any versions were problematic
16:49 bennyturns I gotta run though I'm sure someone will have some better advice for ya
16:49 ale benny, the server is deployed to do several other tasks. Thus, even if we are not in swap right now, I cannot be sure that I won't during the night... that's not a good situation :(
16:49 ale ok, thank you for your time!
16:53 ale quit
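bennyturns's advice above ("until you start hitting swap you should be fine, I would monitor") can be automated. The following is a minimal illustrative Python sketch for Linux that reads a process's resident set size from /proc; in practice you would pass the PID of the glusterfs client process rather than your own:

```python
import os

def rss_kib(pid):
    """Return the resident set size (VmRSS) of `pid` in KiB,
    read from /proc/<pid>/status on Linux."""
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                # Line looks like: "VmRSS:     12345 kB"
                return int(line.split()[1])
    return None  # kernel threads carry no VmRSS entry

if __name__ == "__main__":
    # Demonstrate on the current process; substitute the glusterfs PID.
    print(f"RSS: {rss_kib(os.getpid())} KiB")
```

Run in a loop (or from cron) and alert when the value keeps climbing, which is how a leak like the one ale describes would show up before swap is touched.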
17:09 jcastill1 joined #gluster
17:14 jcastillo joined #gluster
17:23 kumar joined #gluster
17:43 pdrakewe_ joined #gluster
17:48 RameshN joined #gluster
17:59 itspete joined #gluster
18:03 itspete I've got an interesting situation and I hope someone will be able to point me in the right direction...  I have a Gluster volume with a single brick with about 1.6TB of data in it, however the du is reporting that the folder in which the brick resides is using up 3.3TB and I am at a loss as to where that extra 1.7TB is hiding.  Is it possible that the .glusterfs folder is somehow corrupt and...
18:03 itspete ...if so, can I safely remove it and count on Gluster to rebuild it?
18:18 rafi joined #gluster
18:34 ndevos itspete: the files under the .glusterfs directory should be hardlinked to the real contents on the brick, re-creating it is rather an expensive operation
18:34 ndevos itspete: however, you could check if there are files in that directory that have only one link in the inode
18:34 ndevos itspete: something like this would show them: find /path/to/brick/.glusterfs -type f -links 1
18:35 itspete Thanks, I'll give that a go
18:35 ndevos itspete: write the output to a file, check if the files listed really are like the GFID files (not indexes or something) and you should be able to delete them
18:35 itspete Cool, thanks
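ndevos's `find ... -links 1` check works because every GFID file under .glusterfs should be a hardlink to a real file on the brick, so its inode link count should be at least 2; a count of 1 means the brick-side file is gone and the GFID copy is orphaned space. The same check can be done programmatically. A small illustrative Python sketch (the paths here are throwaway temp files, not a real brick):

```python
import os
import tempfile

def orphaned(path):
    """True if a regular file has only one hardlink, i.e. no
    companion link elsewhere -- the condition ndevos's find
    command looks for under .glusterfs."""
    return os.stat(path).st_nlink == 1

# Demonstration on throwaway files rather than a real brick:
with tempfile.TemporaryDirectory() as d:
    lone = os.path.join(d, "lone")      # no extra hardlink
    paired = os.path.join(d, "paired")  # will get a second link
    for p in (lone, paired):
        open(p, "w").close()
    # Mimic the .glusterfs GFID hardlink for one of the files.
    os.link(paired, os.path.join(d, "paired-gfid"))
    print(orphaned(lone), orphaned(paired))  # True False
```

As ndevos cautions, on a real brick you would still eyeball the candidate list before deleting anything, since index and internal files can legitimately have a single link.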
18:38 Slashman joined #gluster
18:53 lalatenduM joined #gluster
18:53 georgeh-LT2 joined #gluster
18:56 Rapture joined #gluster
19:04 jiffin joined #gluster
19:17 georgeh-LT2 joined #gluster
19:23 jiffin joined #gluster
19:40 rafi1 joined #gluster
19:48 glusterbot News from newglusterbugs: [Bug 1219951] Should not migrate linkto and commit-hash xattrs on tier promotion/demotion <https://bugzilla.redhat.com/show_bug.cgi?id=1219951>
19:48 glusterbot News from newglusterbugs: [Bug 1219953] The python-gluster package should be 'noarch' <https://bugzilla.redhat.com/show_bug.cgi?id=1219953>
19:48 glusterbot News from newglusterbugs: [Bug 1219954] The python-gluster package should be 'noarch' <https://bugzilla.redhat.com/show_bug.cgi?id=1219954>
20:07 shaunm joined #gluster
20:27 DV joined #gluster
20:31 rafi joined #gluster
20:36 rafi joined #gluster
21:23 rafi joined #gluster
21:40 Pupeno joined #gluster
21:47 amanjain110893 joined #gluster
21:50 glusterbot News from resolvedglusterbugs: [Bug 1208452] installing client packages or glusterfs-api requires libgfdb.so <https://bugzilla.redhat.com/show_bug.cgi?id=1208452>
21:52 amanjain110893 I am a newbie here, I want to see my chat history, how can I do that?
21:53 JoeJulian Most IRC clients have their own way of doing that. Beyond that, if you look at the topic there are two different channel loggers we use to allow you to search back.
21:53 JoeJulian ie. /topic
21:54 shaunm joined #gluster
21:55 amanjain110893 got it here https://botbot.me/freenode/gluster-dev/
22:07 rafi joined #gluster
22:16 jackdpeterson joined #gluster
22:17 jobewan joined #gluster
22:18 glusterbot News from newglusterbugs: [Bug 1165938] Fix regression test spurious failures <https://bugzilla.redhat.com/show_bug.cgi?id=1165938>
22:20 glusterbot News from resolvedglusterbugs: [Bug 1219843] [RFE] Data Tiering:Need a way from CLI to identify hot and cold tier bricks easily <https://bugzilla.redhat.com/show_bug.cgi?id=1219843>
22:21 jiffin1 joined #gluster
22:23 Pupeno joined #gluster
22:24 corretico joined #gluster
22:35 jiffin joined #gluster
22:44 julim joined #gluster
23:38 redbeard joined #gluster
23:50 plarsen joined #gluster
