
IRC log for #gluster, 2015-09-16


All times shown according to UTC.

Time Nick Message
00:17 DavidVargese joined #gluster
00:18 Akee1 joined #gluster
00:21 shortdudey123_ joined #gluster
00:21 cliluw joined #gluster
00:22 gildub joined #gluster
00:22 Larsen_ joined #gluster
00:22 portante joined #gluster
00:28 muneerse joined #gluster
00:28 mrrrgn_ joined #gluster
00:30 cabillman_ joined #gluster
00:30 paescuj joined #gluster
00:31 wonko2 joined #gluster
00:31 VeggieMeat_ joined #gluster
00:33 frankS2_ joined #gluster
00:36 n-st_ joined #gluster
00:39 PatNarciso joined #gluster
00:40 beeradb joined #gluster
00:42 ackjewt joined #gluster
00:42 RobertLaptop joined #gluster
00:42 milkyline_ joined #gluster
00:43 billputer joined #gluster
01:08 beeradb joined #gluster
01:15 baojg joined #gluster
01:23 DV joined #gluster
01:38 baojg joined #gluster
01:45 Lee1092 joined #gluster
01:47 ilbot3 joined #gluster
01:47 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:57 plarsen joined #gluster
02:10 qubozik joined #gluster
02:14 17SADKRRJ joined #gluster
02:30 ilbot3 joined #gluster
02:30 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:31 rafi joined #gluster
02:33 RobertLaptop joined #gluster
02:34 ackjewt joined #gluster
02:35 calisto joined #gluster
02:43 dgandhi joined #gluster
02:43 haomaiwa_ joined #gluster
02:52 haomaiwa_ joined #gluster
02:57 DV joined #gluster
03:01 haomaiwa_ joined #gluster
03:04 TheSeven joined #gluster
03:09 pppp joined #gluster
03:22 baojg joined #gluster
03:26 shubhendu joined #gluster
03:35 nishanth joined #gluster
03:38 legreffier joined #gluster
03:41 atinm joined #gluster
03:50 Bhaskarakiran joined #gluster
03:58 itisravi joined #gluster
03:58 haomaiwa_ joined #gluster
04:01 haomaiwang joined #gluster
04:04 kanagaraj joined #gluster
04:07 pppp joined #gluster
04:07 kdhananjay joined #gluster
04:15 RameshN joined #gluster
04:23 yazhini joined #gluster
04:27 overclk joined #gluster
04:31 rafi joined #gluster
04:32 neha joined #gluster
04:35 itisravi joined #gluster
04:37 qubozik joined #gluster
04:40 baojg joined #gluster
04:40 gem joined #gluster
04:46 ramteid joined #gluster
04:46 ndarshan joined #gluster
04:53 vimal joined #gluster
04:53 jiffin joined #gluster
04:59 atalur joined #gluster
05:01 haomaiwa_ joined #gluster
05:11 Manikandan joined #gluster
05:16 skoduri joined #gluster
05:20 poornimag joined #gluster
05:22 Bhaskarakiran joined #gluster
05:24 Bhaskarakiran joined #gluster
05:24 ramky joined #gluster
05:25 itisravi_ joined #gluster
05:27 R0ok_ joined #gluster
05:28 hgowtham joined #gluster
05:35 ppai joined #gluster
05:36 vmallika joined #gluster
05:36 vmallika joined #gluster
05:46 ashiq joined #gluster
05:49 hgowtham joined #gluster
05:50 nishanth joined #gluster
06:00 shubhendu joined #gluster
06:02 haomaiwang joined #gluster
06:06 EinstCrazy joined #gluster
06:11 deepakcs joined #gluster
06:13 raghu joined #gluster
06:15 rjoseph joined #gluster
06:19 sankarshan_away joined #gluster
06:22 jtux joined #gluster
06:23 mhulsman joined #gluster
06:23 onorua joined #gluster
06:27 ctria joined #gluster
06:31 Saravana_ joined #gluster
06:32 rgustafs joined #gluster
06:35 nbalacha joined #gluster
06:35 jwd joined #gluster
06:36 shubhendu joined #gluster
06:36 free_amitc_ joined #gluster
06:36 maveric_amitc_ joined #gluster
06:37 pppp joined #gluster
06:41 nishanth joined #gluster
06:41 muneerse2 joined #gluster
06:42 ppp joined #gluster
06:47 nangthang joined #gluster
06:49 ctria joined #gluster
06:51 amitc__ joined #gluster
06:51 atalur joined #gluster
06:54 David_Vargese joined #gluster
07:00 schandra joined #gluster
07:02 haomaiwa_ joined #gluster
07:04 qubozik joined #gluster
07:07 [Enrico] joined #gluster
07:16 jcastill1 joined #gluster
07:20 itisravi joined #gluster
07:20 fsimonce joined #gluster
07:22 jcastillo joined #gluster
07:22 prg3 joined #gluster
07:24 haomaiwa_ joined #gluster
07:24 free_amitc_ joined #gluster
07:24 RobertLaptop joined #gluster
07:25 David_Vargese joined #gluster
07:25 vmallika joined #gluster
07:25 ramteid joined #gluster
07:25 gem joined #gluster
07:25 rafi joined #gluster
07:25 dgandhi joined #gluster
07:25 ackjewt joined #gluster
07:27 gorfel joined #gluster
07:30 gorfel I have created a distributed-replicated 2x2=4 volume where the first brick already has data on it. This data is not replicated nor seen on the client. Is there a way to force glusterfs to "scan" the data-bearing brick to index the files and replicate/rebalance them afterwards?
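One commonly suggested approach here, assuming the data-bearing brick is already one of the replica bricks, is to trigger a full self-heal crawl and then a rebalance. This is only a sketch with a placeholder volume name, not a recipe verified against this particular setup:

    # ask the self-heal daemon to do a full crawl so files that exist on only
    # one brick get indexed and copied to the other replica
    gluster volume heal <volname> full

    # then redistribute files across the distribute subvolumes
    gluster volume rebalance <volname> start
    gluster volume rebalance <volname> status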
07:34 pppp joined #gluster
07:34 atalur joined #gluster
07:40 streppel i replaced the NIC in node2 and it works perfectly now. thanks for helping out :)
07:41 Manikandan joined #gluster
07:43 Pupeno joined #gluster
07:49 arcolife joined #gluster
08:01 haomaiwang joined #gluster
08:03 yazhini joined #gluster
08:08 Bhaskarakiran joined #gluster
08:11 RedW joined #gluster
08:21 LebedevRI joined #gluster
08:23 Slashman joined #gluster
08:24 onorua joined #gluster
08:25 [Enrico] joined #gluster
08:27 Pupeno joined #gluster
08:40 haomaiwa_ joined #gluster
08:48 Saravana_ joined #gluster
08:53 kbyrne joined #gluster
08:57 ctria joined #gluster
09:02 haomaiwa_ joined #gluster
09:04 anti[Enrico] joined #gluster
09:18 PatNarciso joined #gluster
09:19 ctria joined #gluster
09:20 spalai joined #gluster
09:27 Manikandan joined #gluster
09:34 Bhaskarakiran joined #gluster
09:34 poornimag joined #gluster
09:34 skoduri joined #gluster
09:35 astilla joined #gluster
09:37 anil joined #gluster
09:40 muneerse joined #gluster
09:44 ashiq joined #gluster
09:47 ctria joined #gluster
09:47 jcastill1 joined #gluster
09:47 deni joined #gluster
09:47 deni hi
09:47 glusterbot deni: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
09:48 baojg joined #gluster
09:49 deni I'm using libgfapi (python bindings to be more specific) and am deploying a gluster server with 2 replicas. with the native (fuse) client the failover mechanics were automatic but I'm not sure if this is the case when using the api. Can someone point me in the right direction of docs about this?
09:49 deni or if there aren't any docs (because I'm failing to find them) can you tell me how the failover should work?
09:52 jcastillo joined #gluster
09:58 Bhaskarakiran joined #gluster
09:58 MaxGashkov joined #gluster
09:59 atinm deni, contact ppai
09:59 deni ppai: ?
09:59 pppp joined #gluster
09:59 ppai deni, yes it is automatic just like the fuse client
10:00 deni ppai: even though i specify host and port when connecting to the "main" server via the python bindings?
10:00 DV joined #gluster
10:01 ppai deni, that is correct
10:01 pppp joined #gluster
10:01 haomaiwa_ joined #gluster
10:02 ppai deni, the host and port should be up when you do the virtual mount
10:02 deni ppai: I see...but later if that host dies it will know how to connect to either of the remaining ones
10:03 pppp joined #gluster
10:05 ppai deni, the failover is expected to work seamlessly later on, during I/O.
10:06 ppai deni, the host and port you specify are only used initially to fetch the volume information and perform the virtual mount
10:06 pppp joined #gluster
10:06 deni ppai: I see. That makes sense. Thanks for help!
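A minimal sketch of the flow described above, assuming a libgfapi-python release that exposes gfapi.Volume with mount(), fopen() and umount(); the server name, volume name and file name are placeholders. The point is that the given host is only contacted to fetch the volume layout for the virtual mount, after which I/O goes to the bricks and failover is handled by the client stack, just as with the FUSE mount:

    from gluster import gfapi

    # the host (and optional port) are only used to fetch the volume layout
    # and perform the virtual mount; they do not pin later I/O to that server
    vol = gfapi.Volume("server1.example.com", "myvol")
    vol.mount()

    # subsequent I/O talks to the bricks directly, so losing server1 after
    # this point should not break the client
    with vol.fopen("hello.txt", "w") as f:
        f.write("written via libgfapi")

    vol.umount()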
10:07 rarrr joined #gluster
10:07 kdhananjay *** REMINDER: Weekly Gluster Community Meeting starts in #gluster-meeting on Freenode in 2 hours from now ***
10:10 rarrr on gluster 3.3, after enabling quota on a volume (with gluster volume quota * enable), and configuring quota on a directory inside of that volume (with gluster volume quota * limit-usage <relative_path_to_the_volume_root> ...), although I see the quota configuration with gluster volume quota * list, the quota is not being enforced (I can write whatever size I want there), what could be causing this?
10:12 pppp joined #gluster
10:16 jwaibel joined #gluster
10:17 pppp joined #gluster
10:19 pppp joined #gluster
10:21 rafi rarrr: gluster 3.3 is a bit of an old version, and it is no longer supported
10:21 rafi rarrr: I would recommend you to upgrade
10:21 rarrr rafi, thanks
10:21 rafi Manikandan: ^
10:22 Manikandan rafi,
10:22 rafi rarrr: by the way, do you have any thing to share from the log
10:23 muneerse2 joined #gluster
10:27 rarrr rafi, I've found that the issue is related to the hard-timeout: if you write files too fast, the default hard-timeout is too high. Setting it to a lower value does the trick
10:28 rarrr rafi, thanks for your help
10:28 rafi rarrr: cool
10:28 rafi rarrr: I guess the default time out is 60 seconds
10:29 rafi rarrr: but setting it to a lower value will make the file system crawl the directory very frequently
10:30 rafi rarrr: I'm not sure, just guessing
10:30 rarrr rafi, in 2.x it was 5s
10:30 rarrr rafi, k
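For reference, the quota commands referred to above look roughly like this, with the volume name and directory as placeholders. The enable/limit-usage/list lines follow what was pasted in the question; the timeout line is an assumption about the subcommand name, which has varied across releases, with the value in seconds:

    # enable quota and put a limit on a directory, relative to the volume root
    gluster volume quota <volname> enable
    gluster volume quota <volname> limit-usage /projects 10GB
    gluster volume quota <volname> list

    # lower the enforcement hard-timeout so fast writers hit the limit sooner
    # (subcommand name assumed; check the quota usage output on your version)
    gluster volume quota <volname> hard-timeout 5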
10:31 baojg joined #gluster
10:33 harish joined #gluster
10:35 Bhaskarakiran joined #gluster
10:36 Manikandan joined #gluster
10:36 ccoffey joined #gluster
10:38 Apeksha joined #gluster
10:39 pkoro joined #gluster
10:39 ccoffey Hello all. I have a volume with 8 peers. I have one peer, x, that I need to remove and re-add, and at the same time I have issues on another peer, y. volume status returns "Locking failed on y..." and no more. Am I correct in assuming I need to fix that issue on y first before I can do the management on x?
10:43 natarej_ joined #gluster
10:45 ramky joined #gluster
11:01 haomaiwa_ joined #gluster
11:15 Saravana_ joined #gluster
11:15 ccoffey i.e. I'd like to do a peer detach and attach before fixing the issue relating to the locking failure
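What is being described usually ends up looking like the sketch below; hostnames are placeholders, and the comments are assumptions about a typical setup rather than advice verified for this cluster:

    # on y: the "Locking failed" message often clears after restarting the
    # management daemon; brick processes are separate and keep running
    service glusterd restart

    # then detach and re-probe x; force lets the detach go through even if x
    # is unreachable, but any bricks x still hosts normally have to be
    # removed or replaced first
    gluster peer detach <x> force
    gluster peer probe <x>
    gluster peer status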
11:21 firemanxbr joined #gluster
11:28 julim joined #gluster
11:34 ira joined #gluster
11:38 haomaiwa_ joined #gluster
11:38 DV__ joined #gluster
11:49 rgustafs joined #gluster
11:50 DV joined #gluster
11:50 TheCthulhu3 joined #gluster
11:51 ramky joined #gluster
11:51 spalai joined #gluster
11:52 jdarcy joined #gluster
11:52 B21956 joined #gluster
11:55 TheCthulhu joined #gluster
11:57 amye joined #gluster
11:58 poornimag joined #gluster
12:01 ndevos *REMINDER* Gluster Community Meeting starts now in #gluster-meeting
12:01 haomaiwang joined #gluster
12:12 nishanth joined #gluster
12:14 shubhendu joined #gluster
12:19 jtux joined #gluster
12:20 unclemarc joined #gluster
12:21 ppai joined #gluster
12:22 EinstCrazy joined #gluster
12:26 mhulsman1 joined #gluster
12:27 chan5n joined #gluster
12:28 DV__ joined #gluster
12:30 qubozik joined #gluster
12:43 sage joined #gluster
12:49 VeggieMeat joined #gluster
12:51 sage joined #gluster
12:52 klaxa|work joined #gluster
12:59 shubhendu joined #gluster
13:01 papamoose1 joined #gluster
13:02 spalai left #gluster
13:06 mhulsman joined #gluster
13:10 bennyturns joined #gluster
13:11 calisto joined #gluster
13:11 sage joined #gluster
13:14 pbanh joined #gluster
13:20 DV joined #gluster
13:26 unclemarc joined #gluster
13:26 pbanh joined #gluster
13:28 qubozik joined #gluster
13:29 paescuj hi there. one of our bricks went offline when we issued the following command: gluster volume top <volname> clear. has anybody experienced the same before?
13:29 DavidVargese joined #gluster
13:32 qubozik_ joined #gluster
13:34 ekuric joined #gluster
13:47 dgandhi joined #gluster
13:53 pblyead joined #gluster
14:00 DV joined #gluster
14:02 DV__ joined #gluster
14:03 plarsen joined #gluster
14:04 pppp joined #gluster
14:05 dlambrig joined #gluster
14:12 shyam joined #gluster
14:14 Pupeno joined #gluster
14:20 amye joined #gluster
14:23 ira joined #gluster
14:25 harold joined #gluster
14:25 beeradb joined #gluster
14:25 _maserati joined #gluster
14:30 rjoseph joined #gluster
14:31 qubozik joined #gluster
14:31 neofob joined #gluster
14:35 NEOhidra joined #gluster
14:36 nbalacha joined #gluster
14:36 julim joined #gluster
14:39 haomaiwa_ joined #gluster
14:48 Pupeno joined #gluster
14:50 deepakcs joined #gluster
14:53 NEOhidra left #gluster
14:54 jcastill1 joined #gluster
14:55 shubhendu joined #gluster
14:58 qubozik_ joined #gluster
14:59 jcastillo joined #gluster
15:00 haomaiwang joined #gluster
15:01 haomaiwang joined #gluster
15:08 nishanth joined #gluster
15:09 qubozik joined #gluster
15:11 JoeJulian paescuj: That would have generated a crash report in a log. Please use it to file a bug report.
15:11 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
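If nothing useful shows up, the brick log on the affected server is the next place to look, and an offline brick can usually be brought back without disturbing the healthy ones. Paths and the volume name below are placeholders:

    # brick logs live under /var/log/glusterfs/bricks/ on the brick's host,
    # named after the brick path
    less /var/log/glusterfs/bricks/<brick-path>.log

    # start ... force only (re)starts bricks that are down; running bricks
    # and clients are left alone
    gluster volume start <volname> force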
15:11 beeradb_ joined #gluster
15:15 skoduri joined #gluster
15:21 pblyead_ joined #gluster
15:24 astilla left #gluster
15:26 pblyead joined #gluster
15:37 pblyead Hello there. One of my gluster volumes is currently trying to heal multiple deleted files. The operations fail because the files no longer exist (failed (Stale file handle)). I was wondering, if this happens often enough, could it potentially trigger a kernel panic?
15:41 _shaps_ joined #gluster
15:43 togdon joined #gluster
15:45 JoeJulian Doesn't seem likely.
15:48 cholcombe joined #gluster
15:48 JoeJulian pblyead: You should be able to remove the gfid entries from .glusterfs/indices/xattrop on the brick(s) to make it stop trying to heal them.
15:50 _maserati find .glusterfs -type f -links 1 ?
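A sketch of that clean-up on a brick, with brick path and volume name as placeholders: the xattrop index holds one gfid-named entry per item gluster still wants to heal, so entries for files that no longer exist can be removed, and the find one-liner relies on gfid files under .glusterfs normally having at least two hard links while orphaned ones have only one:

    # what the self-heal daemon still thinks needs healing
    gluster volume heal <volname> info

    # on the brick: the pending-heal index entries mentioned above
    ls /bricks/brick1/.glusterfs/indices/xattrop/

    # gfid files with a single hard link no longer have a real file on the
    # brick pointing at them
    find /bricks/brick1/.glusterfs -type f -links 1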
15:58 wushudoin joined #gluster
16:00 zhangjn joined #gluster
16:01 17SADK50C joined #gluster
16:02 paescuj JoeJulian: No entries in the log files :( we're going to reproduce it tomorrow...
16:04 EinstCrazy joined #gluster
16:06 bennyturns joined #gluster
16:08 EinstCrazy joined #gluster
16:09 EinstCrazy joined #gluster
16:23 squizzi_ joined #gluster
16:28 saltsa joined #gluster
16:29 hagarth joined #gluster
16:37 beeradb_ joined #gluster
17:00 _maserati ah hell i forgot the command to see where my volume's geo replication is pointing?
17:01 _maserati unrecognized word: geo-replication  >.<
17:01 haomaiwa_ joined #gluster
17:06 _maserati Ok, so I got a geo-replication set up to a server that no longer exists. How can I rip the geo-replication out without that slave server online?
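The "unrecognized word" error usually means the CLI did not parse the command as typed: geo-replication is a subcommand of "gluster volume", and on some distributions it also needs the geo-replication package installed. Once it is available, tearing down a session whose slave is gone looks roughly like the sketch below, with master volume, slave host and slave volume as placeholders and the exact stop/delete semantics depending on the release:

    # list configured sessions and where they point
    gluster volume geo-replication status

    # stop and remove the session even though the slave is unreachable
    gluster volume geo-replication <mastervol> <slavehost>::<slavevol> stop force
    gluster volume geo-replication <mastervol> <slavehost>::<slavevol> delete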
17:07 Rapture joined #gluster
17:10 neofob joined #gluster
17:11 qubozik joined #gluster
17:12 qubozik joined #gluster
17:25 veleno joined #gluster
17:26 veleno hello. i’m kind of new to gluster and glusterfs, but i’m trying to give it a spin and deploy it on AWS. apart from http://www.gluster.org/community/documentation/index.php/Getting_started_setup_aws are there any well-known caveats I should be aware of?
17:53 mhulsman joined #gluster
18:04 marlinc joined #gluster
18:12 sixty4k_ joined #gluster
18:12 hagarth joined #gluster
18:53 papamoose joined #gluster
18:54 htrmeira joined #gluster
18:55 shortdudey123 joined #gluster
19:00 Pupeno joined #gluster
19:01 SOLDIERz joined #gluster
19:03 shortdudey123 joined #gluster
19:05 shortdudey123 joined #gluster
19:12 beeradb_ joined #gluster
19:37 mhulsman joined #gluster
20:03 bennyturns joined #gluster
20:24 PaulCuzner joined #gluster
20:25 cyberbootje joined #gluster
20:27 DV joined #gluster
20:31 veleno joined #gluster
20:32 PaulCuzner joined #gluster
20:42 mhulsman joined #gluster
21:03 jbrooks joined #gluster
21:39 _maserati_ joined #gluster
21:41 marlinc joined #gluster
22:07 DV__ joined #gluster
22:20 cliluw joined #gluster
22:28 badone joined #gluster
22:30 togdon joined #gluster
22:47 gildub joined #gluster
23:27 Pupeno joined #gluster
23:43 edwardm61 joined #gluster
