
IRC log for #gluster, 2014-09-18


All times are shown in UTC.

Time Nick Message
00:03 edong23_ coredumb: what is nfs-ganesha?
00:10 elico joined #gluster
00:38 gildub joined #gluster
00:40 gomikemike back
00:45 _Bryan_ joined #gluster
00:53 elico joined #gluster
01:38 kdhananjay joined #gluster
01:50 doubt01 joined #gluster
01:51 doubt01 hi guys, can anyone point me to an explanation of bricks? can bricks be a loop device or a raw disk partition/device? which one is better?
02:03 plarsen joined #gluster
02:03 haomaiwa_ joined #gluster
02:06 haomaiw__ joined #gluster
02:09 harish joined #gluster
02:12 haomaiwa_ joined #gluster
02:18 haomai___ joined #gluster
02:27 haomaiwa_ joined #gluster
02:33 haomaiw__ joined #gluster
02:38 wgao joined #gluster
02:44 hagarth joined #gluster
02:49 kdhananjay joined #gluster
02:53 bharata-rao joined #gluster
02:53 kshlm joined #gluster
02:55 wgao joined #gluster
03:05 primemin1sterp joined #gluster
03:10 rejy joined #gluster
03:11 nbalachandran joined #gluster
03:15 nbalachandran joined #gluster
03:39 shubhendu joined #gluster
03:43 jobewan joined #gluster
03:46 kanagaraj joined #gluster
03:56 RameshN joined #gluster
03:57 gildub joined #gluster
04:19 jobewan joined #gluster
04:26 ekuric joined #gluster
04:29 kumar joined #gluster
04:30 atinmu joined #gluster
04:32 lalatenduM joined #gluster
04:35 anoopcs joined #gluster
04:35 meghanam joined #gluster
04:37 RameshN joined #gluster
04:38 rafi1 joined #gluster
04:38 Rafi_kc joined #gluster
04:46 spandit joined #gluster
04:47 ndarshan joined #gluster
04:49 elico joined #gluster
04:51 dusmant joined #gluster
04:57 mariusp joined #gluster
05:03 dusmant joined #gluster
05:09 kshlm joined #gluster
05:11 aravindavk joined #gluster
05:13 ramteid joined #gluster
05:16 prasanth_ joined #gluster
05:25 hagarth joined #gluster
05:28 raghu joined #gluster
05:30 sputnik13 joined #gluster
05:38 zerick joined #gluster
05:41 mariusp joined #gluster
05:41 shubhendu joined #gluster
05:42 RaSTar joined #gluster
05:46 kdhananjay joined #gluster
05:57 soumya__ joined #gluster
06:11 jiffin joined #gluster
06:14 glusterbot New news from newglusterbugs: [Bug 1143835] dht crashed on running regression with floating point exception <https://bugzilla.redhat.com/show_bug.cgi?id=1143835> || [Bug 1038866] [FEAT] command to rename peer hostname <https://bugzilla.redhat.com/show_bug.cgi?id=1038866>
06:14 ppai joined #gluster
06:21 nshaikh joined #gluster
06:22 glusterbot New news from resolvedglusterbugs: [Bug 1121822] Cmockery2 is being linked against gluster applications <https://bugzilla.redhat.com/show_bug.cgi?id=1121822> || [Bug 1049470] Gluster could do with a useful cli utility for updating host definitions <https://bugzilla.redhat.com/show_bug.cgi?id=1049470>
06:31 itisravi joined #gluster
06:34 itisravi_ joined #gluster
06:35 elico joined #gluster
06:42 atalur joined #gluster
06:45 rgustafs joined #gluster
06:52 glusterbot New news from resolvedglusterbugs: [Bug 991035] ACL mask is calculated incorrectly <https://bugzilla.redhat.com/show_bug.cgi?id=991035> || [Bug 998967] gluster 3.4.0 ACL returning different results with entity-timeout=0 and without <https://bugzilla.redhat.com/show_bug.cgi?id=998967>
07:00 doekia joined #gluster
07:01 DJClean joined #gluster
07:01 DJClean joined #gluster
07:12 doekia joined #gluster
07:14 overclk joined #gluster
07:18 hagarth joined #gluster
07:18 gehaxelt joined #gluster
07:24 atinmu joined #gluster
07:25 deepakcs joined #gluster
07:34 tty00 joined #gluster
07:37 natgeorg joined #gluster
07:43 Intensity joined #gluster
07:44 Pupeno joined #gluster
07:48 wgao joined #gluster
07:49 doekia joined #gluster
07:53 saurabh joined #gluster
08:00 gehaxelt joined #gluster
08:00 SteveCooling joined #gluster
08:00 Thilam joined #gluster
08:00 Thilam hi there
08:01 Thilam I have quite a big problem with my gluster install, version 3.5.1
08:01 Thilam 3 hosts, 3 bricks
08:01 Thilam debian 6
08:01 Thilam yesterday I tried to remove a brick
08:02 liquidat joined #gluster
08:02 Thilam this morning the command was successful
08:02 Thilam but !
08:02 Thilam when I run gluster volume info <my_vol>
08:03 Thilam the brick is still there
08:03 karnan joined #gluster
08:03 Thilam when I launch the remove command again, it told me:
08:03 Thilam volume remove-brick commit: failed: Incorrect brick projet3:/glusterfs/projets-brick3/projet for volume projets
08:03 Thilam and when I tried to remove the host from the cluster : peer detach: failed: Brick(s) with the peer projet3 exist in cluster
08:04 Thilam and to finish, gluster volume status:
08:04 Thilam Brick projet1:/glusterfs/projets-brick1/projets         49155   Y       3394
08:04 Thilam Brick projet2:/glusterfs/projets-brick2/projets         49154   Y       2791
08:04 Thilam Brick projet3:/glusterfs/projets-brick3/projets         N/A     N       N/A
08:04 Thilam NFS Server on localhost                                 2049    Y       26879
08:04 Thilam NFS Server on projet2                                   2049    Y       27974
08:04 Thilam NFS Server on projet3                                   2049    Y       8255
08:04 Thilam Task Status of Volume projets
08:04 Thilam --------------------------------------------------------------------------------
08:04 Thilam Task                 : Remove brick
08:04 Thilam ID                   : 7f0a960b-2c0a-4363-a140-c986e5858398
08:04 Thilam Removed bricks:
08:04 Thilam projet3:/glusterfs/projets-brick3/projets
08:04 Thilam Status               : completed
08:04 glusterbot Thilam: ------------------------------------------------------------------------------'s karma is now -1
08:05 Thilam so I'm lost :)
08:06 Thilam I just want to remove this %% brick and this %% host from the gluster FS
08:09 Thilam ndevos?
08:11 elico joined #gluster
08:11 atinmu joined #gluster
08:15 glusterbot New news from newglusterbugs: [Bug 1143880] [FEAT] Exports and Netgroups Authentication for Gluster NFS mount <https://bugzilla.redhat.com/show_bug.cgi?id=1143880> || [Bug 1143886] when brick is down, rdma fuse mounting hangs for volumes with tcp,rdma as transport. <https://bugzilla.redhat.com/show_bug.cgi?id=1143886>
08:15 elico joined #gluster
08:17 elico joined #gluster
08:17 hagarth joined #gluster
08:20 Thilam can no one help me with my brick removal issue?
08:30 Slashman joined #gluster
08:30 bjornar joined #gluster
08:31 Philambdo joined #gluster
08:37 elico joined #gluster
08:39 elico joined #gluster
08:42 ricky-ticky joined #gluster
08:45 glusterbot New news from newglusterbugs: [Bug 1143905] Brick still there after removal <https://bugzilla.redhat.com/show_bug.cgi?id=1143905>
08:45 navid__ joined #gluster
08:45 elico joined #gluster
08:55 vimal joined #gluster
08:58 pkoro joined #gluster
08:59 soumya joined #gluster
09:02 Pupeno joined #gluster
09:02 elico joined #gluster
09:05 elico joined #gluster
09:09 elico joined #gluster
09:14 soumya joined #gluster
09:21 atinmu joined #gluster
09:27 mariusp joined #gluster
09:29 elico joined #gluster
09:40 harish joined #gluster
09:42 soumya joined #gluster
09:46 jmarley joined #gluster
09:46 ndevos Thilam: I think you need to 'remove-brick $BRICK commit' to make it final
09:47 ppai joined #gluster
09:47 ndevos I don't do it that often, so I'm only trying to remember things
09:52 Thilam commit didn't work
09:53 Thilam the brick went offline on the server
09:53 Thilam I made a volume start force
09:53 Thilam then the status went to failed
09:54 Thilam then I made the commit and finally the brick has been removed
09:55 Thilam but I still have some strange behaviour through cifs mounts
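[Editor's note: the sequence Thilam and ndevos converge on above is the standard three-step remove-brick lifecycle; the brick stays listed in "gluster volume info" until the final commit, which matches the stuck state pasted earlier. A minimal sketch, reusing the volume and brick names from the paste:

    # start migrating data off the departing brick
    gluster volume remove-brick projets projet3:/glusterfs/projets-brick3/projets start
    # poll until the migration reports "completed"
    gluster volume remove-brick projets projet3:/glusterfs/projets-brick3/projets status
    # only now make the removal final and drop the brick from volume info
    gluster volume remove-brick projets projet3:/glusterfs/projets-brick3/projets commit
    # once the peer hosts no other bricks, it can be detached
    gluster peer detach projet3
]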
10:05 lalatenduM joined #gluster
10:05 sputnik13 joined #gluster
10:08 pkoro joined #gluster
10:13 rjoseph joined #gluster
10:18 hagarth left #gluster
10:21 atinmu joined #gluster
10:26 ndarshan joined #gluster
10:30 ppai joined #gluster
10:39 bene2 joined #gluster
10:44 gildub joined #gluster
10:45 pkoro joined #gluster
10:45 glusterbot New news from newglusterbugs: [Bug 1143961] [USS]: accessing snapshots via uss creating problems <https://bugzilla.redhat.com/show_bug.cgi?id=1143961>
10:47 edward1 joined #gluster
10:56 kkeithley1 joined #gluster
11:02 nbalachandran joined #gluster
11:04 nbalachandran_ joined #gluster
11:06 mariusp joined #gluster
11:09 soumya joined #gluster
11:11 DV joined #gluster
11:18 overclk joined #gluster
11:19 chirino joined #gluster
11:20 hagarth joined #gluster
11:33 ppai joined #gluster
11:35 diegows joined #gluster
11:51 aravindavk joined #gluster
11:51 soumya_ joined #gluster
11:55 overclk joined #gluster
11:55 LebedevRI joined #gluster
11:57 LHinson joined #gluster
12:01 julim joined #gluster
12:01 Slashman_ joined #gluster
12:02 meghanam joined #gluster
12:06 bennyturns joined #gluster
12:07 kanagaraj joined #gluster
12:07 hagarth joined #gluster
12:13 calum_ joined #gluster
12:14 ws2k333 joined #gluster
12:16 aravindavk joined #gluster
12:18 soumya_ joined #gluster
12:19 hchiramm_ joined #gluster
12:28 ppai joined #gluster
12:40 soumya joined #gluster
12:47 bene2 joined #gluster
12:51 Philambdo joined #gluster
12:57 necrogami joined #gluster
13:01 aravindavk joined #gluster
13:07 coredump joined #gluster
13:13 blu_ joined #gluster
13:14 ppai joined #gluster
13:15 blu_ Hi all, I am attempting to NFS mount a glusterfs volume and getting a permission denied error.  Any idea where I can look for more information?
13:16 blu_ (self-hosted ovirt 3.5 node installation, with glusterfs 3.2 and ctdbd across 3 nodes)
13:19 tdasilva joined #gluster
13:19 hagarth joined #gluster
13:20 theron joined #gluster
13:20 blu_ Any help? Happy to provide more info!
13:27 jmarley joined #gluster
13:31 Kins joined #gluster
13:31 nshaikh joined #gluster
13:31 kkeithley1 glusterfs-3.2? That's seriously old. 3.4.5 and 3.5.1 are current stable. Where are you getting gluster from?
13:32 blu_ Apologies. It is glusterfs 3.5.2 built on Jul 31 2014 18:41:16
13:33 kkeithley1 s/3.5.1/3.5.2/
13:33 glusterbot What kkeithley1 meant to say was: glusterfs-3.2? That's seriously old. 3.4.5 and 3.5.2 are current stable. Where are you getting gluster from?
13:33 kkeithley1 and all your ports are open?
13:33 kkeithley1 ,,(ports)
13:33 glusterbot glusterd's management port is 24007/tcp (also 24008/tcp if you use rdma). Bricks (glusterfsd) use 49152 & up since 3.4.0 (24009 & up previously). (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111 and 2049 since 3.4.
13:34 blu_ firewall-cmd --list-ports 55863/tcp 24009-24108/tcp 49152-49216/tcp 34865-34867/tcp 965/tcp 24007/tcp 4379/tcp 38468/tcp 54321/tcp 39543/tcp 161/udp 50152-50251/tcp 5900-5999/tcp 2049/tcp 38465-38467/tcp 111/tcp 963/udp 22/tcp
13:35 blu_ That was all based off http://community.redhat.com/blog/2014/05/ovirt-3-4-glusterized/ (but running on CentOS 7)
13:35 glusterbot Title: oVirt 3.4, Glusterized Red Hat Open Source Community (at community.redhat.com)
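[Editor's note: a plausible first checklist for the permission-denied NFS mount blu_ describes, assuming the client is talking to gluster's built-in NFS server; host and volume names here are hypothetical. Gluster's NFS server speaks NFSv3 only, so a client that defaults to NFSv4 can be refused even with every port from glusterbot's list open:

    showmount -e gluster1              # the volume should appear in the export list
    rpcinfo -p gluster1                # nfs (2049) and mountd must be registered with rpcbind
    mount -t nfs -o vers=3,mountproto=tcp gluster1:/engine /mnt/engine
    # if specific clients are rejected, check the volume's reconfigured auth options:
    gluster volume info engine | grep -i nfs
]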
13:38 vimal joined #gluster
13:45 mojibake joined #gluster
13:50 LHinson joined #gluster
13:52 klaxa|work joined #gluster
13:59 justyns joined #gluster
14:00 justyns joined #gluster
14:00 justyns joined #gluster
14:01 justyns joined #gluster
14:05 rwheeler joined #gluster
14:11 hagarth joined #gluster
14:12 B21956 joined #gluster
14:16 sputnik13 joined #gluster
14:18 wushudoin| joined #gluster
14:18 B21956 joined #gluster
14:22 LHinson joined #gluster
14:25 xleo joined #gluster
14:33 diegows joined #gluster
14:37 gildub joined #gluster
14:42 sprachgenerator joined #gluster
14:42 kshlm joined #gluster
14:45 failshell joined #gluster
14:49 purpleidea kkeithley_: we are talking about storage things, fwiw, downstairs
14:54 _dist joined #gluster
14:55 bennyturns joined #gluster
14:56 hagarth joined #gluster
14:59 hchiramm_ joined #gluster
15:01 dmachi joined #gluster
15:02 kkeithley_ purpleidea: yeah, is there dial-in info?
15:03 purpleidea kkeithley_: not sure what it is, i am here in person, come down, or i can brief you after on what it was about if you want
15:03 dmachi I have 4 bricks that I would like to remove from a volume; the other bricks that are already there have plenty of free space to accommodate the data that resides on these bricks. Can I simply remove them, one at a time, and let self-heal fix things?
15:04 kkeithley_ I'm on portante's bluejeans, wfh today
15:16 nshaikh joined #gluster
15:19 jobewan joined #gluster
15:21 plarsen joined #gluster
15:22 kumar joined #gluster
15:23 aravindavk joined #gluster
15:24 nothau joined #gluster
15:42 * nothau has set away! (auto away after idling [15 min]) [Log:OFF] .gz.
15:42 semiosis dmachi: i think remove-brick is the tool to use for that.  you should try it out on a test volume to make sure it works as you want.
15:42 semiosis nothau: please disable the auto-away message :)
15:42 nothau sorry
15:43 nothau thought it was saved
15:43 semiosis hehe no prob
15:43 * nothau is not set to auto-away anymore.
15:43 dmachi semiosis: ok thanks
15:43 semiosis yw
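[Editor's note: "remove-brick ... start" itself migrates data off the departing bricks (a targeted rebalance); self-heal does not cover bricks removed from a plain distribute volume, which is presumably why semiosis points at remove-brick. A throwaway rehearsal volume in the spirit of his advice, with hypothetical hosts and paths:

    # a small 4-brick distribute volume to practice on ("force" permits test
    # bricks on the root partition)
    gluster volume create testvol host1:/bricks/t1 host2:/bricks/t2 \
        host1:/bricks/t3 host2:/bricks/t4 force
    gluster volume start testvol
    # ...mount it, write some files, then rehearse the removal:
    gluster volume remove-brick testvol host2:/bricks/t4 start
    gluster volume remove-brick testvol host2:/bricks/t4 status
    gluster volume remove-brick testvol host2:/bricks/t4 commit
]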
15:50 _dist joined #gluster
15:55 jmarley joined #gluster
16:02 nothau i think i've done that
16:02 nothau oops
16:11 dtrainor joined #gluster
16:13 jmarley joined #gluster
16:18 elico joined #gluster
16:29 LHinson1 joined #gluster
16:35 RameshN joined #gluster
16:45 sadbox_ joined #gluster
16:46 glusterbot New news from newglusterbugs: [Bug 1144108] Spurious failure on disperse tests (bad file size on brick) <https://bugzilla.redhat.com/show_bug.cgi?id=1144108>
16:48 jmarley joined #gluster
16:49 hagarth joined #gluster
16:51 fattaneh joined #gluster
17:00 kdhananjay joined #gluster
17:07 mojibake joined #gluster
17:10 fattaneh1 joined #gluster
17:21 tdasilva joined #gluster
17:21 zerick joined #gluster
17:29 RameshN joined #gluster
17:31 vu joined #gluster
17:43 vu joined #gluster
17:44 dtrainor joined #gluster
17:45 RicardoSSP joined #gluster
17:45 RicardoSSP joined #gluster
17:56 lalatenduM joined #gluster
17:59 nshaikh joined #gluster
18:04 Slashman joined #gluster
18:07 fattaneh1 left #gluster
18:16 alpha01 joined #gluster
18:16 chirino joined #gluster
18:29 LHinson joined #gluster
18:33 milka joined #gluster
18:33 milka hey, any advice on this error? http://pastie.org/9573174 glusterfs just stopped out of the blue and now won't start
18:33 glusterbot Title: #9573174 - Pastie (at pastie.org)
18:35 _Bryan_ joined #gluster
18:39 dtrainor joined #gluster
18:39 dtrainor joined #gluster
18:53 Pupeno joined #gluster
18:54 diegows joined #gluster
18:57 jiqiren joined #gluster
18:59 LHinson1 joined #gluster
19:01 LHinson1 left #gluster
19:04 Rafi_kc joined #gluster
19:12 gletessier joined #gluster
19:16 julim joined #gluster
19:23 andreask joined #gluster
19:23 dmachi1 joined #gluster
19:35 vu joined #gluster
19:42 Pupeno joined #gluster
20:08 elico joined #gluster
20:14 balacafalata joined #gluster
20:26 if-kenn_ joined #gluster
20:33 longshot902 joined #gluster
20:35 theron joined #gluster
20:55 longshot902 joined #gluster
21:12 Pupeno_ joined #gluster
21:15 frankS2 anyone tried gluster for web applications?
21:16 semiosis ya sure
21:17 glusterbot New news from newglusterbugs: [Bug 1131271] Lock replies use wrong source IP if client access server via 2 different virtual IPs [patch attached] <https://bugzilla.redhat.com/show_bug.cgi?id=1131271>
21:19 frankS2 semiosis: im trying to find some information about performance. i've seen some people say it was "pretty poor on php applications because it did not have a lock daemon", but as far as i can see it has a lock daemon, so im not sure if that statement was true
21:20 semiosis old php apps that do tons of require/include may perform poorly on glusterfs
21:20 frankS2 im going to do a rails app, i was hoping to
21:20 semiosis modern php apps that use autoloading (and standard optimizations like APC & a well ordered include_path) should perform fine
21:20 longshot902_ joined #gluster
21:20 frankS2 "How can I improve the performance of reading many small files?
21:20 frankS2 Use the NFS client. For reading many small files, i.e. PHP web serving, the NFS client will perform much better.
21:20 frankS2 Note that for a write-heavy load the native client will perform better."
21:20 semiosis rails is very different from php.  i'm not aware of any issues with rails apps on glusterfs
21:20 frankS2 gluster works with nfs client?
21:21 semiosis sure, although you lose high availability
21:21 frankS2 ah, thats what i want :p
21:21 frankS2 to have
21:21 Lee- I think it's better to use glusterfs for content and deploy your code directly to your app servers
21:21 semiosis +1
21:21 frankS2 content will be served from memory
21:21 frankS2 well 99% of it
21:22 frankS2 hopefully 100 =P
21:22 Lee- you dont use some form of backing storage? ;)
21:22 frankS2 yeah, but hopefully it will be most cache hits =P
21:24 Lee- in any case, my thoughts are glusterfs is best for dynamically generated data for which you need HA. App servers should not be your only store of your code and so rather than use gluster to serve your code, it's best to just deploy the code directly to the app servers. this is regardless of the technology
21:24 semiosis i agree
21:25 semiosis but in case you still want to run the code from glusterfs, that should work
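[Editor's note: the two client options weighed above, as /etc/fstab sketches with hypothetical server and volume names. The native FUSE client fetches all brick addresses at mount time, so it keeps working if one server dies; the NFS client gains kernel-side caching, which is what helps small-file PHP reads, but it points at a single server, so HA then needs a floating IP, e.g. via CTDB:

    # native FUSE client: HA built in, better for write-heavy loads
    gluster1:/webvol  /var/www/shared  glusterfs  defaults,_netdev  0 0

    # gluster NFS client: faster small-file reads, single point of failure
    gluster1:/webvol  /var/www/shared  nfs  vers=3,mountproto=tcp,_netdev  0 0
]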
21:26 T0aD- joined #gluster
21:26 ccha3 joined #gluster
21:27 toordog-work joined #gluster
21:28 partner joined #gluster
21:28 hflai joined #gluster
21:28 Ramereth joined #gluster
21:29 huleboer joined #gluster
21:32 Ramereth joined #gluster
21:35 frankS2 thanks for answers guys :)
21:38 Pupeno joined #gluster
21:47 nage joined #gluster
22:05 Pupeno joined #gluster
22:11 dtrainor joined #gluster
22:13 dtrainor joined #gluster
22:15 Pupeno_ joined #gluster
22:22 XpineX_ joined #gluster
22:38 dtrainor joined #gluster
23:09 bennyturns joined #gluster
23:51 elico joined #gluster
23:57 gildub joined #gluster
