
IRC log for #gluster, 2014-11-24


All times shown according to UTC.

Time Nick Message
01:21 atrius joined #gluster
01:22 topshare joined #gluster
01:32 calisto joined #gluster
01:40 DV joined #gluster
01:57 DV joined #gluster
01:59 harish joined #gluster
02:07 tdasilva joined #gluster
02:20 DV joined #gluster
02:33 baojg joined #gluster
02:33 bala joined #gluster
03:01 gildub joined #gluster
03:06 kshlm joined #gluster
03:08 gildub joined #gluster
03:23 bala joined #gluster
03:25 bharata-rao joined #gluster
03:40 baojg joined #gluster
03:40 aravindavk joined #gluster
03:44 buhman joined #gluster
03:48 kanagaraj joined #gluster
03:55 itisravi joined #gluster
04:02 aravindavk joined #gluster
04:06 shubhendu joined #gluster
04:13 saurabh joined #gluster
04:24 baojg joined #gluster
04:28 RameshN joined #gluster
04:29 kdhananjay joined #gluster
04:37 rafi joined #gluster
04:37 Rafi_kc joined #gluster
04:37 ndarshan joined #gluster
04:39 anoopcs joined #gluster
04:40 rafi joined #gluster
04:41 hagarth joined #gluster
04:43 baojg joined #gluster
04:45 baojg joined #gluster
04:59 soumya_ joined #gluster
05:01 nbalachandran joined #gluster
05:03 spandit joined #gluster
05:04 atalur joined #gluster
05:06 jiffin joined #gluster
05:23 baojg joined #gluster
05:27 meghanam joined #gluster
05:31 sahina joined #gluster
05:39 elico joined #gluster
05:43 atrius joined #gluster
05:44 jiffin joined #gluster
05:49 bharata-rao joined #gluster
05:52 rjoseph joined #gluster
05:53 ramteid joined #gluster
05:54 ppai joined #gluster
05:55 overclk joined #gluster
05:59 lalatenduM joined #gluster
06:06 baojg joined #gluster
06:06 bala joined #gluster
06:10 free_amitc_ joined #gluster
06:16 Anuradha joined #gluster
06:17 shubhendu joined #gluster
06:19 ndarshan joined #gluster
06:20 sahina joined #gluster
06:22 baojg joined #gluster
06:24 ricky-ticky joined #gluster
06:30 dusmant joined #gluster
06:35 SOLDIERz joined #gluster
06:38 deepakcs joined #gluster
06:43 nshaikh joined #gluster
06:46 ndarshan joined #gluster
06:50 ppai joined #gluster
06:50 jiffin joined #gluster
07:02 sahina joined #gluster
07:02 shubhendu joined #gluster
07:03 bharata-rao joined #gluster
07:04 dusmant joined #gluster
07:06 Fen2 joined #gluster
07:07 anil joined #gluster
07:07 itisravi joined #gluster
07:10 kumar joined #gluster
07:15 ctria joined #gluster
07:17 ricky-ticky1 joined #gluster
07:22 hchiramm_ joined #gluster
07:31 dusmant joined #gluster
07:38 LebedevRI joined #gluster
07:43 Humble joined #gluster
07:57 mbukatov joined #gluster
08:03 baojg joined #gluster
08:06 ppai joined #gluster
08:08 [Enrico] joined #gluster
08:12 gildub joined #gluster
08:13 atrius joined #gluster
08:18 DV joined #gluster
08:20 deniszh joined #gluster
08:22 bjornar joined #gluster
08:26 vimal joined #gluster
08:27 baojg joined #gluster
08:28 Philambdo joined #gluster
08:32 bala joined #gluster
08:35 raghu` joined #gluster
08:35 kshlm joined #gluster
08:35 T0aD joined #gluster
08:45 kovshenin joined #gluster
08:48 fsimonce joined #gluster
08:53 liquidat joined #gluster
09:01 DV joined #gluster
09:01 ppai joined #gluster
09:10 Slashman joined #gluster
09:26 bala joined #gluster
09:28 lalatenduM joined #gluster
09:35 kshlm joined #gluster
09:46 johndescs_ joined #gluster
09:47 bala joined #gluster
09:53 deepakcs joined #gluster
09:54 tryggvil joined #gluster
09:56 Fen2 joined #gluster
10:01 ctria joined #gluster
10:05 anil joined #gluster
10:08 lalatenduM_ joined #gluster
10:12 glusterbot News from newglusterbugs: [Bug 1152956] duplicate entries of files listed in the mount point after renames <https://bugzilla.redhat.com/show_bug.cgi?id=1152956>
10:31 sahina joined #gluster
10:33 kaushal_ joined #gluster
10:45 baojg joined #gluster
10:47 harish joined #gluster
10:48 dusmant joined #gluster
10:48 shubhendu joined #gluster
10:48 ndarshan joined #gluster
10:49 ppai joined #gluster
10:52 bala joined #gluster
10:58 ctria joined #gluster
11:08 atrius joined #gluster
11:08 ppai joined #gluster
11:11 rjoseph joined #gluster
11:44 glusterbot News from resolvedglusterbugs: [Bug 1140844] Read/write speed on a dispersed volume is poor <https://bugzilla.redhat.com/show_bug.cgi?id=1140844>
11:47 drankis joined #gluster
11:50 meghanam_ joined #gluster
11:50 meghanam__ joined #gluster
11:50 shubhendu joined #gluster
11:52 ndarshan joined #gluster
11:52 ghenry joined #gluster
11:52 bala joined #gluster
11:59 tdasilva joined #gluster
12:13 azar joined #gluster
12:18 feeshon joined #gluster
12:25 baojg joined #gluster
12:25 azar I have made a disperse volume with '1' redundancy, so I have 3 bricks in it. I created a file in the volume and echoed a string into it. When I open that file, it shows some garbage. Can anyone explain why? I have installed glusterfs 3.6.1.
12:27 baojg joined #gluster
12:28 DV joined #gluster
12:42 dusmant joined #gluster
12:43 hybrid512 Hi
12:43 glusterbot hybrid512: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
12:43 hybrid512 I get this error : "volume remove-brick commit force: failed: One or more nodes do not support the required op-version. Cluster op-version must atleast be 30600."
12:43 glusterbot hybrid512: set the desired op-version using ''gluster volume set all cluster.op-version $desired_op_version''.
12:44 hybrid512 thx, smart bot you are :)
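The bot's advice maps directly onto the error hybrid512 pasted, which names op-version 30600. A minimal sketch (the glusterd.info path assumes a default install):

    # Raise the cluster op-version to the value named in the error message.
    gluster volume set all cluster.op-version 30600

    # Verify: glusterd records the running op-version in its info file.
    grep operating-version /var/lib/glusterd/glusterd.info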
12:47 baojg joined #gluster
12:48 baojg_ joined #gluster
12:49 nbalachandran joined #gluster
12:50 DV joined #gluster
12:53 soumya joined #gluster
12:55 ramon_dl joined #gluster
12:57 ramon_dl azar: you found garbage reading file back from fuse mount?
12:59 baojg joined #gluster
12:59 Fen1 joined #gluster
13:00 topshare joined #gluster
13:01 baojg_ joined #gluster
13:01 topshare joined #gluster
13:01 nbalachandran joined #gluster
13:05 ctria joined #gluster
13:12 T0aD joined #gluster
13:18 Norky joined #gluster
13:20 azar ramon_dl: yes. Do you know why???
13:23 ramon_dl azar: No, I don't know. That isn't expected behavior. Could you please explain your setup a little more?
13:25 VeggieMeat_ joined #gluster
13:25 ramon_dl azar: disperse breaks the original file into two parts (for a 3:1 config) and from those two parts computes three, one for each brick. If you look inside those parts you will find garbage, but reading the file from the volume mount (fuse or nfs) you should see the original content.
13:30 azar ok, I have created a volume with 3 bricks with the disperse feature and 1 redundancy. I mounted the volume with "mount -t glusterfs ip:/vol-name /mnt". I ran "cd /mnt" and then "echo hii >> /mnt/a.txt", then "vi /mnt/a.txt"; everything was fine. Then I ran "cat how are u >> /mnt/a.txt" and "vi /mnt/a.txt"; I could still see the "hii" text, but some garbage was shown beside it
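A minimal sketch of the setup azar describes, with placeholder host and volume names, and using echo for the second append in place of the "cat how are u" command quoted above:

    # 3 bricks with 1 redundancy: the volume survives one failed brick.
    gluster volume create testvol disperse 3 redundancy 1 \
        host1:/bricks/testvol/brick host2:/bricks/testvol/brick host3:/bricks/testvol/brick
    gluster volume start testvol

    # Mount, append twice, read back.
    mount -t glusterfs host1:/testvol /mnt
    echo hii >> /mnt/a.txt
    echo "how are u" >> /mnt/a.txt
    cat /mnt/a.txt   # expected both lines intact; azar reports garbage after the second append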
13:31 lpabon joined #gluster
13:32 ramon_dl azar: interesting. I'll try to reproduce it; maybe it's a bug...
13:32 azar ramon_dl: Are you gluster developer??
13:33 ramon_dl No, sorry.
13:33 julim joined #gluster
13:34 ramon_dl azar: well, I'm working at DataLab, which is developing the disperse translator
13:34 sahina joined #gluster
13:35 ramon_dl azar: I'll pass the issue to xavih who's in charge of disperse translator.
13:35 azar ramon_dl: Wow, so nice. I am trying to make some changes in disperse translator.
13:36 atrius joined #gluster
13:37 ramon_dl azar: let us know if we can help
13:38 ramon_dl azar: I'm also curious about these changes...
13:39 ramon_dl azar: I'll be out for a while...
13:40 hagarth joined #gluster
13:40 bene joined #gluster
13:43 glusterbot News from newglusterbugs: [Bug 1158067] Gluster volume monitor hangs glusterfsd process <https://bugzilla.redhat.com/show_bug.cgi?id=1158067>
13:46 diegows joined #gluster
13:47 shubhendu joined #gluster
13:47 bennyturns joined #gluster
13:48 kkeithley joined #gluster
13:50 bala joined #gluster
13:52 nocturn00 joined #gluster
13:52 nocturn00 Hi, how can I make glusterd only listen on one of my two ips?
13:55 theron joined #gluster
14:05 kshlm joined #gluster
14:05 edward1 joined #gluster
14:06 aravindavk joined #gluster
14:06 morse joined #gluster
14:07 Fen1 joined #gluster
14:07 xavih azar: there's bug 1161885 that covers this problem. Currently there's a patch but it's still under review (http://review.gluster.org/9080/). Most probably it will be included in 3.6.2.
14:07 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1161885 urgent, unspecified, ---, xhernandez, POST , Possible file corruption on dispersed volumes
14:08 Fen1 joined #gluster
14:09 Fen1 joined #gluster
14:10 azar xavih: Thanks alot.
14:10 xavih azar: yw
14:10 virusuy joined #gluster
14:13 dusmant joined #gluster
14:21 B21956 joined #gluster
14:25 rbennacer left #gluster
14:26 jskinner_ joined #gluster
14:27 jskinner_ left #gluster
14:30 ctria joined #gluster
14:30 theron joined #gluster
14:31 tdasilva joined #gluster
14:35 topshare Does anyone use zfs on linux?
14:35 topshare Please give me some suggestions.
14:36 topshare zfs + gluster
14:37 plarsen joined #gluster
14:46 Norky that's a meta question
14:47 Norky just state your problem/ask your question and if someone can help, they will
14:55 chirino joined #gluster
14:56 lalatenduM joined #gluster
14:59 sahina joined #gluster
15:00 theron joined #gluster
15:03 jmarley joined #gluster
15:05 bene joined #gluster
15:07 Norky nocturn00, this is not exactly what you were asking, but would firewalling gluster on the IP you don't want work?
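Besides firewalling, glusterd's own config file can pin the management daemon to one address. A sketch, assuming the transport.socket.bind-address option and a placeholder IP; glusterd must be restarted for it to take effect:

    # /etc/glusterfs/glusterd.vol
    volume management
        type mgmt/glusterd
        option working-directory /var/lib/glusterd
        option transport-type socket,rdma
        # Listen on one local IP instead of all interfaces.
        option transport.socket.bind-address 192.0.2.10
    end-volume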
15:09 ira joined #gluster
15:10 gothos1 joined #gluster
15:10 gothos1 Heya. Big fun.
15:11 gothos1 We had a nice problem with glusterfs 3.6.1 on centos 6/7 today, when our guys here had their LANG set to de_DE.UTF-8 they couldn't mount the glusterfs, but setting it to en_US fixed the problem.
15:12 gothos1 does anyone here know if there is a known bug? I haven't investigated further yet
15:15 jvandewege Anyone know what the following mean when trying to start a VM on oVirt using a gluster SD?  E [rpc-clnt.c:362:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x186)[0x7f0f46d79396] (--> /lib64/libgfrpc.so.0(saved_frames_unwind+0x1de)[0x7f0f49e30fce] (--> /lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7f0f49e310de] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x82)[0
15:15 glusterbot jvandewege: ('s karma is now -50
15:15 jvandewege x7f0f49e32a42] (--> /lib64/libgfrpc.so.0(rpc_clnt_notify+0x48)[0x7f0f49e331f8] ))))) 0-GlusterTest-client-0: forced unwinding frame type(GF-DUMP) op(DUMP(1)) called at 2014-11-24 15:09:13.055308 (xid=0x3)
15:15 glusterbot jvandewege: ('s karma is now -51
15:15 glusterbot jvandewege: ('s karma is now -52
15:15 glusterbot jvandewege: ('s karma is now -53
15:15 glusterbot jvandewege: ('s karma is now -54
15:17 gothos1 now that bot is really well implemented
15:18 marcoceppi joined #gluster
15:18 marcoceppi joined #gluster
15:24 atrius joined #gluster
15:26 calisto joined #gluster
15:29 bennyturns joined #gluster
15:41 plarsen joined #gluster
15:42 davemc joined #gluster
15:46 _Bryan_ joined #gluster
15:49 nshaikh joined #gluster
15:56 ildefonso joined #gluster
16:01 jmarley joined #gluster
16:05 haomaiwang joined #gluster
16:05 jmarley joined #gluster
16:06 NigeyS joined #gluster
16:07 bennyturns joined #gluster
16:07 lmickh joined #gluster
16:08 NigeyS hey :) only a quick Q .. is there a recommended structure for creating the bricks, as far as their location goes? i've seen people use /data/glusterfs/ etc ..
16:08 coredump joined #gluster
16:13 lflores joined #gluster
16:13 meghanam__ joined #gluster
16:13 meghanam_ joined #gluster
16:15 n-st joined #gluster
16:16 bala joined #gluster
16:23 failshell joined #gluster
16:23 soumya joined #gluster
16:31 Norky I use /bricks/VOLNAME/
16:31 Norky use what you like
16:34 lflores joined #gluster
16:37 NigeyS okies, i tried /data/glusterfs/Volname but i get a warning about it being on the root FS ?
16:37 elico joined #gluster
16:40 Norky well, yes
16:41 baojg joined #gluster
16:41 Norky A brick should not be part of the root filesystem.
16:42 rotbeard joined #gluster
16:42 Norky also, the brick directory should not be itself the root of a separate filesystem
16:42 theron joined #gluster
16:42 Norky e.g.
16:43 Norky mkfs.xfs /dev/vgbricks/foo
16:43 Norky mkdir /bricks/foo ; mount /dev/vgbricks/foo /bricks/foo
16:43 Norky mkdir /bricks/foo/brick
16:43 Norky gluster volume create foo hostname:/bricks/foo/brick
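Norky's sequence, gathered into a single sketch with the pieces he deliberately left out: an fstab entry so the brick survives a reboot, and the -i size flag that is questioned just below. Device, host, and volume names are placeholders:

    # Format the logical volume as XFS (whether -i size=512 is still needed is debated below).
    mkfs.xfs -i size=512 /dev/vgbricks/foo

    # Mount it persistently, and keep the brick in a subdirectory so gluster
    # won't silently write into the bare mountpoint if the mount goes missing.
    mkdir -p /bricks/foo
    echo '/dev/vgbricks/foo /bricks/foo xfs defaults 0 0' >> /etc/fstab
    mount /bricks/foo
    mkdir /bricks/foo/brick

    gluster volume create foo hostname:/bricks/foo/brick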
16:44 ildefonso I think you need the -i size thing for mkfs.xfs.
16:44 Norky I'm leaving out details superfluous to the question at hand
16:45 ildefonso yeah, but I have bet people who just copy/paste the commands given at irc :(
16:45 ildefonso s/bet/met/
16:45 glusterbot What ildefonso meant to say was: An error has occurred and has been logged. Please contact this bot's administrator for more information.
16:45 Norky oh god
16:45 ildefonso yeah :(
16:45 Norky this bot...
16:46 Norky glusterbot is senile
16:48 NigeyS right ok, so reading between the lines, a brick would be a totally separate partition ?
16:48 NigeyS this setup is being configured on AWS so my way around that would be a new partition dedicated to the brick..
16:49 semiosis NigeyS: i use separate EBS vols for each brick
16:49 semiosis without any partitioning or lvm.  just xfs formatted ebs vols
16:50 NigeyS ahhh now that sounds like an idea, i'll look into that, im still not happy running this on aws for websites meh..lol
16:54 hagarth joined #gluster
16:56 NigeyS Norky any particular reason for XFS as opposed to ext?
16:57 semiosis ildefonso: the -i 512 option for xfs isn't recommended anymore since someone did some tests & said it was unnecessary.  i dont have data to back that up, but then i never had data to support using that option either
16:58 semiosis NigeyS: glusterfs is used & tested more extensively with XFS, but it should work with ext4 as well
16:58 NigeyS i see, okies, just wondered if it was a performance thing
16:58 saurabh joined #gluster
16:59 semiosis NigeyS: on aws your performance bottleneck is almost certainly going to be network & EBS limits, not CPU/kernel
16:59 NigeyS yup, its the network im slightly concerned about
16:59 ctria joined #gluster
16:59 ildefonso semiosis, interesting; it would be worth trying to find some data about it and updating the documents.
16:59 semiosis ildefonso++
16:59 glusterbot semiosis: ildefonso's karma is now 1
17:00 semiosis if you run the tests please feel free to put the results on the gluster.org community wiki :)
17:00 Norky semiosis, what's EBS? The Amazon cloud storage thing?
17:00 ildefonso I actually have a freshly set up test bench, I guess I could use it for testing (I can wipe and recreate data at will there :D ).
17:01 semiosis @lucky aws ebs
17:01 glusterbot semiosis: http://aws.amazon.com/ebs/
17:01 Norky yes, then :)
17:02 PeterA joined #gluster
17:02 NigeyS semiosis do you have any specific options set on your volumes for using on EBS?
17:03 NigeyS cache size, refresh etc
17:05 semiosis defaults
17:06 virusuy joined #gluster
17:06 virusuy joined #gluster
17:07 NigeyS oh nice, i'll create some new volumes and see how it goes.
17:09 Norky NigeyS, you're using an AWS service then? Might be an idea to specify that
17:09 NigeyS yup, not out of choice mind.
17:09 Norky my normal assumption for people first trying glusterfs is they're experimenting with a couple of "white box" PCs or VMs in a self-hosted environment
17:10 Norky granted that's due to my own experiential bias
17:10 NigeyS ahh, well atm its all in a test setup on AWS, but yeah the production version will also be on AWS.. work's idea of "saving money" as opposed to the hardware we have atm.. :/
17:11 Norky hey, I'm not knocking it, just suggesting it'd be worth your mentioning that at the start :)
17:12 NigeyS :) it's going to perform terribly, but they do not listen; glusterfs simply isn't for running websites with a ton of small-file reads.
17:13 Norky ahh, no, that is a weak point for GlusterFS
17:13 Norky not PHP is it?
17:13 JoeJulian Nothing is.
17:13 NigeyS sure is
17:13 JoeJulian That's what front-end cache is for.
17:13 NigeyS they have their own php CMS system
17:13 Norky heh, best of British to you then ;)
17:14 JoeJulian If you're optimizing storage performace for a web site, you're doing it wrong.
17:14 NigeyS the CMS does have some caching of its own
17:15 jmarley joined #gluster
17:17 NigeyS JoeJulian about 100 websites, not just 1.
17:18 zerick joined #gluster
17:19 JoeJulian That doesn't change the philosophy. Cache as much as you can as close to the user as possible.
17:19 JoeJulian @php
17:19 glusterbot JoeJulian: (#1) php calls the stat() system call for every include. This triggers a self-heal check which makes most php software slow as they include hundreds of small files. See http://joejulian.name/blog/optimizing-web-performance-with-glusterfs/ for details., or (#2) It could also be worth mounting fuse with glusterfs --attribute-timeout=HIGH --entry-timeout=HIGH --negative-timeout=HIGH
17:19 glusterbot JoeJulian: --fopen-keep-cache
17:20 Norky remove "the" and "system call" from that factoid, it shoudl fit on one line then :)
17:20 NigeyS JoeJulian just found your blog, and reading your php post now :)
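The flags in glusterbot's second suggestion belong to the fuse client binary. A sketch with illustrative timeout values (seconds, chosen arbitrarily) and placeholder server and volume names:

    # Cache attributes and dentries aggressively so PHP's stat()-per-include
    # doesn't hit the network for every one of those hundreds of small files.
    glusterfs --volfile-server=server --volfile-id=webvol \
        --attribute-timeout=600 --entry-timeout=600 --negative-timeout=600 \
        --fopen-keep-cache /var/www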
17:21 semiosis as much as i love glusterfs, after running 100s of websites on it for a couple years i moved the web stuff off of glusterfs to local disk storage.  now we deploy sites from gitlab with some custom scripts
17:21 georgeh-LT2 joined #gluster
17:21 georgeh-LT2 joined #gluster
17:22 NigeyS semiosis there was talk of something similar but they decided against it, no idea why.
17:23 Norky I did wonder if giving php libgfapi would help the problem some
17:23 Norky but that'd be trying to fix something fundamentally broken-by-design
17:24 georgeh-LT2 joined #gluster
17:33 Norky cheerio all
17:36 lachy joined #gluster
17:41 baojg joined #gluster
17:42 morse joined #gluster
17:43 rotbeard joined #gluster
17:44 glusterbot News from newglusterbugs: [Bug 1167419] EC_MAX_NODES is defined incorrectly <https://bugzilla.redhat.com/show_bug.cgi?id=1167419>
17:44 lalatenduM joined #gluster
17:45 lflores2 joined #gluster
17:49 ramon_dl EC_MAX_NODES is defined correctly: we only allow the number of redundant nodes to be less than half of the total. That's by design. If you plan to use half or more of the nodes for redundancy, better to use AFR with replica 2 or 3.
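Illustrating ramon_dl's constraint with the arithmetic from his earlier explanation (a file is stored as N fragments, any N-R of which can rebuild it); worked numbers, not gluster output:

    N = 3 bricks, R = 1:  1 < 3/2  -> allowed; any 2 of the 3 bricks rebuild a file
    N = 4 bricks, R = 2:  2 = 4/2  -> disallowed; at that ratio AFR replica 2 is the better fit
    N = 6 bricks, R = 2:  2 < 6/2  -> allowed; usable capacity is 4/6 of raw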
17:52 coredump joined #gluster
17:52 davemc Tomorrow. Please join us. RT @gluster: New blog post: GlusterFS Future Features: BitRot detection http://bit.ly/14SjnSr
18:02 rotbeard joined #gluster
18:03 kovshenin joined #gluster
18:06 jobewan joined #gluster
18:37 diegows joined #gluster
18:49 JoeJulian @paste
18:49 glusterbot JoeJulian: For RPM based distros you can yum install fpaste, for debian and ubuntu it's pastebinit. Then you can easily pipe command output to [f] paste [binit] and it'll give you a URL.
18:51 JoeJulian @factoids change paste 1 s/debian and ubuntu/debian, ubuntu, and arch/
18:51 glusterbot JoeJulian: The operation succeeded.
18:59 tdasilva joined #gluster
19:02 Maitre Zuh?
19:12 atrius joined #gluster
19:15 longshot902 joined #gluster
19:17 georgeh-LT2 joined #gluster
19:23 eljrax joined #gluster
19:24 eljrax left #gluster
19:30 baojg joined #gluster
19:31 lmickh joined #gluster
19:32 jackdpeterson joined #gluster
19:33 jackdpeterson Is there a way to rename a gluster volume in 3.6?
19:35 semiosis not that i know of
19:42 lalatenduM jackdpeterson, I dont think so
19:43 jackdpeterson @semiosis @lalatenduM -- Thanks. I've just modified my puppet scripts to expect that then :-)
19:43 tdasilva joined #gluster
19:51 diegows joined #gluster
19:55 B21956 joined #gluster
19:56 georgeh-LT2 joined #gluster
19:57 longshot902 joined #gluster
20:01 longshot902_ joined #gluster
20:05 lflores joined #gluster
20:07 coredump joined #gluster
20:10 longshot902__ joined #gluster
20:10 atrius joined #gluster
20:11 feeshon joined #gluster
20:24 T0aD joined #gluster
20:26 longshot902_ joined #gluster
20:41 longshot902__ joined #gluster
20:45 coredump|br joined #gluster
20:51 longshot902_ joined #gluster
20:53 longshot902 joined #gluster
20:59 skippy how can I see what process / task has locks on a server?  Gluster tasks on one server are reporting that "Locking failed" on the replica server.
21:02 georgeh-LT2_ joined #gluster
21:04 skippy it seems like after upgrading to 3.6.1 I'm seeing these locking issues.
21:04 skippy I don't recall these at all with 3.5.2
21:05 skippy if I restart the glusterd service, the lock is cleared and I can perform whatever tasks I need. But eventually something will lock again and keep that lock.
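skippy's "Locking failed" is glusterd's cluster-wide transaction lock, which is why restarting glusterd clears it. For seeing which locks are held on the bricks themselves, a statedump sketch (volume name is a placeholder; the dump path assumes the default statedump location):

    # Ask every brick process of the volume to dump its state, locks included.
    gluster volume statedump myvol

    # On each server, the lock tables appear in the dump files.
    grep -A4 -E 'inodelk|entrylk' /var/run/gluster/*.dump.*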
21:30 gildub joined #gluster
21:34 chirino joined #gluster
21:36 Ube_ joined #gluster
21:40 rshott joined #gluster
21:50 tryggvil joined #gluster
22:07 georgeh-LT2_ joined #gluster
22:12 firemanxbr joined #gluster
22:13 badone joined #gluster
22:16 calum_ joined #gluster
22:19 sputnik13 joined #gluster
22:20 elico joined #gluster
22:29 msmith joined #gluster
22:32 T0aD joined #gluster
22:34 PeterA joined #gluster
22:38 ira joined #gluster
22:44 badone joined #gluster
22:47 feeshon joined #gluster
23:01 diegows joined #gluster
23:01 B21956 left #gluster
23:04 longshot902_ joined #gluster
23:04 baojg joined #gluster
23:16 DV joined #gluster
23:57 stomith joined #gluster
23:58 stomith hey all - quick question. I know gluster manages the file store, but with two clients, are you supposed to mount both filesystems off of one host, or does each client mount its own data store? or am I missing something?
