IRC log for #gluster, 2013-12-14


All times shown according to UTC.

Time Nick Message
00:24 mkzero joined #gluster
00:40 badone joined #gluster
01:03 thogue joined #gluster
01:20 FarbrorLeon joined #gluster
01:27 _pol joined #gluster
01:38 _pol joined #gluster
02:07 semiosis joined #gluster
02:07 semiosis joined #gluster
02:11 TrDS left #gluster
02:12 semiosis joined #gluster
02:23 dylan_ joined #gluster
02:46 _ilbot joined #gluster
02:46 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:48 dylan_ joined #gluster
03:33 badone joined #gluster
03:52 jag3773 joined #gluster
04:08 _pol joined #gluster
04:11 schrodinger_ joined #gluster
04:14 vpshastry joined #gluster
04:15 Shdwdrgn joined #gluster
04:23 TDJACR joined #gluster
04:27 tjikkun joined #gluster
04:27 tjikkun joined #gluster
04:42 bulde joined #gluster
04:43 mattf joined #gluster
04:45 Paul-C joined #gluster
04:45 Paul-C left #gluster
04:54 Cenbe joined #gluster
05:02 yinyin joined #gluster
05:15 vpshastry joined #gluster
05:20 vpshastry joined #gluster
05:50 zeittunnel joined #gluster
05:59 purpleidea it seems that if i create a feature branch with more than one commit, they all get the same change-id. The problem is then that gerrit doesn't accept this, because it wants either multiple different change-ids or one big commit. what's the recommended way of uploading a feature branch of related commits?
06:03 neofob left #gluster
06:19 pithagorians joined #gluster
06:34 jag3773 joined #gluster
06:53 badone_ joined #gluster
07:17 vpshastry left #gluster
07:18 KORG|2 joined #gluster
07:43 satheesh1 joined #gluster
07:47 brosner joined #gluster
08:16 brosner joined #gluster
08:16 askb joined #gluster
08:17 askb joined #gluster
08:17 askb joined #gluster
08:18 askb joined #gluster
08:19 askb joined #gluster
08:21 jiqiren joined #gluster
08:22 askb joined #gluster
08:23 askb joined #gluster
08:23 askb joined #gluster
08:24 askb joined #gluster
08:26 askb joined #gluster
08:26 askb joined #gluster
08:27 askb joined #gluster
08:28 askb joined #gluster
08:29 askb joined #gluster
08:29 askb joined #gluster
08:30 askb joined #gluster
08:31 askb joined #gluster
08:31 askb joined #gluster
08:32 askb joined #gluster
08:34 askb joined #gluster
08:40 askb joined #gluster
08:49 badone_ joined #gluster
09:14 pithagorians joined #gluster
09:46 psyl0n joined #gluster
09:59 RedShift joined #gluster
10:17 askb joined #gluster
10:18 TrDS joined #gluster
10:24 glusterbot New news from resolvedglusterbugs: [Bug 846240] [FEAT] quick-read should use anonymous fd framework <https://bugzilla.redhat.com/show_bug.cgi?id=846240>
11:21 clag_ joined #gluster
11:21 tomased joined #gluster
11:26 KORG joined #gluster
12:27 diegows joined #gluster
12:42 mohankumar joined #gluster
12:44 dylan_ joined #gluster
12:46 meghanam joined #gluster
12:46 meghanam_ joined #gluster
13:02 TDJACR joined #gluster
13:06 mattf joined #gluster
13:07 mattf joined #gluster
13:08 mattf joined #gluster
13:09 _pol joined #gluster
13:34 ndevos purpleidea: before executing ./rfc.sh for the change for the other branch, use 'git commit --amend' to remove the Change-Id line; a new one will be generated automatically
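A minimal sketch of the workflow ndevos describes (the branch name is illustrative, and the editor step assumes the standard Gerrit commit-msg hook is installed):

    git checkout my-feature-branch
    git commit --amend    # delete the "Change-Id:" line in the editor; the commit-msg
                          # hook adds a fresh one when the message is saved
    ./rfc.sh              # glusterfs submit script: pushes the commit(s) to Gerrit for review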
13:51 sudhakar joined #gluster
13:53 sudhakar i am running an 8-node cluster and the root volumes ran out of space.. so i resized the EBS volumes and started the glusterd service
13:53 sudhakar while starting, the service fails on 3 nodes with the below error
13:53 sudhakar http://pastie.org/8552053
13:53 glusterbot Title: #8552053 - Pastie (at pastie.org)
13:53 sudhakar can someone help me?
13:53 bennyturns joined #gluster
13:53 sudhakar semiosis?
13:55 sudhakar all the nodes have the same configs... not sure what's causing this issue
14:12 vpshastry joined #gluster
14:12 vpshastry left #gluster
14:24 neofob joined #gluster
14:25 hagarth sudhakar: check if transport-type in glusterd.vol reads rdma only instead of socket,rdma
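A quick way to check what hagarth suggests (the path is the usual one for packaged installs; the output lines are illustrative):

    grep transport-type /etc/glusterfs/glusterd.vol
    #   option transport-type socket,rdma   <- expected
    #   option transport-type rdma          <- per the hint above, rdma-only can keep
    #                                          glusterd from starting when RDMA isn't usable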
14:27 purpleidea ndevos: so this works in getting the patches submitted, but in gerrit, it doesn't see them as "logically" being part of the same group... is there some way for this to happen?
14:28 ndevos purpleidea: I dont think they are part of one group, because they are sent to different branches
14:28 purpleidea ndevos: what do you mean sorry?
14:29 ndevos purpleidea: only patches that depend on each other are part of a series - I had the impression you filed patches for different branches?
14:30 purpleidea ndevos: oh, no sorry... let's say i make a feature branch, and in that branch i commit three times (to keep a logical separation) ... i then want to push that feature branch. either it forces me to squash the commits into one (which undoes the nice commit separation) or i have to have three different change id's, which then breaks it apart logically in gerrit... so what gives?
14:31 vpshastry joined #gluster
14:31 sudhakar hagarth
14:31 sudhakar hagarth - all the nodes have socket,rdma
14:32 sudhakar only three nodes are failing with this error
14:33 ndevos purpleidea: ah, right, just ./rfc.sh in the branch and it'll create 3 review requests - regression tests should be run on all, or can be run on only the last commit
14:33 purpleidea ndevos: okay, so it did. but unfortunately it looks like three separate things in gerrit. what's the point of bothering to separate it into three commits... think patch sets
14:34 ndevos purpleidea: when you see the review request of the most recent commit in gerrit, you can see that it has a 'parent'
14:34 ndevos purpleidea: can you pass a url or change-id so I can have a look?
14:34 purpleidea ndevos: so if someone has a major feature branch, gerrit will split it up into 100 different webpages ? :P
14:35 ndevos purpleidea: yes, and each patch needs a review :)
14:35 purpleidea ndevos: that's fair. so how come this: http://review.gluster.org/#/c/6000/ has like multiple "patch sets" ?
14:35 glusterbot Title: Gerrit Code Review (at review.gluster.org)
14:36 ndevos purpleidea: a "patch set" is a revision of the patch; if you need to make a change, keep the change-id but squash the additional changes into it, and run ./rfc.sh again
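A minimal sketch of pushing a new patch set the way ndevos describes (assumes a single-commit change that needs a revision after review comments):

    # edit the files in response to the review, then:
    git add -u
    git commit --amend --no-edit   # keep the existing Change-Id line untouched
    ./rfc.sh                       # Gerrit matches the Change-Id and records this as the
                                   # next patch set of the same review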
14:37 purpleidea ndevos: ah, i see. thanks. is there some way in gerrit to know which X patches are all part of feature branch foo?
14:39 ndevos purpleidea: I think you need to click on 'topic' on the left in the gerrit webui for that
14:41 purpleidea ndevos: ah. topic is 'rfc' and there are lots of unrelated things there... is that what comes from the Bug ID entry that i skipped?
14:41 ndevos purpleidea: but well, if there is no Bug attached to the patch, the topic is set to 'rfc' and it'll be more difficult
14:41 ndevos yes, that
14:42 purpleidea ndevos: cool. okay, so to keep things together i'll make up a fake bug id!
14:42 purpleidea (sorry, i'm new to gerrit, so thanks for the patience)
14:42 ndevos purpleidea: it's better to file a bug for the change you want to make
14:42 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
14:42 purpleidea ndevos: seems like unnecessary overhead :P
14:43 purpleidea for small patches that is...
14:43 ndevos purpleidea: yeah, it feels like that... sometimes you can re-use an existing (still open) bug with a general description
14:44 purpleidea ndevos: cool, well i guess i'll hack some more and push more patches.
14:45 ndevos purpleidea: you could file a bug with subject 'include and improve puppet-gluster documentation' and keep that open until all possible patches are accepted ;)
14:45 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
14:45 purpleidea once my patches have been submitted and they build successfully, do i have to do anything else, or should someone probably see this in gerrit?
14:46 ndevos I hope the devs would see it, and the people that watch the documentation can have notifications set (email on patch submission)
14:47 purpleidea ndevos: cool. now the only question remains: why are we hacking on the weekend ;)
14:47 ndevos but, personally I poke devs to review my patches if I know who would be working on that
14:48 ndevos purpleidea: I got asked to have a look at an interesting nfs issue, wireshark + linux-debugging are missing some bits and I'm extending that atm :)
14:48 purpleidea ndevos: there is no such thing as an interesting nfs issue :P
14:48 purpleidea only strange behaviour
14:49 ndevos purpleidea: haha, well, I'm a support engineer and mainly work with nfs and gluster (well, Red Hat Storage), you'd be surprised how people break filesystems
14:50 purpleidea ndevos: remember, if they break it, they get to keep both pieces!
14:50 ndevos and some problems are really interesting :) in this case we're blaming gluster/nfs for a stale filehandle, but debugging tools dont show it
14:50 ndevos so, I'm fixing the debugging tools now
14:51 ndevos purpleidea: if they break community gluster yes, but if they break RHS we'll have to fix it...
14:52 ndevos well, the gluster team would need to fis the community gluster bits... but they dont get the enterprise workloads that rhs seems to see
14:52 purpleidea hm! cool! actually, i head HA friendly pNFS support will come to glusterfs... that's a cool feature for the nfs burdened ones
14:52 ndevos s/fis/fix/
14:52 glusterbot What ndevos meant to say was: well, the gluster team would need to fix the community gluster bits... but they dont get the enterprise workloads that rhs seems to see
14:52 purpleidea s/head/heard/
14:52 glusterbot What purpleidea meant to say was: hm! cool! actually, i heard HA friendly pNFS support will come to glusterfs... that's a cool feature for the nfs burdened ones
14:53 ndevos yes, I think there are guys from the dev team working on integrating with ganesha-nfs, there was a blog post about it a couple of weeks ago
14:59 purpleidea ndevos: okay! off to get coffee. thanks later!
14:59 ndevos cya purpleidea!
15:10 _pol joined #gluster
15:15 sudhakar issue starting glusterd on one of the gluster nodes.. any help is appreciated
15:15 sudhakar http://pastie.org/8552152
15:15 glusterbot Title: #8552152 - Pastie (at pastie.org)
15:18 RedShift joined #gluster
15:23 pdrakeweb joined #gluster
15:23 ndevos sudhakar: you should check if /var/lib/glusterd/peers/ contains valid files, I've seen such errors when there was an empty file in that dir
15:24 sudhakar hi ndevos - i noticed it in some blogs.. and just copied the peers/* files from a working node
15:24 sudhakar and still having the same problem
15:25 sudhakar rebooting the instance and will give a try
15:25 ndevos sudhakar: those are normal text files, you can check their contents - filename should be the uuid for the storage server
15:26 ndevos sudhakar: also, any .tmp files should not be there
15:26 sudhakar ok..
15:26 sudhakar http://pastie.org/8552179
15:26 glusterbot Title: #8552179 - Pastie (at pastie.org)
15:26 sudhakar let me check again
15:27 sudhakar i don't see any .tmp file in peers folder
15:29 ndevos sudhakar: oh, and the /peers/ directory should not contain a file for the storage server itself, check the uuid in /var/lib/glusterd/glusterd.info
15:30 TrDS in a replicated setup, should every brick contain every directory (even when files are stored on other bricks)?
15:30 TrDS err sorry... i mean in a distributed setup
15:30 ndevos TrDS: yes, directories should be created on all bricks
15:30 TrDS ndevos: thx
15:30 sudhakar ndevos - ack... checking
15:32 sudhakar ndevos - the glusterd.info file is empty
15:32 sudhakar -rw------- 1 root root    0 Dec 14 13:40 glusterd.info
15:33 sudhakar i checked the other working nodes and it has the uuid & operating-version
15:33 ndevos sudhakar: that could well be the cause
15:33 sudhakar ok..
15:33 sudhakar any way i can fix it?
15:33 ndevos sudhakar: if you check the /peers/ files on the other storage servers, you can find the uuid for the empty glusterd.info
15:33 sudhakar ok..
15:33 sudhakar let me check
15:34 ndevos after that, create the file based on the contents of the others, but with correct hostname/ip + uuid and start glusterd
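An illustrative layout of the files ndevos is referring to (the UUIDs, hostname, and exact field values are made up for the example):

    cat /var/lib/glusterd/glusterd.info
    #   UUID=6a1f...                    <- this server's own uuid
    #   operating-version=2
    ls /var/lib/glusterd/peers/
    #   one file per *other* peer, named by that peer's uuid; each contains e.g.
    #   uuid=9c0e...
    #   state=3
    #   hostname1=gluster02.example.com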
15:34 sudhakar ok..
15:38 sudhakar ndevos - no luck
15:39 sudhakar still having the same issue
15:39 sudhakar updated glusterd.info ..
15:41 emwe_ joined #gluster
15:41 emwe_ hi all
15:41 ndevos sudhakar: hmm, no further ideas, but I think checking all the /peer/ files and glusterd.info should get you somewhere
15:42 sudhakar ok.. thanks ndevos
15:42 sudhakar will check again
15:42 ndevos sudhakar: you can also start 'glusterd --log-level=DEBUG' to see if you get more info
15:42 sudhakar ok.. sure
15:46 emwe_ I was wondering if someone could give me a hint to get a SSL-secured gluster setup working?
15:47 vpshastry left #gluster
16:21 sac`away joined #gluster
16:38 vpshastry joined #gluster
16:53 psyl0n joined #gluster
16:53 psyl0n joined #gluster
16:55 ozux joined #gluster
17:10 _pol joined #gluster
17:13 sac`away` joined #gluster
17:14 vpshastry left #gluster
17:15 Cenbe_ joined #gluster
17:19 haritsu joined #gluster
17:24 sac`away joined #gluster
17:39 _pol joined #gluster
17:39 _pol joined #gluster
17:46 mohankumar joined #gluster
17:53 gamayun joined #gluster
18:07 _ilbot joined #gluster
18:07 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
18:08 semiosis joined #gluster
18:22 sudhakar wondering if someone could give me the right performance improvement options for glusterFS...
18:22 sudhakar we have a large number of folders & writes... both read & write operations are taking more than 2 mins..
18:23 sudhakar example:- "ls -l" is taking 2+ mins on the client side..
18:23 sudhakar and that folder has 140 sub directories to list...
18:25 psyl0n joined #gluster
18:25 psyl0n joined #gluster
18:47 social sudhakar: readdir optimize; any ls or metadata operation is by definition expensive on a network filesystem
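The option social is pointing at can be set per volume; a hedged example (the volume name is illustrative, and whether it actually helps depends on the workload):

    gluster volume set myvol cluster.readdir-optimize on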
18:50 sudhakar the glusterFS servers and the clients are on the same AWS VPC... hope it's a 10 GbE network
18:50 sudhakar https://s3.amazonaws.com/aws001/guided_trek/Performance_in_a_Gluster_Systemv6F.pdf
18:51 verdurin_ joined #gluster
18:51 sudhakar here the graph shows a linear rise in the read & write throughputs
18:51 sudhakar when they increase the nodes
18:51 sudhakar i have an 8-node cluster and still my write throughput is 50-60 MB/s
18:52 vpshastry joined #gluster
18:52 sudhakar here is the simple dd command i am using for the benchmark
18:52 sudhakar dd if=/dev/zero of=testfile bs=1M count=1000
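Note that without a sync flag this dd run largely measures the client-side page cache; a variant like the following, which forces the data out before dd reports, usually gives a more honest throughput number:

    dd if=/dev/zero of=testfile bs=1M count=1000 conv=fdatasync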
19:03 vpshastry joined #gluster
19:04 vpshastry left #gluster
19:10 lyasota joined #gluster
19:11 FarbrorLeon joined #gluster
19:30 pureflex joined #gluster
19:30 Nuxr0 joined #gluster
20:29 mattapp__ joined #gluster
20:44 davidbierce joined #gluster
21:22 FarbrorLeon joined #gluster
21:35 rotbeard joined #gluster
21:38 FarbrorLeon joined #gluster
21:44 leblaaanc joined #gluster
22:01 FarbrorLeon joined #gluster
22:08 TrDS left #gluster
22:16 mattappe_ joined #gluster
22:17 leblaaanc joined #gluster
22:27 pdrakeweb joined #gluster
22:27 mattapp__ joined #gluster
22:33 mattappe_ joined #gluster
22:42 badone joined #gluster
22:44 FarbrorLeon joined #gluster
23:49 badone joined #gluster
23:50 mattapp__ joined #gluster
