
IRC log for #gluster, 2014-03-17


All times are shown in UTC.

Time Nick Message
00:05 DV joined #gluster
00:06 robo joined #gluster
00:20 sroy joined #gluster
00:30 pdrakewe_ joined #gluster
00:40 DV joined #gluster
00:55 DV joined #gluster
01:01 gdubreui joined #gluster
01:16 pdrakeweb joined #gluster
01:30 cjanbanan joined #gluster
01:37 Psi-Jack_ joined #gluster
01:49 tokik joined #gluster
02:18 haomaiwang joined #gluster
02:31 haomaiw__ joined #gluster
02:58 harish joined #gluster
03:01 nightwalk joined #gluster
03:10 lpabon joined #gluster
03:18 vpshastry joined #gluster
03:30 kanagaraj joined #gluster
04:02 mohankumar joined #gluster
04:03 shubhendu joined #gluster
04:07 spandit joined #gluster
04:10 glusterbot New news from newglusterbugs: [Bug 1066778] Make AFR changelog attributes persistent and independent of brick position <https://bugzilla.redhat.com/show_bug.cgi?id=1066778>
04:11 itisravi joined #gluster
04:12 hagarth joined #gluster
04:13 ndarshan joined #gluster
04:20 mohankumar joined #gluster
04:29 ppai joined #gluster
04:36 sks joined #gluster
04:40 mohankumar joined #gluster
04:42 kdhananjay joined #gluster
04:51 prasanth_ joined #gluster
04:53 saurabh joined #gluster
04:53 cyber_si joined #gluster
04:58 RameshN joined #gluster
04:58 ndarshan joined #gluster
05:00 lalatenduM joined #gluster
05:00 FarbrorLeon joined #gluster
05:01 deepakcs joined #gluster
05:13 bala joined #gluster
05:18 prasanth_ joined #gluster
05:25 bala joined #gluster
05:26 ndarshan joined #gluster
05:32 vpshastry1 joined #gluster
05:36 vpshastry2 joined #gluster
05:42 cyber_si left #gluster
05:42 cyber_si joined #gluster
05:49 cyber_si joined #gluster
05:53 Frankl joined #gluster
05:57 prasanth_ joined #gluster
06:04 rahulcs joined #gluster
06:04 rahulcs joined #gluster
06:23 vimal joined #gluster
06:24 Philambdo joined #gluster
06:24 ngoswami joined #gluster
06:37 FarbrorLeon joined #gluster
06:39 23LAAVAUX joined #gluster
06:39 64MAACNFS joined #gluster
06:40 cjanbanan joined #gluster
06:43 prasanthp joined #gluster
06:44 JonathanD joined #gluster
06:51 latha joined #gluster
06:54 sks joined #gluster
06:56 Bplusplus joined #gluster
07:00 YazzY joined #gluster
07:01 vpshastry1 joined #gluster
07:05 raghu joined #gluster
07:11 benjamin_____ joined #gluster
07:24 jtux joined #gluster
07:33 Mattlantis joined #gluster
07:58 ricky-ticky1 joined #gluster
08:06 ctria joined #gluster
08:14 tjikkun_work joined #gluster
08:14 keytab joined #gluster
08:17 slayer192 joined #gluster
08:20 Elico1 joined #gluster
08:22 social joined #gluster
08:22 meghanam_ joined #gluster
08:23 RameshN_ joined #gluster
08:23 cjanbanan joined #gluster
08:23 zorgan_ joined #gluster
08:24 badone joined #gluster
08:27 andreask joined #gluster
08:28 duerF^ joined #gluster
08:28 junaid joined #gluster
08:30 junaid joined #gluster
08:35 franc joined #gluster
08:36 Bplusplus joined #gluster
08:41 sks joined #gluster
08:46 fsimonce joined #gluster
08:47 X3NQ joined #gluster
08:47 dusmant joined #gluster
08:50 Norky joined #gluster
08:54 lalatenduM joined #gluster
08:58 vkoppad joined #gluster
08:59 meghanam_ joined #gluster
08:59 meghanam__ joined #gluster
09:07 lalatenduM joined #gluster
09:07 liquidat joined #gluster
09:21 spandit joined #gluster
09:23 lalatenduM joined #gluster
09:25 Slash joined #gluster
09:26 ProT-0-TypE joined #gluster
09:30 badone joined #gluster
09:32 badone joined #gluster
09:34 badone joined #gluster
09:47 Pavid7 joined #gluster
09:51 spandit joined #gluster
09:57 raptorman joined #gluster
10:05 slayer192 joined #gluster
10:12 johnmwilliams__ joined #gluster
10:13 dusmant joined #gluster
10:16 fyxim_ joined #gluster
10:18 slayer192 joined #gluster
10:19 jbustos joined #gluster
10:20 shubhendu joined #gluster
10:20 ndarshan joined #gluster
10:23 bala joined #gluster
10:26 kanagaraj joined #gluster
10:27 RameshN_ joined #gluster
10:36 Elico joined #gluster
10:56 davinder joined #gluster
11:02 slayer192 joined #gluster
11:02 aravindavk joined #gluster
11:05 slayer192 joined #gluster
11:08 diegows joined #gluster
11:20 meghanam__ joined #gluster
11:20 meghanam_ joined #gluster
11:28 bala joined #gluster
11:40 Pavid7 joined #gluster
11:43 shubhendu joined #gluster
11:44 kanagaraj joined #gluster
11:44 ndarshan joined #gluster
11:50 dusmant joined #gluster
11:54 prasanthp joined #gluster
11:58 bfoster joined #gluster
11:58 Pavid7 joined #gluster
12:05 itisravi_ joined #gluster
12:08 Elico1 joined #gluster
12:14 gmcwhistler joined #gluster
12:14 glusterbot New news from newglusterbugs: [Bug 1049981] 3.5.0 Tracker <https://bugzilla.redhat.com/show_bug.cgi?id=1049981>
12:15 ctria joined #gluster
12:18 Ark joined #gluster
12:32 aravindavk joined #gluster
12:35 RameshN_ joined #gluster
12:50 B21956 joined #gluster
12:50 japuzzo joined #gluster
12:56 tdasilva joined #gluster
13:02 Guest61551 joined #gluster
13:11 chirino joined #gluster
13:13 benjamin_____ joined #gluster
13:18 ctria joined #gluster
13:19 bennyturns joined #gluster
13:20 dusmant joined #gluster
13:20 robo joined #gluster
13:22 edward1 joined #gluster
13:28 jtux joined #gluster
13:29 theron joined #gluster
13:33 rwheeler joined #gluster
13:41 RayS joined #gluster
13:43 robo joined #gluster
13:44 kaptk2 joined #gluster
14:01 jtux joined #gluster
14:01 robos joined #gluster
14:05 X3NQ joined #gluster
14:13 rpowell joined #gluster
14:18 seapasulli joined #gluster
14:22 Pavid7 joined #gluster
14:22 YazzY joined #gluster
14:25 Elico joined #gluster
14:26 criticalhammer joined #gluster
14:27 hagarth joined #gluster
14:29 calum_ joined #gluster
14:31 criticalhammer Is it common for the .glusterfs directory to become cluttered with folders and take up a few 100 megs of space?
14:31 criticalhammer is there a need to clean up the .glusterfs directory in each volume?
14:32 rahulcs joined #gluster
14:35 lalatenduM criticalhammer, yup it is fine, plz dont clean it
14:35 lalatenduM criticalhammer, it has glusterfs gfid information
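
A minimal sketch of what lalatenduM describes, with a made-up brick path and gfid: every file on a brick carries a trusted.gfid xattr, and the entries under .glusterfs are hard links named after that gfid, which self-heal and gfid-based lookups depend on.

    # read the gfid of a file directly on the brick (not through the mount)
    getfattr -n trusted.gfid -e hex /export/brick1/docs/file.txt
    # trusted.gfid=0x9a3c71e0c2d94b0f8a1d2e3f4a5b6c7d
    # the same file is reachable via its gfid hard link:
    ls -li /export/brick1/docs/file.txt \
           /export/brick1/.glusterfs/9a/3c/9a3c71e0-c2d9-4b0f-8a1d-2e3f4a5b6c7d
    # both paths report the same inode number; deleting the .glusterfs entries
    # breaks the gfid index rather than freeing real data
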
14:36 jobewan joined #gluster
14:37 ndk joined #gluster
14:38 criticalhammer good to know, thanks lalatenduM
14:39 rahulcs joined #gluster
14:44 Pavid7 joined #gluster
14:51 sroy joined #gluster
14:55 lmickh joined #gluster
14:56 Guest61551 joined #gluster
14:58 wrale_ joined #gluster
14:59 failshell joined #gluster
15:01 ira joined #gluster
15:01 benjamin_____ joined #gluster
15:04 alan113696 joined #gluster
15:05 FarbrorLeon joined #gluster
15:05 chirino joined #gluster
15:07 Pavid7 joined #gluster
15:12 jbrooks joined #gluster
15:19 Slash joined #gluster
15:25 harish_ joined #gluster
15:27 jrcresawn_ semiosis: Thanks for your advice on Friday.
15:28 robo joined #gluster
15:28 rahulcs joined #gluster
15:34 andreask joined #gluster
15:43 tokik joined #gluster
15:46 azmeuk joined #gluster
15:47 tokik joined #gluster
15:51 tokik_ joined #gluster
15:53 jag3773 joined #gluster
15:53 vpshastry joined #gluster
15:53 criticalhammer So to add to my question earlier, when I do a du on a volume the used amount seems to be off. I've noticed over time that a volume will begin filling up for no reason. Is this common?
15:54 criticalhammer also when I do a df -h
15:55 criticalhammer clutter seems to be an issue with global file systems, and gluster doesn't seem immune.
16:00 aravindavk joined #gluster
16:15 FarbrorLeon joined #gluster
16:19 azmeuk Hi, I have some dumb questions about GlusterFS. The documentation encourages asking on Q&A forums and IRC, so I'm doing both :)
16:19 azmeuk My first question is "Can GlusterFS notifies after synchronizations?" http://stackoverflow.com/questions/22459192/can-glusterfs-notifies-after-synchronizations
16:19 azmeuk My second one is "Can GlusterFS clients synchronize subdirectories" http://stackoverflow.com/questions/22459717/can-glusterfs-clients-synchronize-subdirectories
16:19 azmeuk Thank you for your help!
16:19 glusterbot Title: filesystems - Can GlusterFS notifies after synchronizations? - Stack Overflow (at stackoverflow.com)
16:19 glusterbot Title: synchronization - Can GlusterFS clients synchronize subdirectories - Stack Overflow (at stackoverflow.com)
16:23 rahulcs joined #gluster
16:26 semiosis azmeuk: answered, http://stackoverflow.com/a/22459874/3429709
16:27 Mo__ joined #gluster
16:29 hagarth joined #gluster
16:30 azmeuk Thank you.
16:31 semiosis yw
16:37 semiosis azmeuk: i'd prefer chatting here
16:37 semiosis rather than stackexchange
16:37 azmeuk Ok
16:38 azmeuk Well, let's say you copy a large file onto a GlusterFS filesystem. Is there some delay before the file is sent to the server?
16:39 hagarth1 joined #gluster
16:39 azmeuk Do you mean there is no local cache of the file?
16:39 semiosis no
16:39 semiosis correct, no local cache
16:39 semiosis the client mount point does all operations over the network
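
To make that concrete, a small sketch with assumed names (server1, volume gv0): the native FUSE client mounts the volume straight over the network, so writes go to the bricks as they happen rather than into a local cache that syncs later.

    # mount the volume with the native client
    mount -t glusterfs server1:/gv0 /mnt/gv0
    # copying a large file returns only once the data has been written out to
    # the bricks over the network; nothing is staged locally first
    cp /tmp/bigfile /mnt/gv0/
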
16:40 azmeuk I see.
16:40 azmeuk GlusterFS might not be what I need then.
16:42 ndevos azmeuk: I think coda has a very disconnected mode, maybe that is more suitable for your use-case?
16:42 azmeuk I will look at this, thanks.
16:47 ndk joined #gluster
16:49 zerick joined #gluster
17:00 rahulcs joined #gluster
17:06 chirino joined #gluster
17:07 rpowell1 joined #gluster
17:18 sroy joined #gluster
17:21 vpshastry joined #gluster
17:29 kkeithley1 joined #gluster
17:35 rwheeler joined #gluster
17:37 hybrid512 joined #gluster
17:38 hybrid512 joined #gluster
17:42 vpshastry left #gluster
17:42 sputnik13 joined #gluster
17:51 robo joined #gluster
17:59 lalatenduM joined #gluster
18:06 vpshastry joined #gluster
18:11 robo joined #gluster
18:13 vpshastry left #gluster
18:20 jbustos joined #gluster
18:22 jag3773 joined #gluster
18:26 alan113696 I want to run compute jobs on gluster nodes where files are on local - is there an API to map a file in global namespace to a gluster node machine name?
18:29 Elico left #gluster
18:35 rahulcs joined #gluster
18:38 chirino joined #gluster
18:39 semiosis @path
18:39 glusterbot semiosis: I do not know about 'path', but I do know about these similar topics: 'path or prefix', 'path-or-prefix', 'pathinfo'
18:39 semiosis alan113696: ,,(pathinfo)
18:39 glusterbot alan113696: find out which brick holds a file with this command on the client mount point: getfattr -d -e text -n trusted.glusterfs.pathinfo /client/mount/path/to.file
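
For example, with an assumed mount point and volume (the exact output format varies slightly by version):

    getfattr -d -e text -n trusted.glusterfs.pathinfo /mnt/gv0/data/report.csv
    # # file: mnt/gv0/data/report.csv
    # trusted.glusterfs.pathinfo="(<DISTRIBUTE:gv0-dht>
    #     <POSIX(/export/brick1):server2:/export/brick1/data/report.csv>)"
    # i.e. the file lives on server2's brick /export/brick1, so a compute job
    # wanting local reads would be scheduled on server2
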
18:40 sprachgenerator joined #gluster
18:51 rahulcs joined #gluster
18:52 abyss^ joined #gluster
18:55 alan113696 thanks
19:00 diegows joined #gluster
19:04 zerick joined #gluster
19:09 chirino joined #gluster
19:12 rahulcs joined #gluster
19:17 rotbeard joined #gluster
19:18 robo joined #gluster
19:23 y4m4 joined #gluster
19:25 lalatenduM joined #gluster
19:25 SFLimey joined #gluster
19:26 SFLimey How do I change the number of replicas for a gluster vol? I have four peers and want to change my replica count from 2 to 4.
19:26 semiosis SFLimey: using the add-brick replica N syntax
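
The general shape of that syntax, with made-up names, is shown below for the simplest case of growing a plain replica 2 volume to replica 3 by adding one brick (SFLimey's 2x2 layout turns out to be trickier, as discussed next):

    gluster volume add-brick gv0 replica 3 server3:/export/brick1
    # then populate the new brick with a full self-heal
    gluster volume heal gv0 full
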
19:27 SFLimey Do I actually have to add another brick?
19:27 semiosis yes, two in fact, if you're increasing replica count from 2 to 4
19:28 SFLimey Well I already have four peers in the gluster cluster, but it's currently set to 2 replicas
19:28 semiosis what are you trying to do?
19:28 SFLimey Basically want the data replicated to all four nodes.
19:29 semiosis then yes, you need to change the replica count & provide the two new brick addresses for the two new servers
19:30 SFLimey So to have four replicas I need to have six servers? Sorry a little confused.
19:30 semiosis no
19:30 semiosis please ,,(pasteinfo)
19:30 glusterbot Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
19:31 SFLimey http://ur1.ca/gvez5
19:31 glusterbot Title: #86163 Fedora Project Pastebin (at ur1.ca)
19:31 kkeithley_ he's got a four server 2x2 setup that he wants to change to a four server replica 4 setup
19:32 robos joined #gluster
19:32 SFLimey Yeah, so basically all the data on all four servers.
19:32 semiosis ah that's kind of a pickle
19:32 semiosis right now you have 50% of data on servers 1&2, the other half on servers 3&4
19:33 SFLimey Right, so for AWS I'm having to snapshot multiple volumes to get a full backup. Space isn't an issue, so I'm not trying to save money.
19:33 zaitcev joined #gluster
19:34 SFLimey I thought changing the replicas to 4 would accomplish that.
19:34 semiosis well, there's (at least) two ways to do it
19:37 semiosis you might just want to create a whole new volume with replica 4 to begin with and copy the data from old vol to new vol
19:37 semiosis that's probably the overall easiest & safest way
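
A rough sketch of that approach, with assumed hostnames and brick paths:

    gluster volume create gv0-r4 replica 4 \
        server1:/export/r4brick server2:/export/r4brick \
        server3:/export/r4brick server4:/export/r4brick
    gluster volume start gv0-r4
    # mount both volumes on a client and copy the data across, e.g.
    #   mount -t glusterfs server1:/gv0    /mnt/old
    #   mount -t glusterfs server1:/gv0-r4 /mnt/new
    #   cp -a /mnt/old/. /mnt/new/
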
19:41 nage joined #gluster
19:41 nage joined #gluster
19:44 DV joined #gluster
19:50 robo joined #gluster
19:55 robo joined #gluster
20:08 _dist joined #gluster
20:12 Pavid7 joined #gluster
20:19 robo joined #gluster
20:27 robo joined #gluster
20:29 tdasilva left #gluster
20:29 DV joined #gluster
20:31 zerick joined #gluster
20:36 lmickh joined #gluster
20:36 coredump joined #gluster
20:37 coredump_ joined #gluster
20:37 coredump_ number of bricks is not a multiple of replica count
20:38 coredump_ I have 5 servers and am trying to use replica 2
20:38 coredump_ that's not going to work, apparently. What are my options?
20:39 semiosis just use 4 of the servers
20:39 semiosis best option imho
20:41 Pavid7 joined #gluster
20:42 jiffe98 alright, I tried upgrading one of my nodes from 3.3.1 to 3.4.2 and it messed something up
20:42 coredump_ I am thinking of having multiple bricks and making each of these use 4 servers. Like 1 brick on 1-4, another on 2-5 and another on 1,3,4,5
20:42 coredump_ semiosis: ^
20:43 kkeithley_ you could create two bricks on each server and then do "volume chaining"
20:45 semiosis coredump_: do whatever you want, but my advice is always have number of servers a multiple of your replica count
20:45 jiffe98 it's a 2x2 setup; I upgraded node 4. Nodes 1 and 2, which are replicas, seem ok but they don't see the other side; node 3 sees only itself, and node 4, which I upgraded, sees no volumes
20:45 semiosis as i've said before, number of bricks a multiple of replica count is a requirement, number of *servers* a multiple of replica count is a best practice
20:46 semiosis (although not strictly required)
20:46 semiosis jiffe98: see ,,(3.3 upgrade notes) upgrading from pre-3.3 to 3.3 or later requires downtime iirc
20:46 glusterbot jiffe98: http://vbellur.wordpress.com/2012/05/31/upgrading-to-glusterfs-3-3/
20:47 coredump_ if I do like I suggested, can I add a new server later and expand the replicas to it?
20:47 semiosis idk, can you? :)
20:47 jiffe98 semiosis: that's upgrading to 3.3, I'm upgrading from 3.3 to 3.4, I'm following http://vbellur.wordpress.com/2013/07/15/upgrading-to-glusterfs-3-4/
20:48 semiosis jiffe98: oops, i saw 3.3.1 as 3.1
20:48 coredump_ I guess I will just have to test :)
20:48 coredump_ kkeithley_: what's this chaining thing that I can't find on the docs
20:49 coredump left #gluster
20:50 semiosis jiffe98: the ,,(ports) might have changed, check your iptables
20:50 glusterbot jiffe98: glusterd's management port is 24007/tcp and 24008/tcp if you use rdma. Bricks (glusterfsd) use 24009 & up for <3.4 and 49152 & up for 3.4. (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111 and 2049 since 3.4.
20:50 semiosis indeed, ports changed with 3.4
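
jiffe98's hosts have no local firewall, but for reference, iptables rules matching glusterbot's port list for 3.4 would look roughly like this (the brick range depends on how many bricks each server runs):

    iptables -A INPUT -p tcp --dport 24007:24008 -j ACCEPT   # glusterd (and rdma)
    iptables -A INPUT -p tcp --dport 49152:49160 -j ACCEPT   # bricks, glusterfs >= 3.4
    iptables -A INPUT -p tcp -m multiport --dports 111,2049 -j ACCEPT   # portmapper, NFS
    iptables -A INPUT -p tcp --dport 38465:38468 -j ACCEPT   # gluster NFS and NLM
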
20:51 kkeithley_ replica pairs  s1b2:s2b1,  s2b2:s3b1,  s3b2:s4b1,  s4b2:s5b1,  s5b2:s1b1.
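
Spelled out as a volume create command (hostnames s1..s5 and brick paths are placeholders), the chained layout kkeithley_ lists would look something like this:

    gluster volume create chained-vol replica 2 \
        s1:/export/b2 s2:/export/b1 \
        s2:/export/b2 s3:/export/b1 \
        s3:/export/b2 s4:/export/b1 \
        s4:/export/b2 s5:/export/b1 \
        s5:/export/b2 s1:/export/b1
    # every server carries two bricks and each adjacent pair of servers forms
    # one replica set, so all five servers are used with replica 2
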
20:51 jiffe98 no firewalls on these machines, the firewall sits outside this network
20:52 jiffe98 my biggest concern right now is my cluster seems incomplete, half of it is missing
20:53 semiosis jiffe98: is this "node" only a server or is it a client (too)?
20:54 semiosis should always upgrade all servers before any clients
20:54 jiffe98 semiosis: only a server, haven't touched the clients yet
20:55 semiosis what do you mean half is missing?  what output is different from what you expect?
20:55 jiffe98 my view of this is via the gluster client; volume info shows the right config on nodes 1-3, but nodes 1 and 2 only see 1 and 2 connected, and node 3 only sees node 3 connected
20:56 jiffe98 volume status only shows 1,2 connected on 1,2 and 3 connected on node 3
20:56 semiosis idk what you mean "only sees" please pastie command outputs and/or logs
20:56 jiffe98 yeah, one sec
20:56 Elico joined #gluster
20:57 badone joined #gluster
20:58 theron joined #gluster
20:59 robo joined #gluster
20:59 jiffe98 http://nsab.us/public/gluster
21:00 jiffe98 the top one is the output as seen on node 1 and 2, the bottom one is the output as seen on node 3
21:03 semiosis if you make a new client mount, does it connect to all bricks successfully?
21:03 semiosis i hope it's just an issue with the volume status command
21:04 semiosis could also try restarting glusterd on all four servers
21:05 theron joined #gluster
21:06 jiffe98 how do I tell if the client is connected to all 3?
21:06 jiffe98 node 4 is the upgraded one and seems to be the catalyst for the current issues, so it's down
21:07 [o__o] left #gluster
21:10 [o__o] joined #gluster
21:11 andreask joined #gluster
21:11 jiffe98 files that only exist on node3 appear in the client so I'm guessing it is connecting to it
21:12 chirino joined #gluster
21:13 Matthaeus joined #gluster
21:14 jiffe98 so here's something interesting
21:14 jiffe98 when I start glusterd on node4, node3 then says 'Unable to obtain volume status information.' when I try to do a volume status
21:15 jiffe98 and node4 sees no volumes
21:15 semiosis jiffe98: client log will show connection attempts & success or failure for all bricks in the volume, no need to guess
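
One way to check that on the client, assuming a mount at /mnt/gv0 (the client log file is named after the mount point, with '/' turned into '-'):

    grep -iE "connected to|disconnected|failed" /var/log/glusterfs/mnt-gv0.log
    # a healthy client logs one "Connected to <host:port>, attached to remote
    # volume ..." line per brick shortly after mounting
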
21:19 jiffe98 yeah it connected to all 3
21:23 jiffe98 I upgraded another cluster from 3.3.1 to 3.4.1 a while back and didn't run into issues so not sure what's going on with node4 here
21:24 jiffe98 the other gluster wasn't setup the same way though
21:25 jiffe98 I configured this cluster with --sysconfdir=/local/gluster/etc --localstatedir=/local/gluster/var in 3.3.1
21:33 yinyin joined #gluster
21:34 FarbrorLeon joined #gluster
21:37 semiosis hmm then perhaps you need to move those files/dirs to new locations, probably /etc/... & /var/...
21:39 jiffe98 yeah seems something changed, moving /local/gluster/var/lib/glusterd/* to /var/lib/glusterd starts it up now without things going down
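
For anyone hitting the same thing, the move jiffe98 describes is roughly the following (assuming the old --localstatedir layout; stop the daemon first):

    service glusterd stop
    cp -a /local/gluster/var/lib/glusterd/. /var/lib/glusterd/
    service glusterd start
    # alternatively, "option working-directory" in glusterd.vol can point the
    # daemon at a non-default state directory instead of moving the files
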
21:39 jiffe98 volume status still looks funky but it works
21:39 rwheeler joined #gluster
21:42 semiosis glad you pointed out that you relocated stuff, i'd never have guessed!
21:44 jiffe98 yeah that is an antiquated setup from when we were messing with network boot anyway so no need to have it that way
21:46 fidevo joined #gluster
21:51 zerick joined #gluster
22:06 diegows joined #gluster
22:13 qdk joined #gluster
22:14 chirino joined #gluster
22:25 kkeithley joined #gluster
22:25 bazzles joined #gluster
22:32 jrcresawn joined #gluster
22:35 yinyin joined #gluster
22:39 robo joined #gluster
22:48 kam270 joined #gluster
22:52 zerick joined #gluster
23:01 kam270 joined #gluster
23:01 badone joined #gluster
23:06 lmickh joined #gluster
23:06 zerick joined #gluster
23:09 kam270 joined #gluster
23:17 jag3773 joined #gluster
23:17 kam270 joined #gluster
23:18 Faed joined #gluster
23:25 kam270 joined #gluster
23:27 gdubreui joined #gluster
23:33 kam270 joined #gluster
23:36 zerick joined #gluster
23:41 kam270 joined #gluster
23:45 Ark joined #gluster
23:49 kam270 joined #gluster
23:57 kam270 joined #gluster
