IRC log for #gluster, 2014-08-21

All times are shown in UTC.

Time Nick Message
01:04 topshare joined #gluster
01:07 topshare_ joined #gluster
01:09 diegows joined #gluster
01:30 vimal joined #gluster
01:36 topshare joined #gluster
01:38 plarsen joined #gluster
01:47 ilbot3 joined #gluster
01:47 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:51 topshare joined #gluster
01:55 topshare_ joined #gluster
02:13 haomaiwa_ joined #gluster
02:17 m0zes joined #gluster
02:19 ir8 joined #gluster
02:19 ir8 Anyone around right now?
02:20 ir8 I have a question about reads and writes from a glusterfs using apache.
02:20 ir8 Is there something I need to tweak or an apache module I need to add?
02:21 ir8 2686 root      20   0  313m  66m 2752 S  47.5  1.7   6:06.69 glusterfs
02:22 ir8 is there caching I need to enable?
02:23 sputnik13 joined #gluster
02:27 topshare_ How to use SSD cache to improve GlusterFS performance?
02:28 topshare_ FlashCache is not good for GlusterFS
02:35 kshlm joined #gluster
02:36 topshare joined #gluster
02:44 bharata-rao joined #gluster
03:03 haomaiw__ joined #gluster
03:11 harish_ joined #gluster
03:12 bala joined #gluster
03:19 nbalachandran joined #gluster
03:27 rejy joined #gluster
03:42 necrogami joined #gluster
03:46 sputnik13 joined #gluster
03:49 sputnik13 joined #gluster
03:51 sputnik13 joined #gluster
03:51 bala joined #gluster
03:55 itisravi joined #gluster
04:06 kanagaraj joined #gluster
04:07 sputnik13 joined #gluster
04:08 RameshN joined #gluster
04:10 kshlm joined #gluster
04:17 glusterbot New news from newglusterbugs: [Bug 1129939] NetBSD port <https://bugzilla.redhat.com/show_bug.cgi?id=1129939>
04:24 shubhendu_ joined #gluster
04:24 prasanth_ joined #gluster
04:24 nishanth joined #gluster
04:30 spandit joined #gluster
04:43 atinmu joined #gluster
04:44 ndarshan joined #gluster
04:47 kdhananjay joined #gluster
04:49 anoopcs joined #gluster
04:50 bala joined #gluster
04:52 Rafi_kc joined #gluster
04:54 lalatenduM joined #gluster
05:00 atalur joined #gluster
05:02 hagarth joined #gluster
05:09 nbalachandran joined #gluster
05:12 kumar joined #gluster
05:15 ir8 joined #gluster
05:17 topshare joined #gluster
05:21 spandit joined #gluster
05:26 meghanam joined #gluster
05:26 JoeJulian @later tell _dist The, "Well there's your problem..." cliche was mostly tongue in cheek, a la Mythbusters. When you're next online, lay out the problem and I'll take a look when I can. I'm on vacation and my wife seems to think that means I'm supposed to spend time with her.
05:26 glusterbot JoeJulian: The operation succeeded.
05:26 nshaikh joined #gluster
05:30 raghu` joined #gluster
05:34 topshare joined #gluster
05:45 glusterbot New news from resolvedglusterbugs: [Bug 764655] NetBSD port <https://bugzilla.redhat.com/show_bug.cgi?id=764655>
05:46 spandit joined #gluster
05:49 kanagaraj_ joined #gluster
05:50 jiffin joined #gluster
05:56 ndarshan joined #gluster
05:58 pradeepto_ joined #gluster
05:58 pradeepto_ joined #gluster
05:59 bala joined #gluster
06:00 nishanth joined #gluster
06:06 sputnik13 joined #gluster
06:13 karnan joined #gluster
06:21 kanagaraj_ joined #gluster
06:30 ppai joined #gluster
06:35 rastar joined #gluster
06:36 ndarshan joined #gluster
06:44 nishanth joined #gluster
06:51 JoeJulian Thilam|work: Meh, there's not that much to tell. The only use I've actually encountered is keeping a server with 60 bricks from using up all the memory on the box.
06:52 agen7seven joined #gluster
06:54 ron-slc joined #gluster
06:59 latha joined #gluster
06:59 kanagaraj joined #gluster
07:00 Thilam|work hi
07:00 glusterbot Thilam|work: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
07:00 Thilam|work sorry, was expecting a reply in the pv chat :p
07:00 ekuric joined #gluster
07:00 Thilam|work [16:51] <Thilam|work> I have seen there is 3 parameters : performance.cache-max-file-size, performance.cache-min-file-size and performance.cache-size
07:00 Thilam|work [16:51] <Thilam|work> to set the cache size
07:00 Thilam|work [16:52] <Thilam|work> the first two are about write cache and the last is for read cache (as I understand it)
07:00 Thilam|work [16:52] <Thilam|work> I would like to understand how does it work
07:00 Thilam|work [16:53] <Thilam|work> cache uses the RAM of the servers
07:00 Thilam|work [16:54] <Thilam|work> If I put 4GB for performance.cache-size and I have 2 servers, I need 2GB of dedicated memory on each server ?
07:01 Thilam|work that was my yesterday question :p
07:03 nshaikh joined #gluster
07:08 ctria joined #gluster
07:10 deepakcs joined #gluster
07:12 JoeJulian I'm pretty sure that you would need at least 4GB per server.
07:12 JoeJulian Sorry, topical questions I always answer in channel.
07:15 qdk joined #gluster
07:17 kanagaraj joined #gluster
07:18 aravindavk joined #gluster
07:25 bala joined #gluster
07:26 andreask joined #gluster
07:32 hagarth joined #gluster
07:38 Thilam|work JoeJulian: my question is more about the way it works. Is all the cache taken from the RAM of each server?
07:39 Thilam|work Is a cache size of 4GB for the whole pool of servers, so it gets divided by the number of servers?
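A minimal sketch of how those options are set, with a hypothetical volume name myvol and example values; as far as I know all three belong to the client-side io-cache translator, so the configured cache is used per mounting process rather than being split across the servers:

    gluster volume set myvol performance.cache-size 4GB             # io-cache read cache per process
    gluster volume set myvol performance.cache-max-file-size 10MB   # only cache files up to this size
    gluster volume set myvol performance.cache-min-file-size 0      # ...and no smaller than this
    gluster volume info myvol                                       # changed options appear under "Options Reconfigured"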
07:42 kke is there some nice way to check if a glusterfs mount is healthy?
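One way to sanity-check a mount, sketched with a hypothetical volume myvol mounted at /mnt/myvol:

    stat -f /mnt/myvol              # a dead FUSE mount typically errors with "Transport endpoint is not connected"
    gluster volume status myvol     # bricks, NFS server and self-heal daemon should all show as online
    gluster volume heal myvol info  # lists entries still waiting to be healed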
07:42 Frank77 joined #gluster
07:48 bala joined #gluster
08:03 sage joined #gluster
08:04 kanagaraj joined #gluster
08:05 liquidat joined #gluster
08:07 bala joined #gluster
08:15 aravindavk joined #gluster
08:28 hagarth joined #gluster
08:28 kshlm joined #gluster
08:46 Pupeno joined #gluster
08:52 ctria joined #gluster
08:58 kanagaraj joined #gluster
09:14 agen7seven joined #gluster
09:16 edward1 joined #gluster
09:23 aravindavk joined #gluster
09:26 atalur joined #gluster
09:27 agen7seven left #gluster
09:31 Slashman joined #gluster
09:33 ricky-ti1 joined #gluster
09:33 topshare joined #gluster
09:37 kanagaraj joined #gluster
09:47 meghanam joined #gluster
09:48 glusterbot New news from newglusterbugs: [Bug 1117822] Tracker bug for GlusterFS 3.6.0 <https://bugzilla.redhat.com/show_bug.cgi?id=1117822>
09:49 topshare joined #gluster
09:55 LebedevRI joined #gluster
10:05 calum_ joined #gluster
10:13 bharata_ joined #gluster
10:30 bharata_ joined #gluster
10:36 harish_ joined #gluster
10:36 bharata__ joined #gluster
10:40 spandit joined #gluster
10:40 ctria joined #gluster
10:42 topshare joined #gluster
10:44 aravindavk joined #gluster
10:48 ThatGraemeGuy joined #gluster
10:52 hagarth joined #gluster
11:04 kanagaraj_ joined #gluster
11:08 saurabh joined #gluster
11:09 kanagaraj joined #gluster
11:09 mojibake joined #gluster
11:16 chirino joined #gluster
11:26 kanagaraj joined #gluster
11:31 diegows joined #gluster
11:45 ekuric joined #gluster
11:47 shubhendu_ joined #gluster
11:48 todakure joined #gluster
11:49 meghanam joined #gluster
12:00 Slashman_ joined #gluster
12:03 hagarth joined #gluster
12:17 firemanxbr joined #gluster
12:19 glusterbot New news from newglusterbugs: [Bug 1132465] [FEAT] Trash translator <https://bugzilla.redhat.com/show_bug.cgi?id=1132465>
12:19 itisravi_ joined #gluster
12:19 rastar joined #gluster
12:27 meghanam joined #gluster
12:29 foobar joined #gluster
12:33 DV__ joined #gluster
12:34 tty00 joined #gluster
12:35 kanagaraj_ joined #gluster
12:44 ira joined #gluster
12:49 glusterbot New news from newglusterbugs: [Bug 1130023] [RFE] Make I/O stats for a volume available at client-side <https://bugzilla.redhat.com/show_bug.cgi?id=1130023>
12:49 RameshN joined #gluster
12:53 LHinson joined #gluster
12:55 LHinson1 joined #gluster
13:03 mariusp joined #gluster
13:03 bene2 joined #gluster
13:04 theron joined #gluster
13:05 julim joined #gluster
13:05 Pupeno_ joined #gluster
13:17 cjhanks joined #gluster
13:17 cjhanks left #gluster
13:18 Rafi_kc humble++
13:18 glusterbot Rafi_kc: humble's karma is now 1
13:19 glusterbot New news from newglusterbugs: [Bug 1132496] Tests execution results, test failures and system hang <https://bugzilla.redhat.com/show_bug.cgi?id=1132496>
13:19 mojibake1 joined #gluster
13:27 jobewan joined #gluster
13:31 sputnik13 joined #gluster
13:32 deepakcs joined #gluster
13:34 sas joined #gluster
13:35 bala joined #gluster
13:36 kanagaraj joined #gluster
13:38 sputnik13 joined #gluster
13:46 vikumar joined #gluster
13:49 kevein joined #gluster
13:50 sputnik13 joined #gluster
13:52 diegows joined #gluster
13:56 kanagaraj joined #gluster
13:57 rastar joined #gluster
13:58 kanagaraj_ joined #gluster
13:58 B21956 joined #gluster
14:02 halfinhalfout joined #gluster
14:04 chirino_m joined #gluster
14:05 kanagaraj_ joined #gluster
14:05 wushudoin joined #gluster
14:12 ThatGraemeGuy joined #gluster
14:13 andreask joined #gluster
14:14 ThatGraemeGuy joined #gluster
14:16 mojibake joined #gluster
14:18 spechal joined #gluster
14:18 spechal I seem to have one node in a cluster where the /tmp partition gets filled with tmpXXXXXX files.  Has anyone else experienced this?
14:18 spechal I have about 12 gluster boxes setup in multiple clusters and it’s just this one box.
14:19 gmcwhistler joined #gluster
14:22 tdasilva joined #gluster
14:23 siXy spechal: are you using ext4?
14:23 spechal siXy: yes
14:24 spechal ext2 on the /tmp partition though
14:24 siXy ext2?
14:24 * siXy checks the calendar
14:24 spechal lol
14:24 spechal simply done because no one cared about journaling on the /tmp partition
14:26 sputnik13 joined #gluster
14:26 saurabh joined #gluster
14:29 siXy yeah I was going to say that I had issues with weird files appearing when I tried to use ext4 instead of xfs as a data mount for gluster, but a) I think that's fixed, and b) I'm assuming that /tmp is not your gluster mount
14:29 siXy so... dunno.
14:32 theron joined #gluster
14:37 sputnik13 joined #gluster
14:39 kanagaraj joined #gluster
14:43 bennyturns joined #gluster
14:45 sputnik13 joined #gluster
15:00 ctria joined #gluster
15:07 sputnik13 joined #gluster
15:10 sputnik13 joined #gluster
15:12 mariusp joined #gluster
15:20 kshlm joined #gluster
15:40 shubhendu_ joined #gluster
15:43 ctria joined #gluster
15:52 _Bryan_ joined #gluster
15:53 sputnik13 joined #gluster
15:54 harish joined #gluster
15:55 theron joined #gluster
15:55 plarsen joined #gluster
15:56 necrogami joined #gluster
15:58 saurabh joined #gluster
16:01 daMaestro joined #gluster
16:04 PeterA joined #gluster
16:04 PeterA how do i deal with those heal-failed?
16:05 PeterA i usually just restart glusterfs-server
16:05 PeterA any better way to do that?
16:14 dastar joined #gluster
16:14 mariusp joined #gluster
16:23 PeterA Anyone here can tell me the best practice to handle heal-failed?
16:24 Frank77 Are your files in a split-brain state ?
16:25 PeterA no
16:25 PeterA just heal-failed
16:25 PeterA and the real file is already gone
16:25 PeterA meaning got renamed or deleted
16:26 zerick joined #gluster
16:26 Frank77 So the file to be healed is supposed to have been deleted during the healing process by a client ?
16:27 PeterA yes
16:27 PeterA i normally just restart glusterfs-server on the node that have heal-failed
16:27 PeterA i wonder if there is a better way to do it
16:28 PeterA and i only have 1 to 2 enteries on heal-failed
16:29 Frank77 Does it try to heal indefinitely if you don't restart it ?
16:29 PeterA how can i tell if it tries to heal indefinitely?
16:29 PeterA the brick log only complained once
16:30 Frank77 you should have many more heal-failed entries. I remember having dozens of them because of low disk space.
16:30 PeterA hmm not in my case
16:30 plarsen joined #gluster
16:31 Frank77 if it stops it means that it does not try to heal the file anymore (I suppose but I'm just a gfs user)
16:31 PeterA ic
16:31 Frank77 Is  your file still present on the other bricks?
16:31 PeterA no
16:32 PeterA it's been deleted
16:33 Frank77 So for me everything is alright. Heal-failed entries are just log events. First, your file needed to be healed, then the heal process started, but one of your clients deleted it, so I suppose the heal process failed and then stopped there, deleting the file on every brick instead of trying to heal it again.
16:34 PeterA ic
16:34 Frank77 So your question is how to clear the log event ?
16:34 PeterA yes
16:34 PeterA or make sure the volume is clean of heal-failed files
16:35 vimal joined #gluster
16:35 Frank77 You could try to see if there are any entries in the heal event logs "gluster volume myvolume info heal" and compare the timestamps
16:35 PeterA ic
16:36 Frank77 To get the chronology
16:36 Frank77 brb
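For reference, in the 3.4/3.5 series the heal queries being described here are spelled like this (volume name hypothetical):

    gluster volume heal myvol info              # entries currently pending heal
    gluster volume heal myvol info heal-failed  # timestamped entries the self-heal daemon could not heal
    gluster volume heal myvol info healed       # recently healed entries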
16:45 madphoenix joined #gluster
16:46 madphoenix hi all, hoping to get some help.  We have a distributed volume, and one of our admins who was trying to be too clever went into the bricks and deleted a folder we no longer needed.  Of course, he didn't understand that this wouldn't actually delete the data because all those files were hardlinked under .glusterfs
16:46 madphoenix So, is there a way to identify the path that a file under .glusterfs used to point to, so we can actually reclaim the space?
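One approach that should work for regular files (directories under .glusterfs are symlinks rather than hardlinks): the gfid file is just another hardlink to the same inode, so it can be matched by inode directly on the brick. Paths below are hypothetical:

    # pick a gfid file on the brick and check its link count
    GFID_FILE=/export/brick1/.glusterfs/ab/cd/abcd1234-0000-0000-0000-000000000000
    ls -li "$GFID_FILE"
    # look for any remaining normal path sharing that inode
    find /export/brick1 -samefile "$GFID_FILE" -not -path '*/.glusterfs/*'
    # a link count of 1 with no match means only the .glusterfs copy is left,
    # which is what keeps holding the space after files are deleted directly on a brick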
16:48 ir8 joined #gluster
16:55 pradeepto_ joined #gluster
16:57 vimal joined #gluster
16:59 LHinson joined #gluster
17:01 theron joined #gluster
17:03 mariusp joined #gluster
17:05 mariusp joined #gluster
17:28 mariusp joined #gluster
17:30 PeterA does anyone have experience with disabling and re-enabling quota on a volume?
17:30 PeterA i got this when i try to disable quota
17:30 PeterA http://pastie.org/9492109
17:30 glusterbot Title: #9492109 - Pastie (at pastie.org)
17:30 PeterA keep getting these  W [client-rpc-fops.c:1037:client3_3_setxattr_cbk] 0-sas01-client-2: remote operation failed: No data available
17:30 PeterA from the quota-crawl.log
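For reference, the quota operations being attempted look like this; the volume name sas01 is taken from the log line above, while the limit path and size are hypothetical:

    gluster volume quota sas01 disable
    gluster volume quota sas01 enable
    gluster volume quota sas01 limit-usage /projects 100GB   # limits have to be re-applied after re-enabling
    gluster volume quota sas01 list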
17:32 plarsen joined #gluster
17:33 Pupeno joined #gluster
17:34 nbalachandran joined #gluster
17:37 lmickh joined #gluster
17:48 spechal left #gluster
17:58 _dist joined #gluster
17:58 _dist Frank77: thanks for posting that trace, hopefully they'll have a reasonable reply. For now I'm just going to run on 3.4.2-1 and use getfattr to check heal status
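A sketch of that getfattr check, assuming a replica volume named myvol with a brick under /export/brick1 (both hypothetical):

    # run on each brick; the trusted.afr.* changelog values show pending operations
    getfattr -d -m . -e hex /export/brick1/path/to/file
    # e.g. trusted.afr.myvol-client-0=0x000000000000000000000000 is clean;
    # non-zero data/metadata/entry counters mean the file still needs healing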
18:03 daMaestro joined #gluster
18:15 diegows joined #gluster
18:31 ir8 Hello all.
18:35 _dist hello
18:35 glusterbot _dist: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
18:43 _dist JoeJulian: thanks, but just enjoy your vacation :) I know it's hard to take them.
18:45 PeterA may i know what could cause this?
18:45 PeterA http://pastie.org/9492261
18:45 glusterbot Title: #9492261 - Pastie (at pastie.org)
18:45 PeterA 0-sas03-marker: failed to build parent loc of <gfid:c715e994-0a9e-452a-97db-31fcdca89657>/bse
18:51 Philambdo joined #gluster
18:57 talntid joined #gluster
18:57 talntid curious if I understand gluster properly..
18:58 talntid lets say I have 2 servers, one in new york, another in san francisco... and i install glusterfs on them..
18:58 talntid if I write files to new york, it automatically syncs to san fran, and vise versa?
18:58 talntid or is it a single write, many read setup, or.. ?
18:59 _dist talntid: depends, if you do a replicate volume yes each write goes to both at the same time, and won't return from an fsync until both are done
19:00 talntid but it doesn't matter which one is written to?
19:00 talntid i'm familiar with drbd, but this seems slightly different?
19:00 _dist in a replicate volume all bricks are written to simultaneously, if you mount one that only happens at mount time
19:00 _dist there is no slave/master
19:01 _dist but, if a brick is unhealthy (healing) you might end up reading from brick 2, even if you mounted brick 1 (because the server knows where the "good" data is)
19:01 talntid gotcha
19:01 talntid does it auto heal?
19:01 _dist a NY to SF link probably isn't feasible with replication though, it'd be pretty slow
19:02 talntid even for website data?
19:02 _dist it does auto-heal, it's called a "crawl" and by default runs every 10 min; there's a daemon called a "self heal daemon" that does it
19:02 talntid gotcha
19:02 talntid is there a big difference between glusterfs and drbd?
19:03 _dist yeap, with gluster you can have distribute or replicate volumes (sort of like network raid), and you can have as many "bricks" (data locations) as you like
19:05 _dist we use it for file shares and vm storage at my company. But for us it's not a geolocation cache scenario (like yours sounds like) it's an HA strategy
19:05 talntid I gotcha
19:05 _dist gluster does also have geo-replicated volumes, but that means recent data will be behind at your "slave" site, and that site would be reading from the "master", probably not local I assume
19:16 _dist talntid: btw, I have experience with replicate volumes, but I've never done a geo-replicate one so what I know about that is only read, not experience. The other stuff I've done thought
19:16 _dist though*
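To make the distinction concrete, a sketch with hypothetical host and volume names; the first form is the synchronous replicate volume discussed above, the second is asynchronous geo-replication (3.5-style syntax):

    # synchronous replication: every write goes to both bricks before returning
    gluster volume create webdata replica 2 nyc1:/export/brick1 sfo1:/export/brick1
    gluster volume start webdata

    # asynchronous geo-replication: a master volume trails changes out to a remote slave volume
    gluster volume geo-replication webdata sfo1::webdata_dr create push-pem
    gluster volume geo-replication webdata sfo1::webdata_dr start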
19:20 julim joined #gluster
19:20 kmai007 joined #gluster
19:20 kmai007 joined #gluster
19:21 rotbeard joined #gluster
19:21 kmai007 friends, I'm on gluster 3.4.2-1 and have a 4-node gluster -> added 2 more nodes for 6.  Started the rebalance process.  What can I look at to see how things are going?
19:22 kmai007 gluster volume status only says "in progress"
19:22 Frank77 _dist : You're welcome. I didn't have much time to match this traces to the perl code. But this is a beginning for anyone who tries to find what's going on with qm and gfs 3.5.2
19:22 kmai007 i suppose i'll tail the /var/log/glusterfs/rebalance.log
19:23 kmai007 but it's moving fast, fast, fast, hard to keep up.......
19:23 Frank77 you could try gluster volume rebalance myvol status
19:24 kmai007 hi-five, its been so long, thanks Frank77
19:24 daMaestro joined #gluster
19:25 kmai007 can some1 tell me what the "skipped" column means
19:25 kmai007 from the rebalance status output
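A sketch of the status query, using the devstatic volume named later in this conversation; as far as I know, "skipped" counts files the rebalance left in place (typically because the destination brick had less free space than the source), and a forced rebalance migrates them anyway:

    gluster volume rebalance devstatic status
    # columns: Node, Rebalanced-files, size, scanned, failures, skipped, status, run time
    gluster volume rebalance devstatic start force   # also migrates files that would otherwise be skipped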
19:29 halfinhalfout1 joined #gluster
19:29 Frank77 Can you see anything related to those skipped files in your logs ?
19:33 kmai007 Frank77: i see what it says in the log but not quite sure what it means
19:33 kmai007 http://fpaste.org/127506/49553140/
19:33 glusterbot Title: #127506 Fedora Project Pastebin (at fpaste.org)
19:34 kmai007 i'll probably have to wait till it finishes and spot check each node to see if it did it correctly
19:35 dberry joined #gluster
19:37 Frank77 I can't figure out exactly what skipped files are, but there seems to be a workaround for this with the "force" option. But I don't know what it means exactly. Are your files in use ? (VM disks etc.)
19:38 kmai007 no VMs
19:38 kmai007 just content storage
19:38 Frank77 k
19:38 kmai007 its not in use
19:40 andreask joined #gluster
19:41 kmai007 i must have done something wrong
19:41 kmai007 when i expanded the volume, was I supposed to do it 1 node at a time
19:41 kmai007 http://gluster.org/community/documentation/index.php/Gluster_3.2:_Expanding_Volumes
19:41 glusterbot Title: Gluster 3.2: Expanding Volumes - GlusterDocumentation (at gluster.org)
19:41 kmai007 i doubled up and did
19:42 kmai007 gluster volume add-brick devstatic host5:/here/here host6:/here/here
19:42 kmai007 looks like now i have the same data on doing the rebalance on host5/host65
19:42 kmai007 host6*
19:42 Frank77 what kind of volume is it ?
19:42 kmai007 distr-rep
19:43 kmai007 i think i did
19:43 kmai007 well, now i know
19:44 plarsen joined #gluster
19:44 Frank77 so you might have expanded it taking into account the number of current bricks
19:44 Frank77 -> " For example, to expand a distributed replicated volume with a replica count of 2, you need to add bricks in multiples of 2 (such as 4, 6, 8, etc.). "
19:45 kmai007 saw that
19:45 kmai007 i think i did it wrong
19:45 kmai007 obviously
19:45 kmai007 the example is for replication
19:45 Frank77 :s
19:46 kmai007 i should probably stop the rebalance
19:46 Frank77 I don't know. What's your config. and how did you expand it exactly ?
19:46 kmai007 this is my volume info
19:47 kmai007 http://fpaste.org/127515/86504191/
19:47 glusterbot Title: #127515 Fedora Project Pastebin (at fpaste.org)
19:47 kmai007 so i want bricks 1,3,5 to have the same data
19:47 kmai007 and then i want 2,4,6 to have the other half
19:48 kmai007 right now it appears i have 5,6 to have the same data as 1
19:48 Frank77 are you sure this is not just hard links ? Let's check it with "ls -lnhs"
19:49 kmai007 yep, its the same
19:50 kmai007 no  hardlinks or DHT pointer
19:51 kmai007 that's what i did wrong
19:51 kmai007 so on the redhat doc. it says i have to specify the replica count when i add-brick
19:52 kmai007 gluster volume add-brick replica 3 test-volume server4:/exp4
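Two different expansion forms are being mixed here; a hedged sketch with hypothetical brick paths. Adding a pair of bricks to a replica-2 volume without the replica keyword creates a new replica pair, so host5 and host6 holding identical data should be expected; the replica keyword is only needed to change the number of copies:

    # add one more replica-2 pair to an existing distributed-replicate volume
    gluster volume add-brick devstatic host5:/export/brick host6:/export/brick

    # by contrast, raising the replica count (the form quoted from the doc), e.g.
    # taking a single replica-2 volume to three copies:
    gluster volume add-brick test-volume replica 3 server4:/exp4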
19:52 Frank77 It might not have finished copying data. I don't know exactly when it is supposed to drop moved files. I'd say right after the copy.
19:52 kmai007 oh true
19:52 kmai007 this is my first go around at rebalancing
19:53 kmai007 imma start over, and detach the peer
19:53 Frank77 it's up to you.
19:54 kmai007 remove-brick works well
19:55 Frank77 Did you lose any data ?
19:55 kmai007 probably
19:56 kmai007 b/c of my rebalance
19:56 kmai007 it warned me that i would
19:56 kmai007 its my R&D area
19:56 Frank77 k
19:57 kmai007 Frank77: so do start "over"
19:57 kmai007 i've removed the bricks,
19:57 kmai007 i should blow away what is in the mounted xfs filesystem of those bricks
19:57 kmai007 clear the getfattr attributes?
19:58 Frank77 yes you need to erase all data (even hidden data such as the .glusterfs directory) or gluster won't let you add new bricks
19:58 kmai007 i think the safe bet is to follow joe's blog
19:58 kmai007 http://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/
19:58 glusterbot Title: GlusterFS: {path} or a prefix of it is already part of a volume (at joejulian.name)
19:59 Frank77 you might just need to delete all files/directories including hidden ones
19:59 kmai007 k
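For reference, the cleanup that blog post describes boils down to roughly this on each brick being recycled (mount point hypothetical):

    # strip the old volume metadata and hidden gfid store so add-brick accepts the path again
    setfattr -x trusted.glusterfs.volume-id /export/brick
    setfattr -x trusted.gfid /export/brick
    rm -rf /export/brick/.glusterfs
    # or simply wipe the brick contents entirely, hidden directories included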
20:01 halfinhalfout joined #gluster
20:03 jobewan joined #gluster
20:06 halfinhalfout1 joined #gluster
20:06 halfinhalfout1 left #gluster
20:10 mariusp joined #gluster
20:11 kmai007 JoeJulian: are you here?
20:14 torbjorn__ joined #gluster
21:20 ThatGraemeGuy joined #gluster
21:24 tru_tru joined #gluster
21:27 plarsen joined #gluster
21:43 agen7seven joined #gluster
21:44 agen7seven left #gluster
21:45 theron joined #gluster
22:03 bennyturns joined #gluster
22:17 Pupeno_ joined #gluster
22:23 Ark_ joined #gluster
22:43 Pupeno joined #gluster
22:52 nated joined #gluster
22:54 nated Hi,  I'm trying to test gluster 3.5 geo-rep on Ubuntu 12.04 using the packages provided via ppa by semiosis
22:54 nated I'm getting an error about missing hook s56glusterd-geo-rep-create-post.sh
22:55 nated http://pastebin.com/NWAw2ZYS
22:55 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
22:57 nated http://fpaste.org/127581/40866181/
22:57 glusterbot Title: #127581 Fedora Project Pastebin (at fpaste.org)
22:58 nated I'm wondering if the hook scripts are not packaged?  They aren't in any of the debs that I can tell
23:40 nated semiosis: it looks like the 3.5 ppa packages are missing a number of geo-replication files, comparing installed files to the rpm spec in git
23:41 marcoceppi joined #gluster
23:41 marcoceppi joined #gluster
23:41 nated semiosis: where can I file a bug?  Red Hat Bugzilla?
23:41 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
