IRC log for #gluster, 2014-01-29

All times shown according to UTC.

Time Nick Message
00:03 davinder joined #gluster
00:10 _pol_ joined #gluster
00:20 wolfador joined #gluster
00:20 KyleG1 joined #gluster
00:20 LessSeen_ is geo-replication deprecated in 3.4?
00:21 JoeJulian Not supposed to be...
00:22 LessSeen_ ok just checking because i got this error: unrecognized word: geo-replication (position 1)
00:22 wolfador possible to mount a gluster volume on client with "usrquota" option? not working for me
00:23 JoeJulian LessSeen_: Did you install glusterfs-geo-replication (assuming rpm based install)?
00:24 LessSeen_ ahh probably not
00:24 JoeJulian wolfador: No. Gluster has its own quota management.
00:25 wolfador JoeJulian: interesting, trying to use it for an OpenShift node and that requires usrquota on the mount point
00:27 LessSeen_ thanks btw!
00:27 JoeJulian You could probably edit mount.glusterfs to ignore the option.
00:27 JoeJulian LessSeen_: you're welcome. :)
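
A sketch of the fix JoeJulian suggests above for an RPM-based install (the volume and slave names are placeholders, not from the log, and a glusterd restart may be needed after installing the package):
    yum install glusterfs-geo-replication
    # after which the CLI should recognise the keyword, e.g.
    gluster volume geo-replication MYVOL slavehost::slavevol start
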
00:27 JoeJulian wolfador: If it works (ignoring the option), you should then file a bug report. :D
00:27 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
00:31 wolfador Thanks, I will poke around there to see if I can get it to work. I got it to accept the option now, but openshift doesn't recognize it as being turned on
00:31 JoeJulian :/
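
Gluster's own quota, mentioned above, is configured from the CLI rather than via mount options; a minimal sketch (volume name and directory are placeholders, and OpenShift would still not see a usrquota-style mount):
    gluster volume quota myvol enable
    gluster volume quota myvol limit-usage /some/dir 10GB
    gluster volume quota myvol list
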
00:42 tdasilva left #gluster
00:42 mattapperson joined #gluster
01:02 andreask joined #gluster
01:04 tokikura joined #gluster
01:07 sprachgenerator joined #gluster
01:08 bala joined #gluster
01:16 kdhananjay joined #gluster
01:24 tokikura joined #gluster
01:45 wica joined #gluster
01:46 tokik joined #gluster
01:48 jmarley joined #gluster
01:50 khushildep_ joined #gluster
01:52 [o__o] left #gluster
01:55 [o__o] joined #gluster
02:16 harish joined #gluster
02:18 m0zes joined #gluster
02:57 zapotah joined #gluster
02:57 zapotah joined #gluster
03:01 bharata-rao joined #gluster
03:11 leochill joined #gluster
03:28 shubhendu joined #gluster
03:30 chirino joined #gluster
03:32 kdhananjay joined #gluster
03:36 vpshastry joined #gluster
03:37 RameshN joined #gluster
03:41 johnmark joined #gluster
03:44 itisravi joined #gluster
03:49 vpshastry left #gluster
03:51 davinder joined #gluster
04:05 prasanth joined #gluster
04:23 shyam joined #gluster
04:26 shylesh joined #gluster
04:32 crazifyngers joined #gluster
04:32 crazifyngers joined #gluster
04:36 kaushal_ joined #gluster
04:45 spandit joined #gluster
04:47 rjoseph joined #gluster
04:50 satheesh1 joined #gluster
04:50 ricky-ti1 joined #gluster
04:51 lalatenduM joined #gluster
04:59 dusmant joined #gluster
05:09 gdubreui joined #gluster
05:09 crazifyngers joined #gluster
05:10 ppai joined #gluster
05:11 nshaikh joined #gluster
05:23 mohankumar__ joined #gluster
05:25 ndarshan joined #gluster
05:25 lalatenduM joined #gluster
05:31 vpshastry joined #gluster
05:38 dusmant joined #gluster
05:42 ricky-ticky1 joined #gluster
05:44 bala joined #gluster
05:45 kanagaraj joined #gluster
05:45 aravindavk joined #gluster
05:50 askb_ joined #gluster
05:51 askb_ joined #gluster
05:54 askb_ joined #gluster
05:57 benjamin__ joined #gluster
06:00 dusmant joined #gluster
06:04 hchiramm_ joined #gluster
06:07 hagarth joined #gluster
06:13 CheRi joined #gluster
06:25 rjoseph joined #gluster
06:26 avati joined #gluster
06:26 hchiramm_ joined #gluster
06:26 a2_ joined #gluster
06:30 a2 joined #gluster
06:46 hchiramm_ joined #gluster
06:47 yosafbridge joined #gluster
06:47 bala joined #gluster
06:48 leochill joined #gluster
06:50 vimal joined #gluster
06:51 davinder joined #gluster
07:05 EWDurbin joined #gluster
07:05 EWDurbin howdy!
07:05 EWDurbin PyPI was deployed over the weekend with GlusterFS acting as the shared file storage for packages and documentation between the web nodes
07:06 EWDurbin it's been performing splendidly save for one minor annoyance.
07:06 raghu joined #gluster
07:06 EWDurbin we're trying to keep incremental backups of the packages and docs at as high a frequency as possible, currently about once an hour
07:06 EWDurbin we're using an rsync based backup utility which rsyncs the mounted volumes to a backup server
07:07 EWDurbin unfortunately this job is taking ~40 minutes to backup 75G
07:07 EWDurbin i'm curious if there are any tips for eking out a bit more performance
07:07 EWDurbin we have a two-node replica set running gluster 3.3.2 on CentOS 6
07:08 EWDurbin nodes have 2G ram and 200G of "cloud" sata storage
07:09 psharma joined #gluster
07:09 T0aD maybe to optimize the rsync
07:09 EWDurbin directory structure is very hierarchical
07:09 EWDurbin T0aD: tool is rdiff-backup
07:09 T0aD i have 12,000 sites down here, i never rsync the whole thing
07:09 EWDurbin which uses librsync
07:09 T0aD yeah im using rdiff-backup as well actually
07:10 EWDurbin :)
07:10 T0aD i made a lil php wrapper http://rbackup.lescigales.org to do it actually :P
07:10 glusterbot Title: rBackup - rdiff-backup + rsnapshot = rbackup ! (at rbackup.lescigales.org)
07:10 EWDurbin rsnapshot was also considered, but i prefer my nodes to push to the backup server
07:10 T0aD but the thing is, i never rsync the /
07:11 T0aD i detect ctime changes then mark for backup
07:11 T0aD ill probably drop rdiff-backup as well
07:11 T0aD maybe instead of rsync you could check out georeplication
07:12 T0aD thats what people here suggested to me for backup purposes
07:12 EWDurbin i looked into it, but we have a geographic replica via standard PyPI mirroring infrastructure that syncs once a minute
07:12 EWDurbin but keeps no increments
07:13 T0aD yeah but i mean, then you can do whatever incrementing locally
07:13 T0aD on the georeplicated box
07:13 T0aD so you dont have to rsync the whole thing
07:13 EWDurbin and the additional cost of a second gluster installation is a bit too much given that we're on donated infr with a cap
07:13 EWDurbin and we don't know how hard that cap is :-x
07:13 T0aD infr with a cap ?
07:14 EWDurbin $$ cap
07:14 T0aD well the only cost is a new gluster install on the backup node i guess ?
07:14 EWDurbin that's a good point
07:14 EWDurbin it doesn't even have to be a replica set i suppose
07:14 EWDurbin that's a good direction T0aD
07:14 T0aD yeah im one of those guys with good point
07:14 T0aD :P
07:15 EWDurbin i'll take a look at rbackup as well
07:17 ndarshan joined #gluster
07:18 EWDurbin thanks again T0aD
07:18 T0aD np
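
A rough sketch of the direction T0aD suggests, i.e. geo-replicate to the backup box and run the incremental backups locally there (all names are placeholders, and the exact slave syntax and ssh key setup depend on the glusterfs version):
    # on the backup host: a plain single-brick volume is enough
    gluster volume create pypibackup backuphost:/export/pypibackup
    gluster volume start pypibackup
    # on one of the master nodes: push the production volume to it
    gluster volume geo-replication pypivol backuphost::pypibackup start
    # then run rdiff-backup on the backup host against /export/pypibackup instead of over the wire
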
07:19 davinder joined #gluster
07:20 brosner joined #gluster
07:21 jtux joined #gluster
07:37 prasanth joined #gluster
07:40 jiqiren joined #gluster
07:42 ngoswami joined #gluster
07:44 hchiramm_ joined #gluster
07:44 samppah T0aD: 12k sites on glusterfs? static or dynamic sites? :)
07:45 T0aD not yet on gluster D:
07:45 T0aD dynamic
07:45 samppah ach :)
07:45 davinder2 joined #gluster
07:46 T0aD once the power of gluster and VMs is unleashed im aiming at 200k
07:46 T0aD muhahaha
07:46 * T0aD is unstoppable
07:46 ctria joined #gluster
07:47 samppah =)
07:48 blook joined #gluster
07:50 samppah on my todo list is to evaluate glusterfs for shared webhosting environment
07:50 samppah but i'm afraid of ,,(php)
07:50 glusterbot (#1) php calls the stat() system call for every include. This triggers a self-heal check which makes most php software slow as they include hundreds of small files. See http://joejulian.name/blog/optimizing-web-performance-with-glusterfs/ for details., or (#2) It could also be worth mounting fuse with glusterfs --attribute-timeout=HIGH --entry-timeout=HIGH --negative-timeout=HIGH
07:50 glusterbot --fopen-keep-cache
07:52 T0aD samppah, wasnt aware of that
07:54 ekuric joined #gluster
07:56 samppah yeh, if performance is bad then we are probably going to host vm on gluster and export nfs from there
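
For reference, the options glusterbot lists above can be passed straight to the fuse client; a sketch with arbitrary timeout values (server, volume and mountpoint are placeholders):
    glusterfs --volfile-server=server1 --volfile-id=webvol \
        --attribute-timeout=600 --entry-timeout=600 --negative-timeout=600 \
        --fopen-keep-cache /var/www
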
07:59 jtux joined #gluster
08:04 ngoswami joined #gluster
08:07 T0aD damn
08:07 T0aD or maybe remove the self heal
08:07 samppah yeah that could do it too
08:08 T0aD i would rather control it manually
08:08 T0aD than to have it launch automatically at times
08:09 ndarshan joined #gluster
08:09 eseyman joined #gluster
08:10 lalatenduM joined #gluster
08:15 franc joined #gluster
08:19 dneary joined #gluster
08:20 lalatenduM joined #gluster
08:21 keytab joined #gluster
08:22 rjoseph1 joined #gluster
08:23 mick27 joined #gluster
08:29 klaxa|work joined #gluster
08:32 mick271 joined #gluster
08:44 saurabh joined #gluster
08:48 meghanam joined #gluster
08:55 bala1 joined #gluster
08:56 dusmant joined #gluster
08:56 kanagaraj joined #gluster
08:59 ndarshan joined #gluster
09:05 dneary joined #gluster
09:08 aravindavk joined #gluster
09:28 RameshN joined #gluster
09:29 aravindavk joined #gluster
09:30 ndarshan joined #gluster
09:46 SteveCooling uhm... silly question maybe. when I run "gluster volume heal MYVOL info" and i get the actual file path in the list, how do i find the gfid of the file so i can find it in the /.glusterfs/ tree ?
09:50 social kkeithley: or any other dev who knows enough about inode.c ? *ping*
09:53 kdhananjay1 joined #gluster
09:54 samppah SteveCooling: http://joejulian.name/blog/what-is-this-new-glusterfs-directory-in-33/ is this what you are looking for?
09:54 glusterbot Title: What is this new .glusterfs directory in 3.3? (at joejulian.name)
09:54 tdasilva joined #gluster
09:54 SteveCooling samppah: yeah... i was already reading on that. sorry for n00bspam :)
09:55 samppah SteveCooling: no problem :)
09:56 samppah another way to do it is to do ls -i on the actual file and then use find /glusterExport -inum inodenumber
09:57 samppah like this: http://pastie.org/8678103
09:57 glusterbot Title: #8678103 - Pastie (at pastie.org)
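
Two ways to get from a file on a brick to its /.glusterfs/ entry, as discussed above (the brick path is a placeholder):
    # read the gfid xattr directly; its first two byte pairs name the .glusterfs/xx/yy/ subdirectories
    getfattr -n trusted.gfid -e hex /export/brick1/path/to/file
    # or match on the inode number, since .glusterfs keeps a hardlink to the same inode
    ls -i /export/brick1/path/to/file
    find /export/brick1/.glusterfs -inum INODENUMBER
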
10:01 ngoswami joined #gluster
10:09 ngoswami_ joined #gluster
10:15 dusmant joined #gluster
10:15 kanagaraj joined #gluster
10:15 tokik joined #gluster
10:22 mgebbe joined #gluster
10:26 6JTAA3X8Q joined #gluster
10:27 harish joined #gluster
10:27 hagarth social: good catch! (for patch 6850)
10:30 social hagarth: there's another one which is so far eluding me :/ see the linked ticket
10:30 ells joined #gluster
10:31 hagarth social: the one in inode.c?
10:33 social yes
10:33 qdk joined #gluster
10:34 hagarth hmm, those should usually be cleaned up by invalidations from the kernel (unless there is a missing unref)
10:35 social well, the reproducer is mkdir and rmdir on the volume; the leak happens in the geo-replication mount
10:44 davinder joined #gluster
10:50 bala joined #gluster
10:55 dneary joined #gluster
10:56 hagarth joined #gluster
11:00 hybrid512 joined #gluster
11:04 dusmant joined #gluster
11:04 rjoseph joined #gluster
11:05 rjoseph1 joined #gluster
11:09 edward2 joined #gluster
11:09 ira joined #gluster
11:27 dusmantkp_ joined #gluster
11:32 CheRi joined #gluster
11:34 klaxa|work joined #gluster
11:39 klaxa|work left #gluster
11:41 Slash joined #gluster
11:45 sks joined #gluster
11:51 CheRi joined #gluster
11:57 dusmantkp_ joined #gluster
12:07 leochill joined #gluster
12:10 itisravi joined #gluster
12:20 shubhendu joined #gluster
12:20 diegows joined #gluster
12:22 CheRi joined #gluster
12:22 ppai joined #gluster
12:22 abyss^ I have a lot of these messages on one of my bricks: http://fpaste.org/72699/09980901/ Any explanation for this strange behavior? The log of that brick grows about 1MB per minute :/
12:22 glusterbot Title: #72699 Fedora Project Pastebin (at fpaste.org)
12:32 ngoswami joined #gluster
12:39 ngoswami_ joined #gluster
12:48 mattappe_ joined #gluster
12:57 rjoseph joined #gluster
12:58 glusterbot New news from newglusterbugs: [Bug 1056406] DHT + add brick : Directory self heal is fixing hash layout for few Directories <https://bugzilla.redhat.com/show_bug.cgi?id=1056406>
13:00 CheRi joined #gluster
13:06 dneary joined #gluster
13:07 mattappe_ joined #gluster
13:12 B21956 joined #gluster
13:21 getup- joined #gluster
13:35 pk1 joined #gluster
13:35 pk1 left #gluster
13:39 hagarth joined #gluster
13:42 mattappe_ joined #gluster
13:42 Peanut Meh, qemu-img gives "Unknown protocol 'gluster://localhost/gv0/test.img'" when trying to make a libgfapi-backed VM :-(
13:43 Peanut I guess my libvirt/qemu haven't been compiled with Gluster support?
13:44 ndevos Peanut: I think you need a pretty new qemu-img for that, mount the volume over fuse and pass the gluster:// url to qemu(-kvm)
13:44 Peanut ndevos: Oh, so I can make my img the 'old' way, and still use libgfapi? That's nice, let me try that.
13:45 ndevos Peanut: yeah, access through libgfapi and fuse can be mixed
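
A sketch of the mixed approach ndevos describes, assuming qemu 1.3+ built with gluster support (volume, path and sizes are placeholders):
    # create the image through the fuse mount
    qemu-img create -f qcow2 /mnt/gv0/test.img 20G
    # a gluster-enabled qemu can then open the same file over libgfapi
    qemu-system-x86_64 -enable-kvm -m 2048 \
        -drive file=gluster://localhost/gv0/test.img,if=virtio,format=qcow2
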
13:46 sroy_ joined #gluster
13:46 Peanut Can I just bring down an existing guest and restart it with the backend changed to libgfapi that way?
13:47 SteveCooling erh.. how do I resolve split-brained / on bricks?
13:48 SteveCooling i'm scared to try something smart without some guidance :)
13:49 chirino joined #gluster
13:50 chirino joined #gluster
13:51 ppai joined #gluster
13:55 Peanut SteveCooling: http://www.joejulian.name/blog/fixing-split-brain-with-glusterfs-33/
13:55 glusterbot Title: Fixing split-brain with GlusterFS 3.3 (at www.joejulian.name)
13:56 SteveCooling Peanut: that article doesn't mention directories in particular. obviously i cannot remove the / directory for it to self-heal
13:57 SteveCooling JoeJulian has another article which mentions this, but no specific way to resolve it
13:57 SteveCooling as far as I can tel...
13:57 SteveCooling s/tel/tell/
13:57 glusterbot What SteveCooling meant to say was: as far as I can tell...
13:57 Peanut Ah, ok. Sorry, can't really help you there, never had a split brain yet *knocks wood*
13:59 hagarth SteveCooling: this might help - https://github.com/gluster/glusterfs/blob/master/doc/split-brain.md
13:59 glusterbot Title: glusterfs/doc/split-brain.md at master · gluster/glusterfs · GitHub (at github.com)
14:00 hagarth @learn
14:00 glusterbot hagarth: (learn [<channel>] <key> as <value>) -- Associates <key> with <value>. <channel> is only necessary if the message isn't sent on the channel itself. The word 'as' is necessary to separate the key from the value. It can be changed to another word via the learnSeparator registry value.
14:00 hagarth @learn split-brain-doc as https://github.com/gluster/glusterfs/blob/master/doc/split-brain.md
14:00 glusterbot hagarth: The operation succeeded.
14:00 hagarth @split-brain-doc
14:00 glusterbot hagarth: https://github.com/gluster/glusterfs/blob/master/doc/split-brain.md
14:00 hagarth glusterbot: thumbs up!
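
For a split-brained directory such as the volume root, which obviously cannot be deleted and recreated, the document above boils down to clearing the AFR changelog xattrs on the replica you decide to distrust and re-triggering a heal; a hedged sketch only (volume name, client index and brick path are placeholders, check the real xattr names with getfattr first):
    getfattr -d -m trusted.afr -e hex /export/brick1/
    # zero the pending counters on the copy you do NOT trust
    setfattr -n trusted.afr.MYVOL-client-1 -v 0x000000000000000000000000 /export/brick1/
    gluster volume heal MYVOL full
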
14:07 blook joined #gluster
14:08 tdasilva joined #gluster
14:10 khushildep joined #gluster
14:12 hybrid512 joined #gluster
14:13 jmarley joined #gluster
14:20 zapotah joined #gluster
14:20 zapotah joined #gluster
14:26 dneary joined #gluster
14:31 abyss^ I have a lot of these messages on one of my bricks: http://fpaste.org/72699/09980901/ Any explanation of this strange behavior? The log of that brick grows about 1MB per minute :/
14:31 glusterbot Title: #72699 Fedora Project Pastebin (at fpaste.org)
14:33 mattappe_ joined #gluster
14:34 blook joined #gluster
14:37 theron joined #gluster
14:39 mattappe_ joined #gluster
14:42 theron joined #gluster
14:46 dbruhn joined #gluster
14:48 kanagaraj joined #gluster
14:51 bennyturns joined #gluster
14:57 ujjain joined #gluster
14:57 benjamin__ joined #gluster
14:57 jdarcy joined #gluster
14:59 glusterbot New news from newglusterbugs: [Bug 1059012] Unable to mount gluster volume with usrquota <https://bugzilla.redhat.com/show_bug.cgi?id=1059012>
14:59 zapotah joined #gluster
14:59 zapotah joined #gluster
15:00 kkeithley gluster community meeting in #gluster-meeting@freenode
15:11 vpshastry joined #gluster
15:11 bugs_ joined #gluster
15:13 vpshastry1 joined #gluster
15:14 plarsen joined #gluster
15:15 jobewan joined #gluster
15:15 vpshastry joined #gluster
15:20 khushildep joined #gluster
15:20 japuzzo joined #gluster
15:26 social hagarth: I still fail to find the missing inode_unref :(
15:31 zapotah joined #gluster
15:36 kaptk2 joined #gluster
15:40 rwheeler joined #gluster
15:44 theron joined #gluster
15:48 calum_ joined #gluster
16:03 olisch joined #gluster
16:06 lpabon joined #gluster
16:08 benjamin__ joined #gluster
16:13 daMaestro joined #gluster
16:14 dberry joined #gluster
16:14 dberry joined #gluster
16:18 jag3773 joined #gluster
16:20 rotbeard joined #gluster
16:21 sroy_ joined #gluster
16:22 olisch hello
16:22 glusterbot olisch: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
16:24 zaitcev joined #gluster
16:24 olisch i am running glusterfs 3.4.2 on centos 6 and have a problem with adding new bricks to a volume. the volume currently consists of 4 bricks, 2 further bricks should be added
16:25 olisch a peer probe for the two new bricks isn't a problem, but the add-brick runs into an error: gluster volume add-brick vzbackups bvps10-111-5-12.vzppXXXX.XXXX:/vsbackup
16:25 olisch volume add-brick: failed:
16:27 olisch on the brick which should be added, the glusterd log says: 0-management: Stage failed on operation 'Volume Add brick', Status : -1
16:27 rjoseph joined #gluster
16:27 olisch i also enabled the debug log, but i am not able to figure out the reason
16:29 olisch does anyone have a hint on how to continue debugging this issue?
16:29 semiosis olisch: pastie.org the glusterd log including that message
16:30 semiosis olisch: also check to make sure all servers are online by running 'gluster peer status' on *every* server
16:31 EWDurbin left #gluster
16:31 olisch peer status is fine on every server
16:32 olisch i gonna paste the log, one moment
16:37 olisch here is the glusterd debug log from the brick which should be added: http://pastie.org/8679132
16:37 glusterbot Title: #8679132 - Pastie (at pastie.org)
16:40 dbruhn olisch, do you have SELinux running?
16:41 olisch no, its disabled according to getenforce
16:41 dbruhn also this looks like you aren't able to resolve host names between all of the servers
16:41 dbruhn [2014-01-29 16:24:58.773156] D [glusterd-utils.c:5029:glusterd_friend_find_by_hostname] 0-management: Unable to find friend: bvps10-111-5-12.vzppXXXX.XXXX
16:41 olisch thats the node itself
16:41 olisch which should be added
16:42 baoboa joined #gluster
16:42 dbruhn which log is this from?
16:43 olisch from the brick that should be added to the volume
16:43 semiosis olisch: try adding a hostname alias to 127.0.0.1 for each server on itself
16:43 semiosis also check that all servers know each other by their correct ,,(hostnames)
16:43 glusterbot Hostnames can be used instead of IPs for server (peer) addresses. To update an existing peer's address from IP to hostname, just probe it by name from any other peer. When creating a new pool, probe all other servers by name from the first, then probe the first by name from just one of the others.
16:44 semiosis which would have been evident in the gluster peer status you checked
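
A sketch of the alias semiosis suggests, run on each server for its own peer name (the hostname is kept as the placeholder used above):
    # on bvps10-111-5-12 itself, make its own peer name resolve locally
    echo "127.0.0.1   bvps10-111-5-12.vzppXXXX.XXXX" >> /etc/hosts
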
16:45 olisch1 joined #gluster
16:47 zapotah joined #gluster
16:51 olisch1 hostname alias to 127.0.0.1 added on each server - nothing changed
16:52 vpshastry left #gluster
16:59 daMaestro joined #gluster
16:59 zerick joined #gluster
17:04 sprachgenerator joined #gluster
17:06 mattappe_ joined #gluster
17:08 rjoseph1 joined #gluster
17:11 olisch1 that's curious … i don't get it, everything still looks fine to me, except the error. if someone has any further hints, just drop a line. i'm gonna be offline soon, but i will have a look at the log later
17:12 olisch1 thx for your support semiosis
17:12 semiosis yw sorry couldn't help more but real busy
17:12 dbruhn olisch1, are you sure your versions are all matching?
17:12 olisch1 yes, all versions the same
17:13 olisch1 all running 3.4.2
17:14 dbruhn what does your "gluster volume status" produce on the existing servers?
17:15 olisch1 There are no active volume tasks
17:15 olisch1 or do you want it more detailed?
17:17 dbruhn There should be a table that gets spit out showing the status of each brick that makes up the volume
17:17 leochill joined #gluster
17:17 mattappe_ joined #gluster
17:18 olisch1 yes, every brick status in there is fine
17:18 mattapperson joined #gluster
17:20 dbruhn ok, the log you posted earlier, which log file is it from?
17:20 KyleG joined #gluster
17:20 KyleG joined #gluster
17:20 dbruhn There are several logs on each server
17:20 olisch1 its the etc-glusterfs-glusterd.vol.log from the brick which should be added to the volume
17:20 dbruhn does anything exist in the cli log?
17:21 keytab joined #gluster
17:21 olisch1 no, not when trying to add the brick
17:22 dbruhn is there anything meaningful in the glusterhsd.log or the mnt log?
17:24 olisch1 glusterhsd.log? never seen such logfile
17:24 dbruhn on any of your servers>
17:24 dbruhn ?
17:24 olisch1 yes on any
17:25 zapotah joined #gluster
17:25 olisch1 or was the hsd a typo of yours?
17:26 zapotah joined #gluster
17:26 olisch1 and you meant glusterfsd?
17:27 dbruhn what is the output of "ls /var/log/glusterfs"
17:28 olisch1 bricks  cli.log  etc-glusterfs-glusterd.vol.log  etc-glusterfs-glusterd.vol.log-20140119  etc-glusterfs-glusterd.vol.log-20140126  nfs.log
17:29 sks joined #gluster
17:29 dbruhn ok, any errors in etc-glusterfs-glusterd.vol.log
17:30 olisch1 yes … every 3 seconds "0-management: connection attempt failed (Connection refused)" due to the fact that nfs is disabled
17:30 olisch1 seems to be a known bug which should have been fixed with 3.4.1, but isn't
17:31 olisch1 so i stripped it from my logfile
17:31 olisch1 nevertheless when enabling nfs for the volume, those messages are gone, but the problem remains ;)
17:32 olisch1 but no more error except the one i pasted
17:40 dbruhn have you tried running the command more than once?
17:40 olisch1 yes
17:43 olisch1 i've got to go now
17:43 olisch1 thx for your help dbruhn
17:44 thefiguras joined #gluster
17:52 failshell joined #gluster
17:52 Mo__ joined #gluster
17:53 davinder joined #gluster
18:01 hybrid512 joined #gluster
18:01 theron joined #gluster
18:19 kaptk2 joined #gluster
18:30 zapotah joined #gluster
18:32 diegows joined #gluster
18:33 tru_tru joined #gluster
18:39 semiosis debs of glusterfs 3.5.0beta2 for debian & ubuntu published last night
18:42 thefiguras joined #gluster
18:44 kaptk2 joined #gluster
18:48 sroy_ joined #gluster
18:54 Jayunit100 joined #gluster
18:55 mattappe_ joined #gluster
18:56 Jayunit100 ping johnmark ~ i want to setup a hack day for glusterfs-hadoop ci on ec2 ~   figure some others might want to get involved who are interested in glusterfs-puppet / EC2 deployment / glusterfs-hadoop ……..  also, it will force me to finally setup a proper upstream release process.
18:59 rjoseph joined #gluster
19:00 glusterbot New news from newglusterbugs: [Bug 1057292] option rpc-auth-allow-insecure should default to "on" <https://bugzilla.redhat.com/show_bug.cgi?id=1057292> || [Bug 1057295] glusterfs doesn't include firewalld rules <https://bugzilla.redhat.com/show_bug.cgi?id=1057295>
19:02 daMaestro joined #gluster
19:12 ndk joined #gluster
19:13 gork4life joined #gluster
19:14 gork4life How do you configure failover with glusterfs
19:14 dbruhn gork4life, in what context?
19:14 dbruhn are you talking from the client side?
19:15 thefiguras joined #gluster
19:16 gork4life dbruhn: Well I think that I could explain better
19:18 gork4life dbruhn: I have two servers replicating data, and I've mounted one of them for storage via vSphere. The problem is I want it to stay connected even if I pull the plug on one of the servers
19:18 dbruhn using ifs?
19:18 dbruhn nfs
19:18 gork4life yes
19:19 gork4life dbruhn: can heartbeat and pacemaker do this or is that just for vm
19:19 KyleG left #gluster
19:20 dbruhn gork4life, I am honestly not sure. I see a lot of guys talking about using rrdns, but that doesn't seem like it would resolve the issue for you
19:20 semiosis the usual advice is to use a vip for nfs ha
19:21 semiosis fuse clients do ha automatically of course
19:21 thefiguras joined #gluster
19:21 gork4life dbruhn: or could I create a virtual ip to point to both servers, so if one fails the other server takes over?
19:21 dbruhn This article might help you, http://download.gluster.org/pub/gluster/glusterfs/doc/HA%20and%20Load%20Balancing%20for%20NFS%20and%20SMB.pdf
19:22 gork4life dbruhn: Ok I'll check it out and post back in a few minutes thanks
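
One way to carry the floating address semiosis mentions is a pacemaker IPaddr2 resource; a minimal sketch, with the address invented for illustration:
    pcs resource create gluster_nfs_vip ocf:heartbeat:IPaddr2 \
        ip=192.168.1.100 cidr_netmask=24 op monitor interval=10s
    # vSphere then mounts NFS from 192.168.1.100; if the node holding the VIP dies,
    # pacemaker moves the address to the surviving gluster server
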
19:24 mattapperson joined #gluster
19:26 iksik_ joined #gluster
19:26 diegows joined #gluster
19:29 MacWinner joined #gluster
19:29 SFLimey joined #gluster
19:39 leochill joined #gluster
19:42 failshel_ joined #gluster
19:42 mattapperson joined #gluster
19:46 Elico joined #gluster
19:47 Elico How would I run glusterfs on CentOS 6.5?
19:47 kkeithley ,,(yumrepos)
19:47 glusterbot I do not know about 'yumrepos', but I do know about these similar topics: 'yum repo'
19:47 kkeithley ,,(yum repo)
19:47 glusterbot The official community glusterfs packages for RHEL (including CentOS, SL, etc) are available at http://download.gluster.org/pub/gluster/glusterfs/. The official community glusterfs packages for Fedora 18 and later are in the Fedora yum updates (or updates-testing) repository.
19:49 Elico So it's not required to install any new kernel modules?
19:50 kkeithley no. never has been in the three years I've been involved with glusterfs
19:51 Elico kkeithley: thanks!! I wasn't sure if the FUSE-based filesystem side of glusterfs required one.
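
A rough getting-started sketch for CentOS 6.x using the community repo linked above (the exact repo file URL below is an assumption, check download.gluster.org for the current layout):
    wget -O /etc/yum.repos.d/glusterfs-epel.repo \
        http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/glusterfs-epel.repo
    yum install glusterfs-server glusterfs-fuse
    service glusterd start
    # clients only need glusterfs-fuse; FUSE itself is already in the EL6 kernel, no extra module needed
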
19:51 Technicool joined #gluster
20:01 mattapperson joined #gluster
20:02 bmrtin joined #gluster
20:04 T0aD ,,(T0aD)
20:04 glusterbot I do not know about 'T0aD', but I do know about these similar topics: 'THP'
20:04 T0aD ,,(quota)
20:04 glusterbot I do not know about 'quota', but I do know about these similar topics: 'quorum'
20:16 semiosis johnmark: movement on that Ubuntu MIR -- https://bugs.launchpad.net/ubuntu/+source/glusterfs/+bug/1274247
20:16 glusterbot Title: Bug #1274247 “[MIR] Glusterfs” : Bugs : “glusterfs” package : Ubuntu (at bugs.launchpad.net)
20:18 kaptk2 joined #gluster
20:27 JoeJulian @ports
20:27 glusterbot JoeJulian: glusterd's management port is 24007/tcp and 24008/tcp if you use rdma. Bricks (glusterfsd) use 24009 & up for <3.4 and 49152 & up for 3.4. (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111 and 2049 since 3.4.
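
An iptables sketch matching the 3.4 port list glusterbot gives (the brick range below assumes at most eight bricks per server, widen as needed):
    iptables -A INPUT -p tcp --dport 24007:24008 -j ACCEPT              # glusterd (+ rdma)
    iptables -A INPUT -p tcp --dport 49152:49159 -j ACCEPT              # bricks, 3.4+
    iptables -A INPUT -p tcp --dport 38465:38468 -j ACCEPT              # gluster NFS + NLM
    iptables -A INPUT -p tcp -m multiport --dports 111,2049 -j ACCEPT   # rpcbind, NFS
    iptables -A INPUT -p udp --dport 111 -j ACCEPT                      # rpcbind/portmap over UDP
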
20:31 semiosis purpleidea: ping
20:35 jbrooks left #gluster
20:38 purpleidea semiosis: pong
20:38 semiosis hey buddy
20:38 purpleidea hey
20:38 semiosis i found this today... http://nicoulaj.github.io/vagrant-maven-plugin/
20:38 glusterbot Title: Vagrant Maven plugin - (at nicoulaj.github.io)
20:39 semiosis and can't wait to hook it up with your vagrant/puppet/gluster stuff
20:39 semiosis envisioning having maven launch gluster in vagrant & run the java test suite for my little side project... all with one command, mvn test.
20:39 purpleidea semiosis: cool... i'm just heading out to the train, but i'll have a look at it when i'm on board...
20:40 semiosis ok
20:40 * purpleidea afk and hoping for good train wifi
20:53 rwheeler joined #gluster
20:53 failshell joined #gluster
20:58 Gluster joined #gluster
21:10 mattappe_ joined #gluster
21:12 sarkis joined #gluster
21:16 kkeithley JoeJulian: +1
21:17 JoeJulian Heh, thanks.
21:18 SFLimey joined #gluster
21:18 JoeJulian kkeithley: That second one was rather hostile to be coming from inside redhat, imho.
21:18 T0aD JoeJulian: -2
21:18 T0aD now you're -1 !
21:18 JoeJulian @kick T0aD
21:18 glusterbot JoeJulian: Error: You don't have the #gluster,op capability. If you think that you should have this capability, be sure that you are identified before trying again. The 'whoami' command can tell you if you're identified.
21:18 T0aD ha !
21:19 kkeithley second one?
21:19 JoeJulian rpc-auth-allow-insecure... "it's not 1980"
21:20 kkeithley ah, yes, that one.
21:23 kkeithley CLOSE/NOTABUG probably for that
21:42 theron_ joined #gluster
21:46 zapotah joined #gluster
21:46 zapotah joined #gluster
21:53 rotbeard joined #gluster
21:56 jbrooks joined #gluster
21:56 SFLimey joined #gluster
22:07 fidevo joined #gluster
22:07 japuzzo_ joined #gluster
22:11 purpleidea semiosis: back and on sketch internet, so if i don't respond for a while, apologies...
22:12 purpleidea s/sketch/sketchy/
22:12 glusterbot What purpleidea meant to say was: semiosis: back and on sketchy internet, so if i don't respond for a while, apologies...
22:12 semiosis hey sure no rush
22:12 semiosis i'll be around most of the day & night probably
22:13 purpleidea semiosis: so i looked at the maven thing, but i might need a bit of background, and how i can help
22:14 semiosis maven is hard to explain, hard to learn, and hard to understand -- but once you get past all that, it's hard to live without
22:14 semiosis software comprehension & project management
22:14 purpleidea :P how about what do you need patched where?
22:14 semiosis no idea yet
22:14 purpleidea did you see the patches i landed for you a few days ago?
22:14 semiosis istr you saying something about that but i didnt see the patches
22:15 semiosis will look tonight
22:15 purpleidea (i think i might have sent out a ml mail)
22:15 semiosis going to give the vagrant stuff a spin
22:15 semiosis will be a few hrs before i get to it though
22:15 purpleidea semiosis: http://gluster.org/pipermail/gluster-users/2014-January/038794.html
22:15 glusterbot Title: [Gluster-users] Gluster Volume Properties (at gluster.org)
22:16 semiosis great stuff, thanks!
22:16 purpleidea yw
22:17 purpleidea so see how that goes, and when you figure out what needs hacking on, ping me back
22:17 purpleidea i made some screencasts too
22:17 semiosis will do
22:18 purpleidea ,,(next)
22:18 glusterbot Another satisfied customer... NEXT!
22:18 purpleidea :P
22:18 semiosis ,,(thanks)
22:18 glusterbot you're welcome
22:20 social heh I create volume, mount it with valgrind --tool=memcheck --leak-check=yes /usr/local/sbin/glusterfs --volfile-id=potwora --volfile-server=potwora /media and run mkdir /media/kachna, umount /media
22:20 social nothing
22:20 social but when I do the same but I run mkdir /media/{kachna,kacica} bam it leaks
22:23 semiosis purpleidea: to make things even more difficult for myself, going to try using virtualbox instead of libvirt! :O
22:24 purpleidea semiosis: that's okay! let me know of what you learn. it was probably more work for _me_ to use libvirt :P
22:24 purpleidea now it's more work for you because it's not that :P sorry
22:24 semiosis i saw some performance stats recently suggesting virtualbox was significantly slower than kvm, but it's just so easy to use
22:26 purpleidea interesting... tbh, when i started, i didn't realize virtualbox was opensource... i thought it was proprietary :P however, i'm glad i used libvirt because i know those tools, and they have nice features that you can benefit from (eg: virt-manager, virsh, etc...)
22:27 purpleidea also, when moving from a test deploy to realy vm's, which will also use libvirt, i know to expect similar performance, etc...
22:27 purpleidea s/realy/real production/
22:27 glusterbot What purpleidea meant to say was: also, when moving from a test deploy to real production vm's, which will also use libvirt, i know to expect similar performance, etc...
22:31 glusterbot New news from newglusterbugs: [Bug 1045309] "volfile-max-fetch-attempts" was not deprecated correctl.. <https://bugzilla.redhat.com/show_bug.cgi?id=1045309>
22:35 andreask joined #gluster
22:38 theron joined #gluster
22:39 ira joined #gluster
22:39 andreask1 joined #gluster
22:43 sprachgenerator_ joined #gluster
22:46 sarkis joined #gluster
22:49 sprachgenerator joined #gluster
