
IRC log for #gluster, 2013-07-16


All times shown according to UTC.

Time Nick Message
00:19 joelwallis joined #gluster
00:31 bradfirj joined #gluster
00:37 _pol joined #gluster
00:37 emarks joined #gluster
00:47 yinyin joined #gluster
00:50 badone joined #gluster
00:55 kedmison joined #gluster
00:59 FinnTux how about debian repository and 3.4? will it be updated anytime soon?
01:03 bala joined #gluster
01:05 harish joined #gluster
01:06 failshell joined #gluster
01:18 failshel_ joined #gluster
01:36 _pol joined #gluster
01:42 raghug joined #gluster
01:43 kevein joined #gluster
01:48 sprachgenerator joined #gluster
01:49 harish joined #gluster
02:01 yinyin joined #gluster
02:08 rcoup joined #gluster
02:11 fcami joined #gluster
02:12 raghug joined #gluster
02:27 glusterbot New news from newglusterbugs: [Bug 958781] KVM guest I/O errors with xfs backed gluster volumes <http://goo.gl/4Goa9>
02:28 jebba joined #gluster
02:31 nueces joined #gluster
02:49 kshlm joined #gluster
02:52 vshankar joined #gluster
02:57 bulde joined #gluster
03:24 bulde joined #gluster
03:36 bharata joined #gluster
03:40 hagarth joined #gluster
03:43 yinyin joined #gluster
03:48 raghug joined #gluster
04:01 puebele1 joined #gluster
04:14 yinyin joined #gluster
04:20 sgowda joined #gluster
04:33 hagarth joined #gluster
04:44 satheesh joined #gluster
04:54 lalatenduM joined #gluster
04:56 mohankumar joined #gluster
04:59 nueces joined #gluster
05:03 yinyin joined #gluster
05:03 vpshastry joined #gluster
05:05 vpshastry left #gluster
05:09 raghu joined #gluster
05:13 rjoseph joined #gluster
05:14 jim__ joined #gluster
05:14 bala joined #gluster
05:19 jim__ Hi folks. Is it possible to set up a cluster with multiple networks so that all of the replication happens on one net and all the client access happens on the other? E.g. ETH0 - 192.168.0.x/24 for client access, ETH1 - 192.168.1.x/24 for Gluster replication chatter.
05:20 jim__ I set up a 2 node cluster to test, and I used the 192.168.1.x addresses to set up the gluster peers. But when I try to "mount -t glusterfs 192.168.0.100/test /mnt", it fails, because the Gluster server only knows about the ETH1 addresses and the client cannot talk directly to that net.
05:25 jim__ I should point out that the "mount" command on either one of the actual gluster servers works, because as a client, mount.glusterfs knows how to reach the "192.168.1" net. But, when the client is on a separate machine on the 192.168.0 net, it is being told to contact the bricks located on the 192.168.1 net, and it fails.
05:29 CheRi joined #gluster
05:30 samppah jim__: clients talk directly to servers and writing happens synchronously, ie. client writes file to all servers at same time
05:40 jim__ samppah, so you're saying that there is no real benefit to using multiple networks to improve performance
05:41 samppah jim__: it would be good to have separate network for storage traffic only
05:42 samppah you can also use NFS to access gluster volumes, in that case it would client -> glusterserver1 -> glusterserverX...Y
05:43 shylesh joined #gluster
05:44 jim__ samppah, you're saying that a gluster native client writing a file to a 2 node cluster has to independently write to server A and B and wait for both to complete
05:44 samppah jim__: yes
05:45 jim__ i was under the mis-impression that it would write to just 1 server and that would handle syncing separately
05:45 jim__ what about for read?
05:46 bulde joined #gluster
05:49 samppah jim__: it reads from one servers.. however it checks file coherency from all bricks that are hosting the file
05:49 samppah sorry for poor english.. i'm feeling bit sleepy this morning :)
05:49 jim__ samppah, you've been a big help! the english is just fine!
05:51 jim__ so, i think i need to reconfigure my cluster - i.e. to use the "other" network interface.
05:51 satheesh joined #gluster
05:52 jim__ would it work to say "gluster peer detach serverA-eth1", and then "gluster peer probe serverA-eth0" ?
05:52 jim__ and then same for ServerB?
05:52 samppah do you have existing volumes?
05:52 jim__ yes
05:53 shireesh joined #gluster
05:53 jim__ and there is data already on the bricks
05:54 rcoup joined #gluster
05:58 jim__ samppah, basically i just need to change the IP address associated with each brick in the cluster.
05:59 samppah jim__: did you use ip or hostname to create volumes and probe servers?
05:59 jim__ IP
06:01 samppah hmm.. i'm bit unsure but i think you need to recreate volumes if you want to change ip
06:01 samppah or use replace-brick
06:02 jim__ ok, i'll play around with it. thanks
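On the replace-brick route samppah mentions above, the command shape in 3.3/3.4 is roughly the following. This is only a hedged sketch of the syntax being discussed, not a verified recipe: whether glusterd accepts a second address for a peer it already knows varies by version, and the volume name, addresses and brick paths here are hypothetical.

    # swap one brick at a time onto the client-facing address; pointing the new
    # brick at a fresh path avoids a "brick already in use" complaint
    gluster volume replace-brick test 192.168.1.101:/export/brick1 \
        192.168.0.101:/export/brick1-new commit force

    # then let self-heal repopulate the replacement brick
    gluster volume heal test full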
06:03 hagarth joined #gluster
06:08 psharma joined #gluster
06:25 jtux joined #gluster
06:26 yinyin joined #gluster
06:28 saurabh joined #gluster
06:32 ricky-ticky joined #gluster
06:32 Recruiter joined #gluster
06:33 rgustafs joined #gluster
06:35 rastar joined #gluster
06:43 92AAAD10C joined #gluster
06:45 hagarth joined #gluster
06:47 kevein joined #gluster
06:58 glusterbot New news from newglusterbugs: [Bug 952029] Allow an auxiliary mount which lets users access files using only gfids <http://goo.gl/x5z1R>
07:00 ctria joined #gluster
07:07 piotrektt joined #gluster
07:08 jtux joined #gluster
07:10 ramkrsna joined #gluster
07:10 ramkrsna joined #gluster
07:14 hybrid512 joined #gluster
07:15 ngoswami joined #gluster
07:15 mooperd joined #gluster
07:15 dobber joined #gluster
07:18 andreask joined #gluster
07:20 piotrektt joined #gluster
07:24 dobber joined #gluster
07:28 shireesh joined #gluster
07:37 pkoro joined #gluster
07:43 abyss^ joined #gluster
07:50 tjikkun_work joined #gluster
07:55 mooperd joined #gluster
08:01 dobber__ joined #gluster
08:02 tru_tru joined #gluster
08:07 ccha where can I find changelog about 3.3.2 ?
08:18 satheesh joined #gluster
08:20 ultrabizweb joined #gluster
08:37 atrius joined #gluster
08:40 vpshastry joined #gluster
08:43 ultrabizweb joined #gluster
08:49 vimal joined #gluster
08:54 harish joined #gluster
08:59 satheesh joined #gluster
09:09 ngoswami joined #gluster
09:10 tru_tru joined #gluster
09:17 spider_fingers joined #gluster
09:19 T0aD hi guys, is there no way to have gluster automatically sync existing files into a new brick added in a replica scenario ?
09:19 T0aD i tried heal, heal full, didnt work
09:19 bradfirj Balance command maybe?
09:19 T0aD like rebalance ?
09:19 bradfirj yeah
09:19 T0aD i thought i tried, ill give it another shot
09:19 bradfirj disclaimer: I have no idea what I'm doing don't do anything on a prod host
09:20 bradfirj :)
09:20 T0aD yeah dont worry
09:20 T0aD im just testing things out on a couple of VMs
09:22 NeatBasis_ joined #gluster
09:23 T0aD volume rebalance: users: failed: Volume users is not a distribute volume or contains only 1 brick.
09:23 T0aD Not performing rebalance
09:23 T0aD not sure it works with replication
09:24 bradfirj hmm
09:24 bradfirj to the manaul
09:24 bradfirj manual*
09:24 T0aD yeah im reading it
09:27 bradfirj T0aD: silly question, what did you create the volume with its replica setting as?
09:27 T0aD create with no setting, its set as a distribute
09:28 T0aD then add-brick and make it a replica 2
09:28 bradfirj ah
09:28 T0aD the software seems to understand that fine
09:30 T0aD i can redo all the process from the start if you wish
09:32 bradfirj My understanding was you had to specify the replica sets at volume creation, then you add bricks in multiples of n
09:32 bradfirj But I might be wrong
09:32 T0aD i will give it a shot
09:32 tg2 joined #gluster
09:33 bradfirj so create volname replica n transport tcp brick brick brick brick
09:33 bradfirj where n is 2 or 4
09:33 bradfirj 4 would be a 4 node "RAID 1"
09:33 T0aD well id like to try just 1 brick then adding one
09:33 bradfirj 2 would be a 4 node "RAID 10"
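To make bradfirj's description concrete, a hedged example of the create syntax he is sketching (volume name, hostnames and brick paths are made up; replica 2 over four bricks gives the "RAID 10"-like layout, replica 4 the "RAID 1"-like one):

    # replica 2 across 4 bricks: srv1/srv2 mirror each other, srv3/srv4 mirror
    # each other, and files are distributed across the two pairs
    gluster volume create volname replica 2 transport tcp \
        srv1:/export/brick srv2:/export/brick \
        srv3:/export/brick srv4:/export/brick
    gluster volume start volname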
09:33 bradfirj ah
09:34 T0aD anyway i dont think thats a problem in that case
09:34 T0aD and it shouldnt be
09:34 T0aD 1 brick alone cannot be stripped, distributed, nor replicated
09:34 T0aD and the software seems to respond fine to that
09:35 bradfirj http://www.gluster.org/pipermail/gluster-users/2011-October/031889.html
09:35 glusterbot <http://goo.gl/Y1m6J> (at www.gluster.org)
09:35 T0aD root@gluster2:/home/toad# attr -l /home/users/
09:35 T0aD Attribute "afr.users-client-0" has a 12 byte value for /home/users/
09:35 T0aD Attribute "afr.users-client-1" has a 12 byte value for /home/users/
09:35 T0aD maybe thats part of the issue
09:35 bradfirj Not sure how accurate that ML posting is
09:35 bradfirj Considering the age, and the linked bug report 404s
09:36 mohankumar joined #gluster
09:37 T0aD alright lets do it again
09:37 mooperd joined #gluster
09:38 bradfirj According to the interwebs, you can't change the replica factor of an existing volume
09:38 bradfirj But again, may be outdated information
09:38 yinyin joined #gluster
09:42 bradfirj T0aD: what version of gluster?
09:43 bradfirj Supposedly, expanding a replica set and changing the replication factor is supported from 3.3 onwards
09:43 T0aD 3.4.0
09:44 ndevos T0aD: you want to add a brick and change the 1-brick volume to a replicated one?
09:44 bradfirj More or less he wants to go from replica nothing to replica 2
09:44 bradfirj Number of bricks is irrelevant
09:45 T0aD oh its working perfectly fine
09:45 T0aD OHLALA
09:45 bradfirj :3
09:45 T0aD the issue was i didnt remove all the extended attributes:
09:45 T0aD -- /home/users (trusted.afr.users-client-0)
09:45 T0aD -- /home/users (trusted.afr.users-client-1)
09:45 ndevos well, "gluster volume add-brick replica 2 server:/path/to/new/brick" would do that
09:45 bradfirj That isn't in the manual :/
09:45 bradfirj but thank you
09:46 T0aD followed by a gluster volume heal users full as said in a bug report somewhere
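Piecing together ndevos's and T0aD's messages, the sequence for growing a single-brick volume into a replica 2 volume appears to be roughly the following. Hedged sketch: the volume name 'users' comes from the log, the server name and brick path are hypothetical.

    # add a second brick and raise the replica count in the same command
    gluster volume add-brick users replica 2 gluster2:/home/users

    # then ask the self-heal daemon to copy the existing files onto the new brick
    gluster volume heal users full
    gluster volume heal users info    # watch progress / failures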
09:46 ndevos there should be no need to remove the xattrs, but if trusted.afr.* was set, afr (replicate) was already used?
09:46 bradfirj Ah hang on, I've literally read straight past the line where it is
09:47 T0aD ndevos, lets say 'tried'
09:47 T0aD and yes there is a need to remove extended attributes when you delete / recreate volumes
09:48 ndevos yes, that is correct, and glusterbot knows about it too if you paste the error message in here
09:49 T0aD http://www.bpaste.net/show/d71Oxy8ots8RuILVQ6ge/
09:49 glusterbot <http://goo.gl/28kFP> (at www.bpaste.net)
09:49 T0aD ndevos, its ok i made a sexy script yesterday
09:49 T0aD show him glusterbot
09:49 T0aD https://gist.github.com/T0aD/6004343
09:49 glusterbot Title: Sexy script to remove GlusterFS extended attributes (at gist.github.com)
09:51 ndevos T0aD: I think such a script should have been installed if you used the rpms, in the source it's somewhere under extras/
09:51 bradfirj I have to say, the performance might not be in the same ballpark, but Gluster is substantially easier to set up than Lustre :P
09:51 T0aD extras/clear_xattrs.sh
09:51 vshankar joined #gluster
09:51 T0aD i just read it, its incomplete
09:51 vpshastry1 joined #gluster
09:51 T0aD yeah lustre is a pain
09:51 T0aD 1 week to set it up :D
09:51 bradfirj All my stuff is Debian too
09:52 bradfirj Whereas lustre kernel patches are available for rhel or sles and screw anyone else :P
09:52 deepakcs joined #gluster
09:53 ndevos oh, please file a bug if that script is incomplete
09:53 glusterbot http://goo.gl/UUuCq
09:55 T0aD well its not removing trusted.afr :P
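For reference, the manual steps that T0aD's gist and extras/clear_xattrs.sh are automating, i.e. making a brick directory reusable after its volume was deleted, look roughly like this. A hedged sketch to be run on each brick; the path comes from the log, everything else is generic.

    BRICK=/home/users
    # volume identity markers left behind by the old volume
    setfattr -x trusted.glusterfs.volume-id $BRICK
    setfattr -x trusted.gfid $BRICK
    # per-replica change-tracking xattrs (the trusted.afr.* ones noted above)
    for key in $(getfattr -m 'trusted.afr' --absolute-names $BRICK 2>/dev/null | grep '^trusted'); do
        setfattr -x $key $BRICK
    done
    # the hidden gfid index the old volume kept inside the brick
    rm -rf $BRICK/.glusterfs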
09:56 T0aD now lets wonder why subdirectories are not replicated.
09:58 bradfirj I had some behaviour I wasn't sure about yesterday, where subdirectories created on a server node in a brick are completely ignored
09:58 T0aD here heal-failed are reporting them
09:58 bradfirj and only if you create them by mounting the glusterfs somewhere then working on the mount would it work
09:59 bradfirj That may be intended behaviour though
10:00 T0aD ah funny
10:00 T0aD if i mount the directory
10:01 T0aD its working, subdirectories are replicated
10:01 bradfirj yeah, that
10:02 bradfirj I don't know enough about the internals to understand what's going on, just that working directly on a brick doesn't seem to play nice
10:02 kkeithley1 joined #gluster
10:05 T0aD yeah i dont like that either
10:06 T0aD but its probably a performance choice
10:06 vshankar joined #gluster
10:06 T0aD like not sync every file right away
10:06 T0aD but more on a use basis
10:07 T0aD but you should be aware of that so not to backup the wrong brick
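To restate the pattern T0aD and bradfirj are converging on: data should only be written through a glusterfs (or NFS) mount of the volume, never directly into a brick directory, because files dropped straight into a brick bypass the replication translator. A hedged illustration, volume name and mount point hypothetical:

    # wrong: written into the raw brick, the other replica never hears about it
    mkdir /home/users/newsite

    # right: written through the client stack, so it is replicated immediately
    mount -t glusterfs localhost:/users /mnt/users
    mkdir /mnt/users/newsite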
10:13 ngoswami joined #gluster
10:23 sgowda joined #gluster
10:25 vpshastry1 joined #gluster
10:31 jiku joined #gluster
10:33 edward1 joined #gluster
10:35 duerF joined #gluster
10:38 vshankar joined #gluster
10:59 _ilbot joined #gluster
10:59 Topic for #gluster is now  Gluster Community - http://gluster.org | Q&A - http://community.gluster.org/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - http://irclog.perlgeek.de/gluster/
11:01 rcheleguini joined #gluster
11:04 failshell joined #gluster
11:11 dobber_ joined #gluster
11:13 ccha what is the number for .glusterfs/indices/xattrop/xattrop-b4e74567-aa06-41c5-9ed9-ddebb720233e ?
11:16 CheRi joined #gluster
11:23 andreask joined #gluster
11:28 ngoswami joined #gluster
11:50 CheRi joined #gluster
11:57 sgowda joined #gluster
11:57 hagarth joined #gluster
11:59 lalatenduM joined #gluster
12:00 sac`away joined #gluster
12:00 yinyin joined #gluster
12:02 rcheleguini joined #gluster
12:41 mgebbe_ joined #gluster
12:53 deepakcs joined #gluster
12:58 rastar joined #gluster
13:00 mohankumar joined #gluster
13:05 bennyturns joined #gluster
13:17 ultrabizweb joined #gluster
13:20 pkoro joined #gluster
13:29 jdarcy joined #gluster
13:32 sgowda joined #gluster
13:41 chirino joined #gluster
13:44 vpshastry joined #gluster
13:51 ultrabizweb joined #gluster
13:53 hagarth joined #gluster
13:54 tziOm joined #gluster
13:59 clag_ joined #gluster
13:59 raghug joined #gluster
14:03 ultrabizweb joined #gluster
14:03 aliguori joined #gluster
14:07 __Bryan__ joined #gluster
14:12 bugs_ joined #gluster
14:13 premera joined #gluster
14:15 JoeJulian T0aD, bradfirj That is expected behavior. bricks are storage for the GlusterFS filesystem, not for any other use. You access a volume through a client.
14:16 bradfirj As I suspected, it is slightly unintuitive though at first
14:16 T0aD never!
14:16 T0aD hi JoeJulian , nice blog by the way
14:16 JoeJulian Thanks
14:16 T0aD very cool documentation around gluster
14:16 T0aD i name you my official gluster advisor from this day.
14:16 JoeJulian :)
14:17 dewey joined #gluster
14:17 kkeithley_ JoeJulian: was there any further info about the glusterd.service/glusterfsd.service thing yesterday?
14:19 JoeJulian ccha: https://github.com/gluster/glusterfs/commits/v3.3.2 is the commit log
14:19 glusterbot <http://goo.gl/Y49eN> (at github.com)
14:20 JoeJulian kkeithley_: yeah. :/ Looks like the netfs mount and glusterd run simultaneously.
14:22 failshel_ joined #gluster
14:22 T0aD stupid question but.. there is no way to stop all gluster daemons at once ?
14:22 semiosis halt
14:23 semiosis ;)
14:23 T0aD *ding* *ding* *ding* we have a WIIIINNNER
14:23 andreask hmm ... with upstart?
14:23 kkeithley_ Big Red Switch
14:23 T0aD /etc/init.d/gluster stop doesnt do its job this lazy punk
14:23 semiosis that's not its job
14:24 semiosis its job is to stop glusterd
14:24 T0aD i was waiting for that one
14:24 T0aD why would you want to stop glusterd without stopping the other processes ?
14:25 semiosis well imho you wouldn't really want to stop it, but you may want to restart it, without interrupting active clients
14:25 T0aD oh im just trying some scenarios here
14:25 semiosis you can kill processes to quit them
14:26 T0aD yeah but my question was that: why there is no way to do that in the package ?
14:26 semiosis because no one has contributed such a patch
14:26 JoeJulian kkeithley_: They need an "AfterIfEnabled=" in systemd....
14:26 T0aD sounds like a job for me.
14:26 semiosis ...or such a patch was rejected, tbh i dont know the history
14:26 JoeJulian T0aD: If you ran rpms there would already be a way.
14:27 T0aD JoeJulian, im using ubuntu
14:27 JoeJulian my point exactly
14:27 T0aD well and i dont install that kind of stuff through packages anyway
14:27 T0aD use the source luke
14:27 kkeithley_ I've got a bad feeling about this.
14:27 JoeJulian There's a glusterfsd init script. Use that to stop
14:27 ccha semiosis: will you make packages 3.3.2 for lucid like for 3.3.1 ?
14:28 failshel_ the 90s called, they want their source installs back ;p
14:28 semiosis ccha: i'd rather not
14:28 JoeJulian !
14:28 ccha why not ?
14:28 T0aD failshell, haha , well its best to be in control with critical daemons i think
14:28 semiosis ccha: lucid is ooooold
14:28 vpshastry joined #gluster
14:28 T0aD you never know when you want to patch them
14:28 failshell T0aD: i dunno about that. your QA is probably not as extensive as RedHat's
14:29 JoeJulian Meh, I might give that to T0aD. ;)
14:29 T0aD yeah sure, and redhat is using all my private patches too
14:29 lpabon joined #gluster
14:29 JoeJulian private patches???
14:29 T0aD after im always free to package it myself once im done.. which i do
14:30 samppah @splitbrain
14:30 glusterbot samppah: I do not know about 'splitbrain', but I do know about these similar topics: 'split-brain'
14:30 samppah @split-brain
14:30 glusterbot samppah: To heal split-brain in 3.3, see http://goo.gl/FPFUX .
14:30 JoeJulian @alias split-brain splitbrain
14:30 glusterbot JoeJulian: The operation succeeded.
14:30 T0aD JoeJulian, patches to core-utils, php, apache, mod_fcgi, suexec, my own module to quote a few
14:30 JoeJulian And these are not submitted upstream?
14:30 T0aD hell i even remember i patched gluster when it was doing port source auth to use it over stunnel
14:31 T0aD @attributes
14:31 glusterbot T0aD: I do not know about 'attributes', but I do know about these similar topics: 'extended attributes', 'file attributes', 'get the file attributes'
14:31 T0aD @remove attributes
14:31 glusterbot T0aD: Error: The command "remove" is available in the Alias, Herald, Later, MessageParser, RSS, and Topic plugins. Please specify the plugin whose command you wish to call by using its name as a command before "remove".
14:32 T0aD get the file attributes
14:32 T0aD glusterbot, you punk
14:33 ccha what is the meaning of the number in .glusterfs/indices/xattrop/xattrop-b4e74567-aa06-41c5-9ed9-ddebb720233e ?
14:34 T0aD @remove-attributes
14:36 JoeJulian ccha: Are you referring to the uuid? That (probably) corresponds to the gfid. I'm not sure what the indicies/xattrop path is used for.
14:36 kkeithley_ JoeJulian: would a Before=netfs.something in the glusterd.service [Unit] do the job?
14:37 T0aD @learn remove-attributes as https://gist.github.com/T0aD/6004343
14:37 glusterbot T0aD: The operation succeeded.
14:37 JoeJulian It would, if netfs.something existed. It appears to be part of systemd now?
14:43 spider_fingers left #gluster
14:43 ndevos netfs might have been renames to remote-fs with systemd?
14:44 tqrst JoeJulian: did you mean 3.3.2 in your mailing list reply about rdma, or all of 3.3? I remember hearing about rdma working fine in 3.3...
14:46 neofob joined #gluster
14:48 kkeithley_ JoeJulian, ndevos: I'm trying to parse https://bugzilla.redhat.com/show_bug.cgi?id=787314#c9. I don't think remote-fs serves the same purpose as netfs. See if you don't agree
14:48 glusterbot <http://goo.gl/n2Rw4> (at bugzilla.redhat.com)
14:48 glusterbot Bug 787314: unspecified, unspecified, ---, dcbw, MODIFIED , assorted issues with remote filesystem dependency tree
14:51 jag3773 joined #gluster
14:51 X3NQ joined #gluster
14:53 T0aD JoeJulian, im reading some of your articles, quite interesting, but am i missing something here ? i went on http://cotdp.com/2011/07/nginix-on-a-256mb-vm-slice-24000-tps/ (linked on your blog) and see no content
14:53 glusterbot <http://goo.gl/gNQur> (at cotdp.com)
14:57 ramkrsna joined #gluster
14:57 clag_ left #gluster
14:58 ndevos kkeithley_: oh, interesting bz, I think I need to read it an other time...
14:59 kkeithley_ ;-)
15:00 JoeJulian kkeithley_: Ok, it looks like _netdev mounts are configured to mount after network-online.target. Do you think starting glusterd between network.target and network-online.target would work?
15:01 JoeJulian T0aD: Enable javascript
15:01 T0aD how dare you
15:01 kkeithley_ JoeJulian: that's a though
15:01 kkeithley_ thought
15:01 T0aD probably my privoxy
15:02 ccha I have alot these messages :
15:02 ccha [2013-07-16 13:34:21.088142] E [posix.c:224:posix_stat] 0-VOL_REPL1-posix: lstat on /opt/data/b1/.glusterfs/85/31/8531da33-f441-49f4-8f4e-0b1c9c5c3ff7 failed: No such file or directory
15:02 ccha [2013-07-16 13:34:21.088187] I [server3_1-fops.c:1085:server_unlink_cbk] 0-OIH-PRODUCTS_DATA-SOPHIA-server: 12: UNLINK <gfid:7178a446-7a17-4864-a398-9702636adae5>/e1e9cdd2-d015-457e-8dd8-40012036669e (8531da33-f441-49f4-8f4e-0b1c9c5c3ff7) ==> -1 (No such file or directory)
15:13 puebele1 joined #gluster
15:16 zaitcev joined #gluster
15:18 daMaestro joined #gluster
15:19 _pol joined #gluster
15:29 _pol joined #gluster
15:29 T0aD lets have some fun: lets create 10,000 directories and set a quota on each of them
15:33 JoeJulian kkeithley_: Nope, Before=network-online.target didn't work. :(
15:40 T0aD its taking ages.
15:45 bradfirj T0aD: JoeJulian That link (http://cotdp.com/2011/07/nginix-on-a-256mb-vm-slice-24000-tps/) gives me a Javascript Syntax error :(
15:45 glusterbot <http://goo.gl/gNQur> (at cotdp.com)
15:45 T0aD yeah i wanna fill a complain on JoeJulian's blog
15:47 sprachgenerator joined #gluster
15:48 vpshastry joined #gluster
15:48 T0aD it seems i have to dig in to see how to improve that process.
15:48 * T0aD takes a deep breath *ploof*
15:57 JoeJulian I've pinged cotdp about it.
15:58 JoeJulian He's gmt, though, so probably asleep.
15:58 JoeJulian Er, no...
15:59 * JoeJulian shrugs
15:59 * T0aD is gmt and not asleep (and is very offended)
16:00 kkeithley_ JoeJulian: okay, that's a drag
16:02 kkeithley_ Lennart Poettering thinks it should have worked: http://paste.fedoraproject.org/25727/37399056
16:02 glusterbot Title: #25727 Fedora Project Pastebin (at paste.fedoraproject.org)
16:03 JoeJulian huh... http://www.freedesktop.org/software/systemd/man/systemd.mount.html
16:03 glusterbot <http://goo.gl/6nqER> (at www.freedesktop.org)
16:03 JoeJulian Maybe we should just encourage people who need to mount their volumes on the servers to do it this way. Then they could do the After=glusterd.service
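What JoeJulian is suggesting would look something like the following systemd mount unit on a server that mounts its own volume. A hedged sketch only, based on the systemd.mount page linked above; the unit name, volume name and mount point are hypothetical (the unit file name has to match the Where= path):

    # /etc/systemd/system/mnt-myvol.mount
    [Unit]
    Description=Local mount of the gluster volume, after glusterd is up
    After=glusterd.service
    Requires=glusterd.service

    [Mount]
    What=localhost:/myvol
    Where=/mnt/myvol
    Type=glusterfs
    Options=defaults,_netdev

    [Install]
    WantedBy=multi-user.target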
16:04 nagis joined #gluster
16:04 nagis exit
16:05 sjoeboo so...upgraded from 3.4beta4 to the 3.4 release, following http://vbellur.wordpress.com/2012/05/31/upgrading-to-glusterfs-3-3/
16:05 glusterbot <http://goo.gl/qOiO7> (at vbellur.wordpress.com)
16:06 sjoeboo wait, no, this
16:06 sjoeboo http://vbellur.wordpress.com/2013/07/15/upgrading-to-glusterfs-3-4/
16:06 glusterbot <http://goo.gl/SXX7P> (at vbellur.wordpress.com)
16:06 sjoeboo there.
16:06 sjoeboo anyways, pretty normal stuff, stop glusterd, upgrade, start it back up
16:06 sjoeboo peers all see each other, and volume info looks good
16:07 sjoeboo but, starting the volume on host01 ONLY brings up teh bricks on that node, not the other.s
16:21 JoeJulian ccha: not sure what those are. Something's attempting to be deleted that's not there. Anything in gluster volume heal $vol info (and info split-brain, info heal-failed) that gives you any clues?
16:21 JoeJulian ccha: btw... that's the failed result of an operation. Pasting a larger sample set of log data into fpaste.org might give a better picture of what's going on.
16:22 JoeJulian sjoeboo: That's an interesting feature...
16:22 sjoeboo yeah, i hit this before
16:22 sjoeboo glusterd.info changed uuid
16:22 sjoeboo reverting them and restarting glusterd works
16:23 sjoeboo or, did, i'm doing that now...
16:24 JoeJulian kkeithley_: !!! rpm -ql glusterfs-server | grep info
16:25 JoeJulian /var/lib/glusterd/glusterd.info
16:26 JoeJulian sjoeboo: Crap. That's going to bring a lot of people in here frustrated....
16:29 JoeJulian Wierd, though. I wonder how that ended up in the build tree.
16:29 T0aD any tip on setting 10,000 quotas in a snap ?
16:29 daMaestro JoeJulian, likely from the tests?
16:29 daMaestro JoeJulian, who built that build with issues? what channels is it in?
16:29 JoeJulian I didn't think those got run on a koji build.
16:30 daMaestro JoeJulian, what build is that?
16:32 daMaestro umm, that file is specifically ghosted
16:32 daMaestro created, and then ghosted
16:33 JoeJulian kkeithley_ did. I'm pretty sure it's only in Fedora 19+
16:33 * JoeJulian throws Kaleb under the bus...
16:33 daMaestro yeah, i'm not seeing it in the built 3.4.0-1.el6 rpms
16:33 JoeJulian Gah, foo
16:33 * JoeJulian throws himself under a bus...
16:33 JoeJulian I was looking at the wrong machine...
16:33 JoeJulian ghosted?
16:33 * T0aD parks the bus 'i think i hit a cat there'
16:34 JoeJulian So it can show up in -ql but not actually be in it?
16:34 T0aD 1736 quotas set in 60 min.
16:35 daMaestro JoeJulian, yes, that is a ghost
16:35 daMaestro the rpm owns the file, but ships no data
16:35 JoeJulian T0aD: If it were me, I'd add 1, see what gets changed in /var/lib/glusterd/* and extrapolate.
16:35 JoeJulian Ah, ok. Whew.
16:35 T0aD yeah
16:35 T0aD im planning to strace
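On T0aD's 10,000-quota problem: the obvious (and, as he is finding, slow) way is to loop over the CLI; every call goes through glusterd, which is where the time goes, hence JoeJulian's suggestion to watch what a single call changes under /var/lib/glusterd and extrapolate. A hedged sketch of the brute-force loop; volume name, paths and limit are hypothetical:

    gluster volume quota users enable
    for i in $(seq 1 10000); do
        gluster volume quota users limit-usage /user$i 1GB
    done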
16:35 daMaestro and looking at the latest builds, they are all ghosted
16:36 JoeJulian So sjoeboo's problem was self-inflicted.
16:36 JoeJulian Is there a way to tell from an rpm query, or is that just in the spec file?
16:37 daMaestro it might have been the *.upgrade code patch
16:37 sjoeboo yeah, our build system may have gotten in the way w/ the way it names our rpms; as far as rpm was concerned, going from 3.4.0beta4 -> 3.4.0-1 isn't an "upgrade"
16:37 daMaestro path*
16:38 daMaestro sjoeboo, what is the exact versions you were going between so i can make sure the specs treat glusterd.info correctly please?
16:39 sjoeboo 3.4.0beta4-1.el6 to 3.4.0-1.el6
16:39 daMaestro JoeJulian, what do you mean?
16:40 daMaestro i don't see a release version -1, only -0.9
16:40 daMaestro is that a typeo?
16:40 daMaestro glusterfs-3.4.0-0.9.beta4.el6
16:40 rastar joined #gluster
16:40 daMaestro there is also 0.8 and 0.9 builds for beta4, which one were you using?
16:41 NuxRo guys, what options do i have to auth UFO users? Can I do it against an existing mysql db for example?
16:41 JoeJulian Why is the .info file, a program state file, included even as a %ghost?
16:42 daMaestro i gamble it's to ensure permissions, but i don't know specifically
16:42 daMaestro ah
16:42 daMaestro # This is really ugly, but I have no idea how to mark these directories in an
16:43 daMaestro # other way. They should belong to the glusterfs-server package, but don't
16:43 daMaestro # exist after installation. They are generated on the first start...
16:43 JoeJulian Or, should the glusterd.info be considered a configuration file? Should it be in /etc?
16:43 daMaestro it's not just .info and they are in the correct location in sharedstate
16:44 daMaestro http://fpaste.org/25733/93040137/
16:44 glusterbot Title: #25733 Fedora Project Pastebin (at fpaste.org)
16:45 kaptk2 joined #gluster
16:45 daMaestro hmm a lot of that sticks out as suspect
16:45 JoeJulian But we don't want glusterd.info deleted on an uninstall.
16:45 daMaestro kkeithley, what is the logic behind these %ghosts?
16:46 daMaestro rather, why does the rpm have to own these? uninstall cleanup? version migration cleanup?
16:46 duerF joined #gluster
16:47 JoeJulian There are times when a file should be owned by the package but not installed - log files and state files are good examples of cases you might desire this to happen. The way to achieve this, is to use the %ghost directive. By adding this directive to the line containing a file, RPM will know about the ghosted file, but will not add it to the package.
16:47 daMaestro JoeJulian, i don't know why things are this way, we'll have to wait for kkeithley
16:47 daMaestro right, i'm aware of why to use %ghost
16:48 JoeJulian [The file] will be added to the rpm database, as we can see from querying the file, however it is not visible from a package listing, but as it is owned by the package it will be removed when the package is removed.
16:48 daMaestro okay so it sounds like it is for cleanup operations
16:48 JoeJulian Sorry, not trying to be pedantic.
16:49 daMaestro it's also a packager decision that was not mine ;-)
16:49 daMaestro so we are just guessing
16:49 daMaestro however, i do not believe it would have caused sjoeboo's issue
16:49 JoeJulian Holy crap... time has gotten away from me. I'm out for a while.
16:50 daMaestro unless we see %ghost being treated differently in post
16:50 JoeJulian Oh, sure. What he's saying is that he uninstalled his package that wouldn't upgrade (probably self-built I'm guessing). Then when he installed the new package, his glusterd.info file was gone and a new one with a new uuid was created.
16:51 daMaestro oh, well yeah
16:51 daMaestro psh
16:51 daMaestro sorry for not reading the full issue. yes, that would cause rpm to remove the info file
16:51 daMaestro but not during a normal upgrade path
16:51 JoeJulian Right.
16:52 daMaestro however, as glusterd.info is more then just a state file (it's contents is actually important for the functioning of the cluster) it should likely be marked as conf noirelace
16:52 JoeJulian There are occasions where I've uninstalled and reinstalled the package for reasons I can no longer remember. I'd like that info to at least be saved as an .rpmsave
16:53 daMaestro %config(noreplace)
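For anyone following the packaging thread: the distinction being debated is roughly the one between these two spec-file stanzas. A hedged illustration, not copied from the actual glusterfs.spec:

    # %ghost: the rpm owns the path (and removes it on erase) but ships no file
    %ghost %{_sharedstatedir}/glusterd/glusterd.info

    # %config(noreplace): treated as local configuration; an upgrade keeps the
    # on-disk copy, and an erase of a modified file leaves an .rpmsave behind
    %config(noreplace) %{_sharedstatedir}/glusterd/glusterd.info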
16:54 _pol joined #gluster
17:06 shawnlopresto joined #gluster
17:07 Debolaz joined #gluster
17:08 shawnlopresto Anyone around who may be able to help with an odd geo-replication issue? Scoured the docs, forums, etc and see no instances of it anywhere. Im running 3.3.1 on cent6.4.
17:09 shawnlopresto Seeing "E [resource:194:logerr] Popen: /usr/libexec/glusterfs/gsyncd> [2013-07-16 13:07:41.909518] W [rpc-transport.c:174:rpc_transport_load] 0-rpc-transport: missing 'option transport-type'. defaulting to "socket"" in the logs after starting geo-rep. It immediately goes faulty
17:09 shawnlopresto Was working up until about a month ago. Just now getting around to looking at it today.
17:09 shawnlopresto All transport types are set to tcp on each volume
17:10 badone joined #gluster
17:10 tqrst joined #gluster
17:10 dblack joined #gluster
17:10 DataBeaver joined #gluster
17:10 stickyboy joined #gluster
17:10 arusso joined #gluster
17:10 semiosis joined #gluster
17:10 JordanHackworth joined #gluster
17:10 jtriley joined #gluster
17:10 ofu_ joined #gluster
17:10 hflai joined #gluster
17:10 hagarth__ joined #gluster
17:13 joelwallis joined #gluster
17:22 kkeithley_ There is no actual /var/lib/glusterd/glusterd.info
17:26 kkeithley_ in the rpm. It's a %ghost in the spec file so that something like an `rpm -q --whatprovides /var/lib/glusterd.info` will say it's owned by the glusterfs-server RPM
17:26 kkeithley_ so what exactly is the concern?
17:27 juhaj_ joined #gluster
17:27 samppah_ joined #gluster
17:27 cicero_ joined #gluster
17:27 daMaestro kkeithley_, when the rpm is removed, so is that file
17:28 abyss^__ joined #gluster
17:28 MinhP joined #gluster
17:28 daMaestro however, the end-user that had issues did an uninstall and then install, not an upgrade
17:28 NeonLich1 joined #gluster
17:28 daMaestro so the rpm did as was expected, it's just more maybe we should protect glusterd.info more
17:30 kkeithley_ hmm. yes. I don't specifically remember who provided the %ghost files. Maybe ndevos remembers.
17:31 kkeithley_ And BTW, 3.4.0-1.fc19 is waiting to be pushed to updates-testing.
17:33 RobertLaptop joined #gluster
17:33 kkeithley_ so probably in %post server we should save a /var/lib/glusterd/glusterd.info.rpmsave
17:34 kkeithley_ and/or we should tag it as a %config
17:35 bsaggy joined #gluster
17:37 kkeithley_ hmmm, great, www.rpm.org docs only say that %config files get additional processing, but I guess what the additional processing is is a secret.
17:39 haidz joined #gluster
17:40 wgao__ joined #gluster
17:40 aliguori joined #gluster
17:40 kkeithley_ hmm, it's been that way since 3.3.1 at least
17:42 kkeithley joined #gluster
17:50 kkeithley_ @metric
17:50 glusterbot kkeithley_: I do not know about 'metric', but I do know about these similar topics: 'Joe's performance metric'
17:50 kkeithley_ @Joe's performance metric
17:50 glusterbot kkeithley_: nobody complains.
17:50 kkeithley_ yeah
17:58 chirino joined #gluster
17:59 samppah_ :O
18:03 glusterbot New news from newglusterbugs: [Bug 985074] Rename of unlinked hard link fails <http://goo.gl/gFFKJ>
18:06 semiosis :O
18:08 tqrst O:
18:08 hagarth :O
18:18 jebba joined #gluster
18:20 skyw joined #gluster
18:30 Recruiter joined #gluster
18:33 glusterbot New news from newglusterbugs: [Bug 985085] Install virt.group file into /var/lib/glusterd/groups/ <http://goo.gl/PzPu9>
18:41 neofob FYI, I just tried http://goo.gl/gFFKJ Bug 985074 on my Debian Wheezy running kernel 3.9.9; it works fine
18:41 neofob I wonder if it is the kernel version issue (RHEL 6.4 runs 2.6.32, IIRC)
18:41 glusterbot Title: Bug 985074 Rename of unlinked hard link fails (at goo.gl)
18:52 mooperd joined #gluster
19:02 soukihei_ how does gluster handle concurrent write operations to the same file on a single brick?
19:09 chirino joined #gluster
19:18 chirino joined #gluster
19:27 puebele1 joined #gluster
19:29 plarsen joined #gluster
19:43 semiosis soukihei: you tell us.
19:44 semiosis soukihei: it should handle that fine, with the FUSE client.  if not, that's a bug
19:44 soukihei yeah, I haven't seen a problem with my distributed-replicated environment. I was asking more to understand how it is working under the hood.
19:46 semiosis oh i see, i thought you were asking how *well* it handled it
19:46 soukihei no, just an informational query
19:46 soukihei poorly phased
19:50 _pol joined #gluster
19:56 semiosis i dont think xattrs are used for that, so probably some state held in memory in the brick glusterfsd processes
19:57 semiosis just guessing tho
19:57 daMaestro joined #gluster
20:03 soukihei ok
20:03 jag3773 joined #gluster
20:04 jdarcy joined #gluster
20:06 soukihei I reading a paper on Ceph right now, which is POSIX compliant. Is Gluster POSIX compliant? If it is, then I believe I have a handle on how it does concurrent writes
20:06 tjstansell joined #gluster
20:07 soukihei just found the answer to that
20:07 soukihei it is
20:16 mtrythall joined #gluster
20:17 semiosis mtrythall: welcome
20:18 mtrythall :)
20:18 mtrythall howdy
20:18 semiosis glusterbot: op
20:18 semiosis mtrythall: ^
20:19 mtrythall neat!
20:19 piotrektt hey. is this info in release notes about libgfapi for samba accurate. on one page it says samba can connect by libgfapi on other its not even mentioned.
20:19 semiosis @3.3 release notes
20:19 glusterbot semiosis: I do not know about '3.3 release notes', but I do know about these similar topics: '3.4 release notes'
20:19 semiosis @3.4 release notes
20:19 glusterbot semiosis: http://goo.gl/AqqsC
20:20 aknapp joined #gluster
20:20 piotrektt so, how can one use to acess glusterfs without fuse with samba?
20:21 semiosis i found this PDF... http://www.gluster.org/community/documentation/images/3/33/SMB-GlusterDevMar2013.pdf
20:21 glusterbot <http://goo.gl/P125u> (at www.gluster.org)
20:21 semiosis on google
20:23 semiosis piotrektt: further googling revealed this: https://forge.gluster.org/samba-glusterfs/samba-glusterfs-vfs
20:23 glusterbot <http://goo.gl/zTkuo> (at forge.gluster.org)
20:23 tjstansell hey folks. so i'm trying to do a live upgrade from a 3.3.2qa1 to 3.4.0 with a 2-node replication config.
20:23 piotrektt semiosis, yup ive been thee but it seems like this option is not developed yet
20:24 semiosis aww
20:24 piotrektt it says incubating
20:24 T0aD we need to restart glusterfs deamons to reload the configuration files ?
20:24 tjstansell i took one node down entirely, upgraded the software, thens started up gluster on that node.
20:24 piotrektt those release notes are really confusing :)
20:24 semiosis T0aD: usually you shouldn't be editing files, but yes if you do probably need to restart the respective daemon
20:25 jskinner_ joined #gluster
20:25 T0aD ok there is no reload no nothing
20:25 semiosis tjstansell: ,,(3.4 upgrade notes)
20:25 T0aD well like the old days
20:25 glusterbot tjstansell: http://goo.gl/SXX7P
20:25 tjstansell the fuse mounts don't seem to be picking up the brick port changes
20:25 T0aD semiosis, sorry, i cant set 10,000 quota limits in an appropriate time
20:25 T0aD and im looking to be able to set 200,000
20:25 jskinner_ I seem to be having issues with Gluster 3.4, running on CentOS 6.3 fuse clients mounting qcow2 files with cache=none
20:26 tjstansell semiosis: yes, i'm basically attempting "B" from that page and the "client" is not picking up the new port numbers for the bricks that got upgraded.
20:26 T0aD it seems the volumes configuration files are the same on every brick, which is very good
20:27 tjstansell i can trace the process and it's still trying to connect to port 24009 and 24010 for the two bricks that are on the upgraded server, rather than the new ports 49152 and 49153.
20:27 tjstansell does a port change require remounting the client to pick up the updated config?
20:27 jskinner_ I have tried specifying direct-io-mode=enable in fstab, but I am still having the issue. Is there something I need to set on the volume itself?
20:27 tjstansell which sort of negates the ability to do a rolling upgrade ...
20:28 semiosis tjstansell: have you upgraded all of your servers already?
20:28 semiosis client may be getting old port numbers from a server that hasnt been upgraded yet, maybe
20:28 tjstansell only 1 of two ... each server is also a client.
20:28 semiosis just guessing
20:28 semiosis ah that's tricky
20:29 semiosis i think you should upgrade all servers before any clients
20:29 semiosis and when servers *are* also clients, that's like not possible :/
20:29 tjstansell if i do 'gluster volume status' on host01 (the one that hasn't been upgraded) i see the correct port numbers for all bricks.
20:30 semiosis well then i'm at a loss
20:32 tjstansell i guess i'm not sure how this would be any different than if server A and B were both being upgraded ... and you had a separate client C.
20:32 tjstansell after upgrading server A, it's bricks now have different port numbers.  how does client C know those have changed?
20:33 tjstansell is there a way to refresh a client's volume config? or is that supposed to happen automatically somehow?
20:33 tjstansell so that it also knows if new servers are added, bricks removed, etc?
20:33 semiosis tjstansell: you could try flipping an option, like client-log-level, or something
20:37 tjstansell that didn't seem to help.
20:37 tjstansell i can't just HUP the process or anything can I?
20:38 semiosis make two new client mounts, one to the local server & one to the remote server, check their logs, see if either, both, or neither, gets the right ports
20:38 semiosis that's at least diagnostic of the problem
20:38 tjstansell true...
20:38 tjstansell i'll check
20:39 tjstansell creating a new mount on host01, which talks to host01 for it's config, connects to the new port number for the brick on host02.
20:40 tjstansell so re-mounting does appear like it would solve this problem.
20:40 tjstansell which would indicate that the existing client glusterfs process is not picking up any changes to the config.
20:41 semiosis tjstansell: one thing i might try, though could kill your existing client, would be breaking its 24007 connection
20:41 semiosis maybe it will reconnect & recover, maybe it will die, maybe other things will happen, i dont know
20:41 tjstansell heh...
20:51 tjstansell well, i tried sending a HUP to the glusterfs process, and it did trigger it to talk to the server and this was in the logs: "Fetching the volume file from server...", then "0-glusterfs: No change in volfile, continuing"
20:51 tjstansell which is interesting
20:52 tjstansell but i'm wondering if that's referring to the <vol>-fuse.vol file rather than anything that contains the actual port numbers for each brick
21:03 neofob left #gluster
21:06 tjstansell looking at glusterfs_volfile_reconfigure() in glusterfsd-mgmt.c, it seems it only reconfigure things if the 'graph' has changed, not if just options have changed.
21:07 tjstansell so i think it would take adding/removing bricks or something on that scale to get a client to renegotiate with each server....
21:07 tjstansell but then, i only skimmed the code and could very easily not be understanding things :)
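Summarizing tjstansell's findings for anyone hitting the same thing: after upgrading a server in place from 3.3.x to 3.4.0 the bricks move from the old 24009+ ports to 49152+, and a native client that was already mounted keeps trying the old ports; only a fresh mount picked up the new ones. A hedged workaround sketch, volume name and mount point hypothetical:

    # on a server, confirm which ports the upgraded bricks now listen on
    gluster volume status myvol

    # on each client mounted before the upgrade, remount to refetch the volfile
    # and reconnect to the new brick ports
    umount /mnt/myvol
    mount -t glusterfs server1:/myvol /mnt/myvol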
21:15 rcoup joined #gluster
21:20 tqrst 3.3.1: I was moving a brick from $mnt to $mnt/brick. To do so, I killed the corresponding glusterfsd process, wiped all xattrs from $mnt, moved everything under $mnt to $mnt/brick, called 'gluster volume replace-brick bigdata ml26:$mnt ml26:$mnt/brick commit force'... and then everything segfaulted. Everything as in every client and server on all machines.
21:20 tqrst After restarting everything under the sun, some of the servers refused to launch glusterd because 'Unable to retrieve store handle for /var/lib/glusterd/vols/bigdata/bricks/ml26:-mnt-donottouch-localb, error: No such file or directory'. They seem to be pointed to the old brick path rather than .../brick. How do I fix this mess?
21:23 tqrst it's tempting to jump straight to 3.4 at this point since everything's down anyway, and hope for whatever is causing this instability to be fixed
21:39 tjstansell well, while searching for similar bugs, i think this behavior is related to bug 960285
21:39 glusterbot Bug http://goo.gl/8sKao unspecified, unspecified, ---, kparthas, ASSIGNED , Client PORTBYBRICK request for replaced-brick will retry forever because brick port result is 0
21:39 tjstansell in that case, it was a replace-brick but i think the core problem is the lack of the client being able to pick up brick port number changes
21:43 tqrst how can I file a bug against the gluster website? There is no web component on bugzilla.
21:43 glusterbot http://goo.gl/UUuCq
21:44 tqrst (as in, there is a bug in the website and I want to report it)
21:44 tqrst I'll just use unclassified for now
21:47 [o__o] joined #gluster
21:47 mtrythall howdy [o__o]!
21:48 mtrythall Hello everyone! This channel is now being logged by BotBot.me. You can view the logs here: https://botbot.me/freenode/gluster/
21:48 T0aD hi botbot ! thanks for spying !
21:48 glusterbot Title: Logs for #gluster | BotBot.me [o__o] (at botbot.me)
21:49 T0aD sexy web interface
21:49 jskinner_ Ok so I lied, my client nodes are actually CentOS 6.2; does any one know if direct_io fuse supported in CentOS 6.2?
21:50 tqrst certainly looks nicer than perlgeek's
21:50 T0aD i feel secure they use https to display irc logs
21:51 T0aD makes me want to open up
21:51 tqrst either way, I'm still stuck with dozens of coredumps in / and no gluster running
21:51 tqrst P(things blowing up | dinner is almost ready) >> P(things blowing up | dinner is not almost ready)
21:52 mtrythall semiosis: ping us in #lincolnloop if you have any problems :)
21:53 JoeJulian jskinner_: It's right on the cusp. I can't remember exactly when the fuse patch went in to make that work.
21:54 ujjain joined #gluster
21:54 T0aD hmm it seems gluster's not loading the quotas and the client cannot mount anymore gfs
21:57 jskinner_ is that fuse patch from Gluster, or from CentOS
21:58 JoeJulian It's from the kernel, so packaged by Red Hat and re-packaged by CentOS.
21:58 jskinner_ aha, so it would be my kernel version that ultimately supports the direct_io feature
21:59 jebba joined #gluster
22:00 jskinner_ once I have everything in place, is the only thing I need to do is specify direct io on volume mount? I shouldn't have to do anything to volume config on server for direct_io would I?
22:00 mtrythall left #gluster
22:01 JoeJulian jskinner_: I think that's correct. Never had a need to care, so I don't have a lot of details around that.
22:01 jskinner_ ok cool
22:06 glusterbot New news from newglusterbugs: [Bug 985131] bugs.gluster.com should redirect to bugzilla.redhat.com, preferably redirecting bug ids too <http://goo.gl/R1Sn1>
22:07 semiosis glusterbot: op
22:07 tjstansell left #gluster
22:08 Topic for #gluster is now Gluster Community - http://gluster.org | Q&A - http://community.gluster.org/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
22:10 jskinner_ awesome, looks like support is available, I just need to do a yum update kernel
22:10 semiosis new channel logs
22:10 semiosis https://botbot.me/freenode/gluster/
22:10 glusterbot Title: Logs for #gluster | BotBot.me [o__o] (at botbot.me)
22:15 mooperd joined #gluster
22:16 aliguori joined #gluster
22:47 l0uis_ joined #gluster
22:47 mriv_ joined #gluster
22:47 aliguori joined #gluster
22:47 jebba joined #gluster
22:47 ujjain joined #gluster
22:47 daMaestro joined #gluster
22:47 _pol joined #gluster
22:47 puebele1 joined #gluster
22:47 chirino joined #gluster
22:47 Recruiter joined #gluster
22:47 kkeithley joined #gluster
22:47 wgao__ joined #gluster
22:47 NeonLicht joined #gluster
22:47 abyss^__ joined #gluster
22:47 cicero_ joined #gluster
22:47 samppah_ joined #gluster
22:47 juhaj joined #gluster
22:47 shawnlopresto joined #gluster
22:47 duerF joined #gluster
22:47 zaitcev joined #gluster
22:47 premera joined #gluster
22:47 ultrabizweb joined #gluster
22:47 hagarth joined #gluster
22:47 bennyturns joined #gluster
22:47 NeatBasis_ joined #gluster
22:47 tru_tru joined #gluster
22:47 atrius joined #gluster
22:47 fcami joined #gluster
22:47 Peanut joined #gluster
22:47 nixpanic joined #gluster
22:47 edong23 joined #gluster
22:47 glusterbot joined #gluster
22:47 Technicool joined #gluster
22:47 cfeller joined #gluster
22:47 ke4qqq joined #gluster
22:47 rwheeler joined #gluster
22:47 xavih joined #gluster
22:47 mtanner_ joined #gluster
22:47 pjameson joined #gluster
22:47 fleducquede joined #gluster
22:47 sac joined #gluster
22:47 msvbhat joined #gluster
22:47 Humble joined #gluster
22:47 It_Burns joined #gluster
22:47 klaxa joined #gluster
22:47 cyberbootje joined #gluster
22:47 foster joined #gluster
22:47 Cenbe joined #gluster
22:47 zwu joined #gluster
22:47 JusHal joined #gluster
22:47 _br_ joined #gluster
22:47 gmcwhistler joined #gluster
22:47 ThatGraemeGuy joined #gluster
22:47 masterzen joined #gluster
22:47 GabrieleV joined #gluster
22:47 ingard__ joined #gluster
22:47 chlunde joined #gluster
22:47 portante joined #gluster
22:47 yosafbridge joined #gluster
22:47 _Bryan_ joined #gluster
22:47 johnmark joined #gluster
22:47 penglish joined #gluster
22:47 JoeJulian joined #gluster
22:47 SteveCooling joined #gluster
22:47 stigchristian joined #gluster
22:47 theron joined #gluster
22:47 ndevos joined #gluster
22:47 jbrooks joined #gluster
22:47 Dave2 joined #gluster
22:47 jones_d joined #gluster
22:47 lyang0 joined #gluster
22:47 twx joined #gluster
22:47 haakon_ joined #gluster
22:47 social_ joined #gluster
22:47 lanning joined #gluster
22:47 gluslog joined #gluster
22:47 al joined #gluster
22:47 matiz joined #gluster
22:47 T0aD joined #gluster
22:47 bivak joined #gluster
22:47 sonne joined #gluster
22:47 georgeh|workstat joined #gluster
22:47 js_ joined #gluster
22:47 morse joined #gluster
22:47 ninkotech__ joined #gluster
22:47 Avatar[01] joined #gluster
22:47 JonnyNomad joined #gluster
22:47 Ramereth joined #gluster
22:47 rnts joined #gluster
22:47 the-me joined #gluster
22:47 purpleidea joined #gluster
22:47 bfoster joined #gluster
22:47 Kins joined #gluster
22:47 soukihei joined #gluster
22:47 johnmorr_ joined #gluster
22:47 madd joined #gluster
22:47 Shdwdrgn joined #gluster
22:47 tjikkun_ joined #gluster
22:47 frakt_ joined #gluster
22:47 VeggieMeat joined #gluster
22:47 jiqiren joined #gluster
22:47 furkaboo_ joined #gluster
22:47 jiffe98 joined #gluster
22:47 eightyeight joined #gluster
22:47 bdperkin joined #gluster
22:47 tw joined #gluster
22:47 Oneiroi joined #gluster
22:47 paratai joined #gluster
22:47 pull_ joined #gluster
22:47 mrEriksson joined #gluster
22:47 irk joined #gluster
22:47 Goatbert joined #gluster
22:47 Gugge joined #gluster
22:47 sysconfi- joined #gluster
22:47 a2 joined #gluster
22:47 ccha joined #gluster
22:47 stopbit joined #gluster
22:47 atrius` joined #gluster
22:47 efries_ joined #gluster
22:47 zykure joined #gluster
22:47 xymox joined #gluster
22:47 m0zes joined #gluster
22:47 lkoranda joined #gluster
22:47 mynameisbruce joined #gluster
22:47 nightwalk joined #gluster
22:47 avati joined #gluster
22:47 [o__o] joined #gluster
22:47 ujjain joined #gluster
22:47 ujjain joined #gluster
22:49 GLHMarmo1 joined #gluster
22:50 JoeJulian semiosis: Thanks for pointing me at gitlab. I've been using it a lot lately.
22:50 semiosis yw, me too
22:50 T0aD nice
22:50 T0aD its like a personal github you can install on your server ?
22:50 semiosis yes
22:51 T0aD cool.
22:51 semiosis except it's all about the private repo & team
22:51 T0aD most git viewers are crappy
22:51 semiosis so not for hosting public repos for the world to collab on
22:51 T0aD im not sure to understand what you mean
22:51 semiosis but for internal projects it's amazing
22:51 [o__o] joined #gluster
22:51 T0aD unless you mean you need an account on it to create a git
22:52 T0aD in which case its pretty much the same as github
22:52 semiosis even to just browse code, issues, wiki, you need an account
22:52 semiosis without an account all you can do is git clone/pull
22:52 semiosis no web ui at all
22:52 semiosis besides that it's pretty much the same
22:53 T0aD looks sexy
22:53 semiosis we've built an entire PaaS around it, and the uptime has been waaaay better than github
22:54 T0aD http://www.bpaste.net/show/uEPhkmJi3ShktREYpthO/ <- i bet thats not the correct way to use features.limit-usage
22:54 glusterbot <http://goo.gl/r2nvJ> (at www.bpaste.net)
22:54 T0aD i feel like the quota needs some improvment
22:56 xdexter joined #gluster
22:56 JoeJulian You could have tested that before GA... ;)
22:57 xdexter Hello, its possible use Gluster with Aws S3?
22:58 semiosis xdexter: in theory, maybe.  in practice, i doubt it.
22:58 JoeJulian S3 does not present a posix filesystem.
22:58 semiosis JoeJulian: there are (lame) adapters
22:58 semiosis but they dont work in my experience
22:59 xdexter right...
22:59 JoeJulian I don't believe any that I've seen are fully posix compliant.
22:59 semiosis fair enough
23:01 semiosis xdexter: why?
23:03 xdexter I need a solution to replicate volumes to my S3; I tried rsync and s3cmd, but since I have many files and about 100GB of data, the process takes too long.
23:03 xdexter only to compare the files rsync takes about 3 hours before starting the copy
23:04 T0aD xdexter, i have a massive structure of files to check for regular backups
23:04 xdexter and is a Web directory, then synchronization must be constant
23:04 semiosis xdexter: are you using ebs or ephemeral for your brick storage?
23:04 xdexter I do not even use Gluster
23:04 semiosis oh i was going to suggest ways to get brick disk images into s3
23:04 T0aD what i do is i scan the top directories (here its home directories): just checking the ctime of the subdirectories tree and retrieving the most recent one
23:04 semiosis i se
23:04 semiosis e
23:04 xdexter semiosis, raid10 with 4  EBS
23:05 T0aD so i scan quick and i know what to rsync / rdiff-backup everyday
23:06 xdexter T0aD, would not be much of a backup, because I have to synchronize directories a maximum in 5 minutes
23:06 semiosis xdexter: i wrote a (cheezy) python script that can upload a disk image from an ec2 instance to s3
23:06 semiosis probably wont get what you want
23:06 T0aD xdexter, yeah thats definitely short
23:06 xdexter semiosis, yes ;/
23:07 xdexter T0aD, can you give me an example?
23:07 T0aD maybe you can have other ways to know what changes, hell the limit is your creativity
23:07 xdexter please
23:07 T0aD an example of what ? i described the whole procedure ?
23:07 T0aD other things i do is scan ftp logs / mysql binary logs to detect changes
23:08 xdexter T0aD, problem is: I know what to change or not I need to compare with the files from S3, and it is this process that takes, you know?
23:09 * semiosis wonders if geo-sync could do an s3 slave
23:09 T0aD i dont understand what you need exactly or whats your context
23:09 semiosis that would be neat
23:09 semiosis s/geo-sync/geo-rep/
23:09 glusterbot What semiosis meant to say was: * semiosis wonders if geo-rep could do an s3 slave
23:09 xdexter semiosis, me too!
23:10 semiosis pretty sure it can't now, but a lot of that code is python, maybe it could be extended
23:10 T0aD s/too/neither/
23:10 semiosis the python aws library (boto) is excellent
23:10 T0aD glusterbot, you punk!
23:10 glusterbot T0aD: Error: I couldn't find a message matching that criteria in my history of 1000 messages.
23:10 semiosis ha
23:10 xdexter T0aD, I have about 2 million files, by comparing them to find the Rsync modification is that it takes, you know?
23:10 T0aD xdexter, yes, and i proposed you my solutions
23:10 T0aD not sure you understood it though
23:11 xdexter T0aD, I understand, but do not think you go solve the problem ...
23:11 T0aD well it solves mine with millions of files
23:12 jag3773 joined #gluster
23:12 T0aD a lil 10M on a 250 GB mechanical drive :D
23:13 xdexter right, then you say you use rsync only files modified to X times, right?
23:14 T0aD i have one source (1 hdd) and one destination (the backup)
23:14 T0aD the files in my case only change at the source
23:15 T0aD i spread / simplify the main work of solutions like rsync (scanning what changes) by dividing the sources (in my case 12,000 websites)
23:16 xdexter T0aD, right, my question is: how do you know that the file has changed? you compare with what?
23:16 T0aD every 10 minutes or so, i scan those directories (home directories of those websites) - well the ones that show changes in the last month - to retrieve the biggest ctime (only stat()ing directories
23:16 T0aD i have a db where i store the last backup date and the last changed date
23:16 T0aD then i launch rsync (well rdiff-backup) on directories where last_ctime > last_backup
23:16 xdexter hmmm
23:17 xdexter understand
23:17 T0aD and i put out of the scanning loop directories that didnt change in a month
23:17 T0aD i put them back in when i detect some change from an external access (in my case FTP for instance)
23:18 T0aD maybe not the best solution, but its mine and its working great so far :)
23:19 xdexter understand
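T0aD's scheme boils down to: per site home directory, find the newest ctime in the subtree (stat()ing directories only), compare it against the last backup time stored in a small database, and only run rdiff-backup on directories that changed since. A hedged sketch of that scan with GNU find; the paths and the lookup_last_backup helper are hypothetical:

    # newest directory ctime under one site, as a unix timestamp
    latest_ctime() {
        find "$1" -type d -printf '%C@\n' | sort -n | tail -1
    }

    for home in /home/sites/*; do
        last_backup=$(lookup_last_backup "$home")   # hypothetical helper backed by the db
        if [ "$(latest_ctime "$home" | cut -d. -f1)" -gt "$last_backup" ]; then
            rdiff-backup "$home" "/backup$home"
        fi
    done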
23:20 T0aD now lets agree im a genius :P
23:20 xdexter kkkk
23:21 T0aD i wish there was some doc to tell us how to manually feed quotas
23:21 JoeJulian Well, you know what they say about wishes...
23:21 T0aD they only realize at xmas ? :P
23:25 xdexter T0aD, you've used git for it?
23:25 T0aD xdexter, git for what ?
23:25 xdexter T0aD, in place rsync to synchronize
23:25 T0aD but its not published if thats what you re asking
23:26 T0aD its private and totally customized to my architecture.. so wouldnt help much
23:27 xdexter but I think it'll run into the same problem as rsync, comparing all files ..
23:27 T0aD yeah thats a huge process that should be divided
23:28 xdexter divided as you speak?
23:28 xdexter with your solution?
23:28 T0aD yeah
23:28 T0aD well thats my way of solving it
23:29 xdexter ok
23:40 tqrst does anyone have a volume that was created under 3.4 handy? I'd appreciate it if you could post the output of 'find /var/lib/glusterd/vols' somewhere. I'm trying to see how much of what I have in mine is old cruft accumulated through various updates vs. stuff that should actually be there.
23:41 T0aD http://www.bpaste.net/show/TgsoiiNgTUEPSuENo4yt/
23:41 glusterbot <http://goo.gl/gOJzB> (at www.bpaste.net)
23:42 tqrst T0aD: thanks!
23:42 T0aD np
23:43 rcoup joined #gluster
23:45 xdexter T0aD, you use any software to their customers management? cpanel? plesk?
23:46 T0aD just mine
23:47 xdexter own solution you say?
23:47 T0aD yeah i developped my own cpanel
23:47 T0aD couldnt find what i needed or was very costly
23:47 xdexter ah roght
23:47 xdexter right
23:47 xdexter good
23:48 tqrst @paste
23:48 glusterbot tqrst: For RPM based distros you can yum install fpaste, for debian and ubuntu it's pastebinit. Then you can easily pipe command output to [f] paste [binit] and it'll give you a URL.
23:50 badone joined #gluster
23:53 xdexter T0aD, do you use atime, ctime or mtime?
23:53 xdexter ;D
23:53 T0aD ctime
23:53 T0aD im using find but i made 2 programs to do the same
23:54 xdexter ok
23:54 T0aD http://bpaste.net/show/CMQMQyBaOsmVRqt8erd2/ http://bpaste.net/show/BRing03fH85w6CQ2z9JG/
23:54 glusterbot Title: Paste #CMQMQyBaOsmVRqt8erd2 at spacepaste (at bpaste.net)
23:55 xdexter thanks
