IRC log for #gluster, 2013-08-14

All times shown according to UTC.

Time Nick Message
00:05 dhsmith_ joined #gluster
00:07 dbruhn If you edit the vol files, any change you make to the system by running the gluster commands will overwrite them
00:10 voronaam Well, I can't even edit the vol file to achieve what I was going to
00:10 voronaam What is the proper way to add a translator?
00:15 awheele__ joined #gluster
00:31 JoeJulian voronaam: There's an undocumented "filter" folder that you can use shell scripts to achieve things like that.
00:31 JoeJulian voronaam: Also, nfs is not faster, it's just less accurate.
00:31 sprachgenerator joined #gluster
00:32 voronaam wow, may I ask what you mean by "less accurate", please? :)
00:33 semiosis voronaam: use 'gluster volume set' to configure xlators.  see 'gluster volume set help' for main options and you may also be interested in the ,,(undocumented options)
00:33 glusterbot voronaam: Undocumented options for 3.4: http://goo.gl/Lkekw
00:33 semiosis editing volfiles is not recommended
00:33 semiosis those docs are from a time long ago
00:33 semiosis gotta run, good luck
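For reference, setting translator options through the CLI (rather than editing volfiles) looks roughly like this -- a sketch, with "myvol" and the chosen option as placeholders:

    # list the documented volume options
    gluster volume set help
    # tune a translator option on a volume
    gluster volume set myvol performance.cache-refresh-timeout 4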
00:33 JoeJulian The kernel nfs client uses the kernel FSCache. This keeps it from doing things like self-heal checks and stat updates.
00:33 voronaam Thank you!
00:33 voronaam Ah, yes, I noticed that the reread test was crazy fast
00:34 voronaam Matching local FS speed
00:34 JoeJulian NFS lookups will be faster as they're just accessing a cache. If you only had one client, that would be better. If you have multiple, now you're playing a guessing game.
00:34 JoeJulian The throughput of nfs is actually much slower for bulk operations.
00:35 JoeJulian ... and, of course, you have the loss of redundancy.
00:35 voronaam Well, I tested up to 4Gb files, being written and read in chunks from 1K to 8K. And NFS was always faster. Not by much on bulk operations though
00:36 JoeJulian Ok, now run 100 clients and do that...
00:36 voronaam Good point
00:37 JoeJulian I'm just pointing out that clustered systems aren't always as straight-forward as you're used to with local disk performance metrics.
00:37 JoeJulian That's why I use the patented ,,(Joe's performance metric)
00:37 glusterbot nobody complains.
00:38 voronaam Sure. The only massively distributed system I used before was Cassandra...
00:38 JoeJulian I like to point out that it's like comparing Apples to Orchards.
00:39 voronaam In any case, GlusterFS is up and running and matches the needs I have for it at the moment.
00:39 JoeJulian excellent
00:39 voronaam The issues I have are all nice-to-have. So, really happy the way it worked out so far.
00:40 voronaam Thank you very much for the support.
00:40 JoeJulian You're welcome. :)
00:40 voronaam If I grow to understand it better, I'll contribute to documentation ;)
00:40 JoeJulian Awesome. They totally need help in that realm.
01:03 asias joined #gluster
01:06 dhsmith joined #gluster
01:06 dhsmith joined #gluster
01:10 glusterbot New news from resolvedglusterbugs: [Bug 927146] AFR changelog vs data ordering is not durable <http://goo.gl/jfrtO>
01:10 nightwalk joined #gluster
01:11 bala joined #gluster
01:12 asias_ joined #gluster
01:18 nueces joined #gluster
01:26 kevein joined #gluster
01:32 RicardoSSP joined #gluster
01:51 vlad___ joined #gluster
01:51 vlad___ anyone here able to answer some basic gluster storage questions?
01:53 JoeJulian Yes, it does require that you use Linux.
01:53 vlad___ well, i'm certainly not a windows person so that should help. haha
01:53 JoeJulian Sorry, was just trying to guess at the question.
01:53 JoeJulian :P
01:53 vlad___ haha, it's all good!
01:54 vlad___ so, i'm trying to get into Gluster Storage, but i've been a TSM guy since i got into storage
01:54 JoeJulian I thought about being a further smart-ass and suggesting that I can only answer the very complex questions, not the basic ones... ;)
01:54 vlad___ haha, i appreciate the sympathy
01:55 vlad___ are you familiar with Tivoli Storage Manager at all so I can ask comparison questions so i can better understand gluster? because i don't have a firm grasp on gluster at all right now
01:55 JoeJulian No, in fact I was googling TSM to try to figure out what you were talking about.
01:56 vlad___ i feel like you're joking, but i'm not certain because it's hard to pick up sarcasm over text chat haha.
01:57 JoeJulian Nope. I've always been an open-source guy. Never encountered Tivoli.
01:57 vlad___ ahh, ok. no biggie. i'll try to explain myself as i ask a question then, if it doesn't work..well, we tried
01:58 harish joined #gluster
01:59 JoeJulian Hopefully their storage works better than their web site: (from the whitepaper link: http://goo.gl/A2TfZS )
01:59 glusterbot Title: IBM 2013/08/13 19:59:04 (at goo.gl)
01:59 JoeJulian "This service is temporarily unavailable. Please try again later. message code: 40" <sigh>
02:00 vlad___ haha, sounds like IBM has some issues to work out then
02:00 awheeler joined #gluster
02:00 JoeJulian Looks a lot like HDFS.
02:01 vlad___ well, i found out now that Gluster is really for High Availability vs storing versions of data
02:01 JoeJulian Right
02:01 vlad___ i think a lot of my questions have been answered..Gluster is kind of like a glorified NAS device right?
02:01 chjohnst_work do you mean GPFS? TSM is backup software
02:02 JoeJulian Actually, I think it's a little more like a de-glorified nas cluster.
02:02 JoeJulian Take all the glory away and just leave us with something useful.
02:03 vlad___ right, i was under the impression Gluster Storage was similar, except it took out the Backup Software of sorts...instead of having a centralized "TSM01" server, if you will, that has a DB that records where it stores all the client "node" servers' data (filespaces)
02:03 vlad___ each client server that's part of a Gluster node is its own host
02:03 vlad___ if that makes sense
02:04 chjohnst_work JoeJulian question about the quorum feature: if I have a two node gluster config, can a third node be installed without an actual brick configuration and just be an arbiter between the two nodes to handle quorum?
02:04 JoeJulian Apparently in 3.4 it can. I haven't looked at that feature yet personally.
02:05 chjohnst_work I have a lot of two node gluster setups, and we get hit with split brain issues (self healing is a lot better now) but would love to have a third node just running glusterd handling the fencing portion
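The 3.4 server-side quorum discussed here is driven by volume options; a sketch, assuming "myvol" as a placeholder and the brickless third node already joined to the trusted pool:

    # count every peer running glusterd toward quorum, bricks or not
    gluster volume set myvol cluster.server-quorum-type server
    # the ratio is cluster-wide, hence the special volume name "all"
    gluster volume set all cluster.server-quorum-ratio 51%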
02:07 DV joined #gluster
02:08 * JoeJulian wonders if that's more Capo Ferro fencing, or Agrippa...
02:08 chjohnst_work ha have no idea what that is
02:08 JoeJulian Ever watch The Princess Bride?
02:08 chjohnst_work ahhh
02:09 JoeJulian That's where I get all my fencing expertise.
02:09 chjohnst_work "My name is Intinyo Montago you killed my father"
02:10 asias_ joined #gluster
02:10 JoeJulian "You are using Bonetti's Defense against me, ah?" "I thought it fitting considering the rocky terrain." "Naturally, you must expect me to attack with Capo Ferro?" ...
02:11 chjohnst_work ha man you know a bunch of the terms
02:11 JoeJulian hehe
02:11 JoeJulian now I want to watch that movie again...
02:12 chjohnst_work wasnt andre the giant in that
02:12 JoeJulian yes
02:15 vlad___ alright, another question... what type of data does Gluster Storage primarily serve to keep at high availability? it's not for sensitive personal information that would require retention policies, it seems, but it's used for high availability for webpages and whatnot?
02:17 chjohnst_work not sure I get that question, use it for anything you want
02:17 JoeJulian web delivery is certainly one use. I use it for that, I actually have a bunch of mysql data on it for several of our services. Home directories, shared storage for various uses, pretty much anything that I don't want to have unavailable ever.
02:17 chjohnst_work I use it currently for static web pages and binary code across pacemaker clusters
02:18 chjohnst_work we are starting to look at it for homedirs as well, using a fuse client
02:18 vlad___ ohh,  cool!
02:18 chjohnst_work havnt checked if autofs supports fuse+gluster
02:19 vlad___ but, it wouldn't be for something like an Oracle or SQL database, right?
02:19 JoeJulian chjohnst_work: It does, but I found that just mounting the volume at boot was more efficient. Mounting is pretty time consuming.
02:20 chjohnst_work JoeJulian it can be yea, most of the use cases for home dirs here at my shop is for personal desktops and not prod machines
02:20 chjohnst_work so once the user is logged in, it's mounted indefinitely
02:20 JoeJulian vlad___: Depends on the transaction load and several tuning features.
02:21 JoeJulian Like I say, I do run mysql on it, but mine's pretty low volume.
02:21 vlad___ JoeJulian: do you set retention policies with Gluster? or as soon as something is changed/deleted, it's replicated and lost forever?
02:21 JoeJulian I would love to have some spare hardware to play with to see what kind of throughput I could push innodb to.
02:22 JoeJulian vlad___: Right, lost forever. 3.5 should have the ability to hook your brick filesystem/block device's checkpointing feature.
02:23 vlad___ ahh, ok. gotcha!
02:24 vlad___ so, for now, Gluster isn't necessarily a storage solution for all data? i.e. Exchange, Oracle DB, critical data (needing to be retained for 35+ days)
02:24 vlad___ but, it's great for high availability so users of webpages or what have you, experience very little if any downtime at all
02:25 chjohnst_work thats a fair statement I think
02:25 JoeJulian I wouldn't keep exchange on anything... ;)
02:25 chjohnst_work I personally wouldnt put oracle on gluster, I will leave that for my expensive SANs
02:25 vlad___ right, that's what i was trying to get a feel on
02:25 chjohnst_work JoeJulian HA
02:25 vlad___ and i hate exchange, so i feel ya on that haha
02:26 vlad___ but, you're saying 3.5 may get retention policy type features?
02:26 chjohnst_work do you mean snapshots?
02:26 vlad___ example: node1 wants to keep 3 versions of its data
02:27 vlad___ yeah, snapshots type deal
02:27 chjohnst_work I think thats 3.5
02:27 JoeJulian http://gluster.org/community/documentation/index.php/Features/File_Snapshot
02:27 glusterbot <http://goo.gl/pmgk2n> (at gluster.org)
02:28 bharata-rao joined #gluster
02:29 chjohnst_work bbiab, heading out for a pint
02:30 chjohnst_work JoeJulian if you have any docs on quorum besides the planning 3.4 page that would be great!
02:30 JoeJulian I usually use the source code if I need more than what's on the wiki. :/
02:35 aravindavk joined #gluster
02:36 jag3773 joined #gluster
02:36 aravindavk joined #gluster
02:36 aravindavk joined #gluster
02:49 jag3773 joined #gluster
02:50 vlad___ left #gluster
03:03 johnmwilliams joined #gluster
03:03 RameshN joined #gluster
03:09 kanagaraj joined #gluster
03:18 jag3773 joined #gluster
03:40 awheeler joined #gluster
03:42 itisravi joined #gluster
04:02 mohankumar joined #gluster
04:08 ppai joined #gluster
04:11 kaushal_ joined #gluster
04:28 dusmant joined #gluster
04:40 HappyAlexKG joined #gluster
04:41 HappyAlexKG Hello, guys, could please help me with glusterfs 3.4 ?
04:41 HappyAlexKG I have installed glusterfs-server on Debian without any issue, then without any issue added a peer and created a volume
04:42 HappyAlexKG then without any issue I mounted the volume on my client
04:42 HappyAlexKG nas.storage:/storage                                    1.8T  614G  1.2T  36% /storage
04:43 HappyAlexKG without any issue i can do: cd /storage/etc, but when i do: time ls -al it takes a very very long time
04:44 ndarshan joined #gluster
04:45 CheRi joined #gluster
04:46 mdjunaid joined #gluster
04:46 HappyAlexKG or it may show errors:
04:46 HappyAlexKG ls: cannot access ffserver.conf: Input/output error
04:46 HappyAlexKG ls: cannot access adduser.conf: Input/output error
04:46 HappyAlexKG ls: cannot access issue: Input/output error
04:46 HappyAlexKG in nfs.logs
04:47 HappyAlexKG [2013-08-14 04:27:49.969064] W [common-utils.c:2247:gf_get_reserved_ports] 0-glusterfs: could not open the file /proc/sys/net/ipv4/ip_local_reserved_ports for getting reserved ports info (No such file or directory)
04:47 HappyAlexKG [2013-08-14 04:27:49.969076] W [common-utils.c:2280:gf_process_reserved_ports] 0-glusterfs: Not able to get reserved ports, hence there is a possibility that glusterfs may consume reserved port
04:47 HappyAlexKG [2013-08-14 04:27:49.969515] I [client-handshake.c:1658:select_server_supported_programs] 0-storage-client-0: Using Program GlusterFS 3.3, Num (1298437), Version (330)
04:47 HappyAlexKG [2013-08-14 04:27:49.973428] I [client-handshake.c:1456:client_setvolume_cbk] 0-storage-client-0: Connected to 192.168.15.165:49152, attached to remote volume '/storage'.
04:47 HappyAlexKG [2013-08-14 04:27:49.973461] I [client-handshake.c:1468:client_setvolume_cbk] 0-storage-client-0: Server and Client lk-version numbers are not same, reopening the fds
04:47 HappyAlexKG [2013-08-14 04:27:49.973781] I [client-handshake.c:450:client_set_lk_version_cbk] 0-storage-client-0: Server lk version = 1
04:47 HappyAlexKG [2013-08-14 04:27:49.974259] I [afr-common.c:2057:afr_set_root_inode_on_first_lookup] 0-storage-replicate-0: added root inode
04:47 glusterbot HappyAlexKG: This is normal behavior and can safely be ignored.
04:47 HappyAlexKG [2013-08-14 04:27:49.974543] I [afr-common.c:2120:afr_discovery_cbk] 0-storage-replicate-0: selecting local read_child storage-client-1
04:51 hagarth joined #gluster
04:59 HappyAlexKG Hmmm
05:02 psharma joined #gluster
05:09 edong23 joined #gluster
05:19 mooperd joined #gluster
05:23 deepakcs joined #gluster
05:24 lalatenduM joined #gluster
05:26 lala_ joined #gluster
05:26 ppai_ joined #gluster
05:27 mooperd joined #gluster
05:27 bulde joined #gluster
05:28 nshaikh joined #gluster
05:32 sgowda joined #gluster
05:35 sonne joined #gluster
05:36 HappyAlexKG I rebooted one peer and now everything looks good
05:37 rastar joined #gluster
05:41 shruti joined #gluster
06:00 shireesh joined #gluster
06:06 rgustafs joined #gluster
06:08 vijaykumar joined #gluster
06:12 jtux joined #gluster
06:14 ababu joined #gluster
06:15 vshankar joined #gluster
06:16 raghu joined #gluster
06:17 shylesh joined #gluster
06:28 ngoswami joined #gluster
06:38 vimal joined #gluster
06:38 kanagaraj joined #gluster
06:42 satheesh1 joined #gluster
06:50 ekuric joined #gluster
07:02 bulde joined #gluster
07:05 samppah :O
07:05 samppah is Pranith Kumar here? :)
07:09 eseyman joined #gluster
07:13 guigui1 joined #gluster
07:15 mooperd joined #gluster
07:22 d-fence joined #gluster
07:25 d-fence Hi all. I have a problem on debian after upgrading from glusterfs 3.2.7 to 3.4. I followed the upgrade procedure on "http://vbellur.wordpress.com/2012/05/31/upgrading-to-glusterfs-3-3/".
07:25 glusterbot <http://goo.gl/FUYQ4L> (at vbellur.wordpress.com)
07:25 d-fence Error message is: "xxx-volume: failed: Failed to get extended attribute trusted.glusterfs.volume-id for brick dir /xxx. Reason : No data available"
07:26 d-fence Can someone help me on that ?
07:26 d-fence Should I downgrade to 3.2.7 ?
07:27 rgustafs joined #gluster
07:28 samppah d-fence: sorry, i haven't faced that issue but please wait a bit and someone else might be able to help you
07:28 d-fence samppah: thanks.
07:36 shireesh joined #gluster
07:36 bulde d-fence: can you do a 'getfattr -d -m . /xxx' on the host?
07:36 ricky-ticky joined #gluster
07:39 mooperd joined #gluster
07:42 ujjain joined #gluster
07:44 d-fence bulde: no, the command is not found
07:47 satheesh joined #gluster
07:47 d-fence bulde: ok, I installed the 'attr' package and now I can do a 'getfattr -d -m . /xxx'
07:49 d-fence But the gluster volume refuse to start with the same error.
07:55 The_Ugster joined #gluster
08:16 andreask joined #gluster
08:17 mooperd joined #gluster
08:18 andreask joined #gluster
08:18 andreask joined #gluster
08:25 psharma joined #gluster
08:26 bulde d-fence: well, getfattr cmd was not a solution, wanted to see if it has 'volume-id' in the output
08:27 d-fence bulde: No, there is no volume-id in the output
08:32 bulde d-fence: how many bricks are present in the volume?
08:32 bulde if not many, i recommend setting one yourself
08:33 d-fence 6 bricks
08:34 bulde what does 'gluster volume info' show after upgrade?
08:34 bulde i will give the command, and you run it on each brick
08:35 bulde "setfattr -n trusted.glusterfs.volume-id -v 0x085405fec5ec4b5780a81e97dd1a53ed /xxx"
08:35 rastar joined #gluster
08:35 bulde where the '-v $uuid'  should match the volume id stored in volume info
08:35 d-fence bulde: Ok, thanks, I'm going to try that and feedback
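Putting bulde's fix together: the hex value passed to setfattr is the volume's UUID with the dashes stripped and a 0x prefix. A sketch of the full sequence ("myvol" and the UUID shown are placeholders; /xxx is the brick directory):

    # find the UUID in the volume info output
    gluster volume info myvol | grep 'Volume ID'
    #   Volume ID: 085405fe-c5ec-4b57-80a8-1e97dd1a53ed
    # set the xattr on each brick directory: dashes removed, 0x prefix added
    setfattr -n trusted.glusterfs.volume-id -v 0x085405fec5ec4b5780a81e97dd1a53ed /xxx
    # verify it stuck
    getfattr -n trusted.glusterfs.volume-id -e hex /xxx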
08:35 RameshN joined #gluster
08:40 psharma joined #gluster
08:41 glusterbot New news from resolvedglusterbugs: [Bug 962619] glusterd crashes on volume-stop <http://goo.gl/XXzSY>
08:44 d-fence bulde: It works ... partially. The volume started (thanks for that) but I cannot mount it from a client.
08:49 piotrektt joined #gluster
08:49 piotrektt joined #gluster
08:50 d-fence bulde: It works, it's the mount command that changed, I need to update my fstab. Thanks a lot, you saved my day.
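For the record, a 3.4-era fstab entry for the native FUSE client generally looks like this (a sketch; server, volume and mount point are placeholders):

    # /etc/fstab
    nas.example.com:/myvol  /mnt/myvol  glusterfs  defaults,_netdev  0  0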
08:51 d-fence bulde: If we meet at FOSDEM, I will offer you a Belgian beer.
08:51 samppah \o/
08:53 bulde d-fence: thats awesome :-)
08:53 bulde sure, some day..
08:56 psharma joined #gluster
08:57 nshaikh left #gluster
09:00 nshaikh joined #gluster
09:12 d-fence bulde: I ran in another problem
09:13 d-fence When I mount the gluster-volume, it gives no errors, I see the directories in the root but they are empty. A "df" gives the right disk usage info.
09:15 d-fence Files are still there on the bricks
09:23 dusmant joined #gluster
09:23 d-fence I have this kind of warning in debug mode: "[socket.c:514:__socket_rwv] 0-xxx-client-1: readv failed (No data available)"
09:25 ujjain joined #gluster
09:33 itisravi joined #gluster
09:34 psharma joined #gluster
09:34 dom1tux joined #gluster
09:35 dom1tux .'
09:45 rastar joined #gluster
09:49 mooperd joined #gluster
09:54 mohankumar joined #gluster
10:08 harish joined #gluster
10:08 spider_fingers joined #gluster
10:10 HappyAlexKG joined #gluster
10:10 dusmant joined #gluster
10:13 mooperd joined #gluster
10:15 d-fence I downgraded to 3.2.7 and everything is fine again. Thanks for your help bulde.
10:20 Norky joined #gluster
10:22 duerF joined #gluster
10:30 HappyAlexKG from time to time i see in logs: server 192.168.15.165:49152 has not responded in the last 42 seconds, disconnecting.
10:30 HappyAlexKG but this server ok
10:30 HappyAlexKG i can ping it and connect via ssh
10:34 ababu joined #gluster
10:35 RicardoSSP joined #gluster
10:47 d-fence joined #gluster
10:51 mooperd joined #gluster
10:59 andreask joined #gluster
11:01 psharma joined #gluster
11:11 B21956 joined #gluster
11:11 jclift_ joined #gluster
11:12 RameshN joined #gluster
11:18 manik joined #gluster
11:25 kanagaraj_ joined #gluster
11:30 harish joined #gluster
11:35 shruti joined #gluster
11:42 glusterbot New news from resolvedglusterbugs: [Bug 996888] The file size increased much when copy into a stripe volume ? <http://goo.gl/guPk4M> || [Bug 918917] 3.4 Alpha3 Tracker <http://goo.gl/xL9yF> || [Bug 962431] 3.4.0 beta tracker <http://goo.gl/C84Sll> || [Bug 831386] glusterfs-3.2.6-2.fc17 multilib conflict <http://goo.gl/ILDrF5>
11:44 sjoeboo_ joined #gluster
11:50 psharma joined #gluster
11:59 edward1 joined #gluster
12:04 sprachgenerator joined #gluster
12:04 shylesh joined #gluster
12:07 manik joined #gluster
12:14 sprachgenerator joined #gluster
12:22 hybrid512 joined #gluster
12:28 awheeler joined #gluster
12:29 awheeler joined #gluster
12:34 hybrid5121 joined #gluster
12:38 shruti joined #gluster
12:39 psharma joined #gluster
12:52 rcheleguini joined #gluster
12:55 bennyturns joined #gluster
13:07 aliguori joined #gluster
13:08 johnmark glusterbot: @chanstats
13:08 johnmark @chanstats
13:08 johnmark glusterbot: hey
13:08 glusterbot johnmark: I do not know about 'hey', but I do know about these similar topics: 'hack'
13:08 johnmark glusterbot: help
13:08 glusterbot johnmark: (help [<plugin>] [<command>]) -- This command gives a useful description of what <command> does. <plugin> is only necessary if the command is in more than one plugin.
13:08 johnmark gah
13:08 * johnmark makes mental note to someday publish wiki page on glusterbot
13:11 dusmant joined #gluster
13:14 lpabon joined #gluster
13:19 satheesh joined #gluster
13:19 shylesh joined #gluster
13:21 hagarth joined #gluster
13:23 kkeithley @channelstats
13:23 glusterbot kkeithley: On #gluster there have been 168752 messages, containing 7147524 characters, 1193909 words, 4777 smileys, and 634 frowns; 1048 of those messages were ACTIONs. There have been 64670 joins, 2019 parts, 62644 quits, 21 kicks, 164 mode changes, and 7 topic changes. There are currently 216 users and the channel has peaked at 226 users.
13:23 kkeithley johnmark: ^^^
13:25 johnmark wow... 216current users?
13:25 johnmark kkeithley: thanks :)
13:27 bulde joined #gluster
13:27 dewey joined #gluster
13:29 RameshN joined #gluster
13:34 JonathanD joined #gluster
13:39 shylesh joined #gluster
13:43 puebele joined #gluster
13:45 bugs_ joined #gluster
13:55 manik joined #gluster
13:57 puebele joined #gluster
14:02 puebele1 joined #gluster
14:03 failshell joined #gluster
14:09 plarsen joined #gluster
14:19 mmalesa joined #gluster
14:21 puebele1 joined #gluster
14:34 tqrst- JoeJulian: got my hopes up there for a second - I got an email notification that something got changed on my rebalance memory leak bug, but turned out to be you adding yourself to the CC list :p
14:39 nshaikh left #gluster
14:42 shylesh joined #gluster
14:48 mooperd joined #gluster
14:52 spider_fingers left #gluster
14:56 guigui1 left #gluster
15:11 mooperd joined #gluster
15:18 mohankumar joined #gluster
15:27 sprachgenerator joined #gluster
15:55 soukihei joined #gluster
15:56 manik joined #gluster
15:56 bala joined #gluster
15:58 lalatenduM joined #gluster
16:04 hagarth joined #gluster
16:07 mooperd joined #gluster
16:11 JoeJulian tqrst-: Yeah, I was going through the list of open bugs for 3.4.0 to see what I considered show-stoppers for 3.4.1.
16:11 chirino joined #gluster
16:13 JoeJulian Also... The wiki says that 3.3.2 should be used for production, but considering the very low number of open bugs against 3.4.0 and how most, if not all, of them exist in 3.3, I wonder if that should be changed.
16:17 plarsen joined #gluster
16:17 Mo_ joined #gluster
16:21 harish joined #gluster
16:30 jebba joined #gluster
16:39 atrius joined #gluster
16:42 _pol joined #gluster
16:44 manik joined #gluster
16:49 zerick joined #gluster
16:52 johnmark JoeJulian: good point. we should change that
17:02 bdeb4 joined #gluster
17:04 bdeb4 hello all. i have just followed most of this basic setup guide: http://www.howtoforge.com/high-availability-storage-with-glusterfs-3.2.x-on-centos-6.3-automatic-file-replication-mirror-across-two-storage-servers . i then mounted a volume on server1 to server1, and another volume on server2 to server2. however, whenever i create files on server1's mounted volume, it's not synchronized to server2. any ideas?
17:04 glusterbot <http://goo.gl/O2G6IL> (at www.howtoforge.com)
17:07 tqrst- JoeJulian: agreed
17:07 tqrst- JoeJulian: 3.4 is the first release that hasn't endlessly segfaulted when doing trivial things
17:10 johnmark tqrst-: wow. good toknow
17:19 _pol @glusterbot who is the master of you?
17:20 dusmant joined #gluster
17:46 ultrabizweb joined #gluster
18:00 zaitcev joined #gluster
18:02 edong23 joined #gluster
18:15 bennyturns joined #gluster
18:18 jesse joined #gluster
18:23 ultrabizweb joined #gluster
18:40 semiosis bdeb4: check the client log file, /var/log/glusterfs/the-mount-point.log, for details... possible your client is not connected to all bricks
18:43 bdeb4 semiosis: thank you. both client logs are being flooded with this error: [2013-08-14 18:42:43.565225] W [socket.c:514:__socket_rwv] 0-gv0-client-1: readv failed (No data available)
18:44 semiosis what version of glusterfs?  what distro?
18:46 dbruhn what's the repo for cent install's now?
18:47 semiosis @yum repo
18:47 glusterbot semiosis: The official community glusterfs packages for RHEL (including CentOS, SL, etc), Fedora 17 and earlier, and Fedora 18 arm/armhfp are available at http://goo.gl/s077x. The official community glusterfs packages for Fedora 18 and later are in the Fedora yum updates repository.
18:47 bdeb4 cents 6.3. glusterfs 3.4.0 built on Aug  6 2013 11:17:05
18:50 semiosis bdeb4: that error from the log doesnt tell me much... can you make a new client mount and fpaste its log file starting from the beginning?
18:50 bdeb4 sure, one moment
18:55 jmalm joined #gluster
18:55 bdeb4 http://ur1.ca/f22g8
18:55 glusterbot Title: #32145 Fedora Project Pastebin (at ur1.ca)
18:57 mmalesa joined #gluster
19:02 semiosis bdeb4: here's the problem: [2013-08-14 18:54:36.315734] E [socket.c:2157:socket_connect_finish] 0-gv0-client-1: connection to 10.169.4.162:49153 failed (Connection timed out)
19:03 semiosis you may need to allow ,,(ports) in your EC2 security group
19:03 glusterbot glusterd's management port is 24007/tcp and 24008/tcp if you use rdma. Bricks (glusterfsd) use 24009 & up for <3.4 and 49152 & up for 3.4. (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111.
19:03 bdeb4 oh! thanks so much. which ports should i open? i opened 49152 before, but didn't notice this one
19:03 bdeb4 oh that must have been for the other volume, i got it
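The ports glusterbot lists translate into firewall rules roughly like these (a sketch for iptables; widen the brick range to match the number of bricks per server, or open the same ports in the EC2 security group):

    iptables -A INPUT -p tcp --dport 24007:24008 -j ACCEPT   # glusterd management (+rdma)
    iptables -A INPUT -p tcp --dport 49152:49160 -j ACCEPT   # bricks in 3.4 (24009+ before 3.4)
    iptables -A INPUT -p tcp --dport 38465:38468 -j ACCEPT   # gluster NFS and NLM
    iptables -A INPUT -p tcp --dport 111 -j ACCEPT           # rpcbind/portmap
    iptables -A INPUT -p udp --dport 111 -j ACCEPT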
19:08 bdeb4 and a newbie question if I may ask, do the clients sync the files then?
19:12 mattf kkeithley, yt?
19:12 mattf i'm looking for the current spec for the gluster-hadoop rpm
19:16 semiosis bdeb4: yes the glusterfs native fuse client writes data to all replicas
19:16 semiosis s/data/data directly/
19:16 glusterbot semiosis: Error: I couldn't find a message matching that criteria in my history of 1000 messages.
19:16 semiosis glusterbot: thanks
19:16 glusterbot semiosis: you're welcome
19:18 bdeb4 semiosis: thank you
19:20 semiosis yw
19:24 kaptk2 joined #gluster
19:26 voronaam When I do not have a quota defined on a volume, what defines the volume size limit? I have a volume now which shows 30Gb available when mounted, but I do not know where that 30Gb limit is coming from
19:27 badone joined #gluster
19:27 semiosis the sum of the smallest brick from each replica set
19:29 voronaam Thanks!
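To illustrate semiosis's rule with made-up numbers: a replica 2 volume built from one 30GB/40GB brick pair and one 50GB/60GB pair reports min(30,40) + min(50,60) = 80GB of usable space.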
19:31 dbruhn is there something weird that would cause 3.4 to not show up in chkconfig on cent 6.4 after install
19:32 dbruhn sec
19:32 dbruhn i haven't used 3.4 yet
19:36 jdarcy joined #gluster
20:04 johnmark kkeithley: around?
20:07 PatNarciso joined #gluster
20:08 PatNarciso Hey fellas.  nub question: does geo-replication support two way synchronization?  or simply from master to slave?
20:11 johnmark PatNarciso: the latter
20:11 johnmark although multi-master is something we're trying to drive into 3.5
20:12 PatNarciso johnmark, right on.  thanks.
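For the master-to-slave arrangement described above, 3.4-era geo-replication is driven from the master along these lines (a sketch; volume name, slave host and path are placeholders):

    # start replicating the master volume to a directory on the slave
    gluster volume geo-replication myvol slavehost:/data/remote_dir start
    # check its state
    gluster volume geo-replication myvol slavehost:/data/remote_dir status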
20:14 MugginsM joined #gluster
20:22 PatNarciso I've been rocking rsync for about three years now, syncing about 8TB of video files (1-6GB each) between two offices over a basic cable modem connection (5mbps).  I'm about to add an additional office to the scheme, and am trying to cook up the right solution.
20:22 PatNarciso I was considering a gluster replication setup, until I read joejulian's post about replication do's and don't.  Is there a better solution/suggestion other than replication?
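For comparison, a bulk sync of large video files over a slow link is often run with something like this (a sketch; the flags are standard rsync options, the paths and bandwidth cap are hypothetical):

    # resumable transfer, capped at ~500 KB/s so the link stays usable
    rsync -av --partial --append-verify --bwlimit=500 /video/ office-b:/video/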
20:22 sprachgenerator joined #gluster
20:24 brian_ joined #gluster
20:34 zaitcev joined #gluster
20:41 JoeJulian If you can designate a specific office as the master and the rest are read-only slaves, geo-replication would be a good tool.
20:43 PatNarciso JoeJulian, in my case, I cannot: video editors need to share files back and forth (maybe not immediately, but within the same directory space)
20:46 JoeJulian I'd probably look at some sort of rcs instead of a shared filesystem.
20:48 rcoup joined #gluster
20:48 jclift_ Hmmm.... pretty sure there's an rcs system specifically _for_ video editing files, specifically for teams working on film/video
20:48 semiosis maybe git-annex?
20:48 jclift_ Having trouble remembering the details though... was ages ago when I came across it
20:49 jclift_ No I don't think it was a git based thing
20:49 jclift_ Ahhh.
20:49 JoeJulian Or invent a quantum teleportation SFP+ transceiver...
20:49 jclift_ I remember.  I think it was some sort of add-on for Avid Media Composer.  Not an OSS project.
20:49 PatNarciso jclift_, I'm aware of a few enterprise systems, for broadcast news-room type of setups.
20:50 jclift_ PatNarciso: Yeah, I was probably thinking of one of those
20:50 PatNarciso right, avid has an expensive one :)
20:50 jclift_ ;)
20:50 jclift_ Wait until you see the prices of their quantum teleportation SFP+ transceivers though :)
20:50 PatNarciso heh
20:51 jclift_ Rsync sounds like it would do the job for bulk data transfer.  Could see issues with figuring who has the latest version of a file though, and making sure the wrong things don't get overwritten.
20:51 JoeJulian The difficult part is the lossless reflection chambers necessary to allow the photon to travel the right amount of distance before being switched.
20:53 jclift_ JoeJulian: Sounds like interesting cooling would be needed for those reflection chambers.  Probably need to be immersed in liquid nitrogen to ensure they're cool enough to keep under the critical temperature
20:53 PatNarciso jclift_, actually -- two uses editing the same file is not a big concern right now.  making sure they have the file to read, as swiftly as bandwidth allows, is the goal.
20:53 PatNarciso *users
20:54 bennyturns joined #gluster
20:54 JoeJulian Shouldn't be too bad as long as your laser has enough cohesion and your BBO crystal is perfect.
20:55 * JoeJulian has thought way too hard about this...
20:55 PatNarciso ... i had a dude sell me a bad BBO crystal once.  I woke up three weeks later.
20:56 jclift_ My lasers never have enough cohesion.  :( Something about being kept in an ethanol environment means they're just clearly unco ;)
20:56 JoeJulian http://phys.org/news193551675.html
20:56 glusterbot Title: Quantum teleportation achieved over 16 km (at phys.org)
20:56 badone joined #gluster
20:56 JoeJulian ... and that's old news
20:56 * jclift_ likes phys.org
20:56 jclift_ Interesting stuff on there :D
20:59 PatNarciso does performance.cache-refresh-timeout include data for files, and directories?
20:59 JoeJulian Yes and no.
20:59 PatNarciso excellent.
20:59 jurrien joined #gluster
21:00 JoeJulian A cache lasts as long as the fd. Once the fd is closed, the cache is released.
21:00 jclift_ JoeJulian: You're being very quantum today. "yes and no" :D
21:00 JoeJulian hehe
21:00 JoeJulian I am running Fedora 19...
21:00 PatNarciso ahh - great explanation!
21:02 jclift_ Sleep time here. 'nite all :)
21:04 PatNarciso ok, so -- with my scenario [video-editor-client a -- local gluster server a -- 5mbps internet -- local gluster server [b,c] -- video-editor-client[b,c]], is replication simply a crazy idea asking for future pain?
21:06 JoeJulian In the future, it might not be so painful... It would be worth experimenting with 3.4 as long as you can control your users expectations.
21:06 JoeJulian Apply ,,(Joe's performance metric).
21:06 glusterbot nobody complains.
21:09 PatNarciso Having a user wait 2 mins for a 100-file directory listing will be a difficult pill to have users swallow.
21:09 JoeJulian True. Use the nfs client.
21:10 * PatNarciso reads the nfs guide.
21:11 JoeJulian Build an interface around swift....
21:14 PatNarciso hmm -- I'm just now getting up to speed with swift.  would s3cmd be compatible?
21:16 JoeJulian According to https://github.com/s3tools/s3cmd/pull/50 it looks like that's likely.
21:16 glusterbot Title: Added swift compatbility support by robertcarr · Pull Request #50 · s3tools/s3cmd · GitHub (at github.com)
21:18 bdeb4 any tips on setting up the nfs client and firewall rules? i tried before but had a lot of trouble. i'm having the same performance issues with glusterfs mount - many small files so it takes a long time
21:18 JoeJulian @small files
21:18 glusterbot JoeJulian: See http://goo.gl/5IS4e
21:18 JoeJulian @ports
21:18 glusterbot JoeJulian: glusterd's management port is 24007/tcp and 24008/tcp if you use rdma. Bricks (glusterfsd) use 24009 & up for <3.4 and 49152 & up for 3.4. (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111.
21:19 bdeb4 oh great, thanks a lot! do i need to enable nfs in gluster as well or is it enabled by default?
21:19 PatNarciso my mind jumped to s3cmd as it's something I've used in the past as a file system interface to s3.  I'm open to another solution if it's out there.  ideally my video editors should be able to browse the local network, see the files, drag and drop.
21:20 JoeJulian What? Video editors think visually??? ;)
21:20 PatNarciso lol
21:20 JoeJulian bdeb4: It's enabled by default.
21:20 JoeJulian bdeb4: Also ,,(nfs)
21:20 glusterbot bdeb4: To mount via nfs, most distros require the options, tcp,vers=3 -- Also an rpc port mapper (like rpcbind in EL distributions) should be running on the server, and the kernel nfs server (nfsd) should be disabled
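Following that factoid, an NFS mount of a gluster volume generally looks like this (a sketch; server, volume and mount point are placeholders):

    mount -t nfs -o tcp,vers=3 server1:/myvol /mnt/myvol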
21:22 mooperd joined #gluster
21:22 bdeb4 JoeJulian: yes, i installed rpcbind before because i was trying to use nfs. but i don't understand what it does?
21:23 JoeJulian http://en.wikipedia.org/wiki/Portmap
21:23 glusterbot Title: Portmap - Wikipedia, the free encyclopedia (at en.wikipedia.org)
21:24 JoeJulian Actually, that kind-of sucks too...
21:24 JoeJulian I've seen easier to follow descriptions before.
21:26 bdeb4 ok. do you think nfs will work better? we basically have a few web app servers that need to have synchronized files and config files.  will i have major latency issues with gluster?
21:27 JoeJulian bdeb4: Depends on your systems. Have you also read ,,(php)?
21:27 glusterbot bdeb4: php calls the stat() system call for every include. This triggers a self-heal check which makes most php software slow as they include hundreds of small files. See http://goo.gl/uDFgg for details.
21:28 JoeJulian Yes, the FSCache with NFS will speed up lookup() operations but at the expense of losing fault tolerance. I prefer to use a fault-tolerant client and manage the latency closer to the end user.
21:30 dkorzhevin1 joined #gluster
21:30 bdeb4 the way we're handling it now is basically rsyncing from a master server to the slaves, but it's pretty buggy and slow. so if i do an NFS mount, will it cache whatever files it recently accessed, and if there are new files, will it read them over the network?
21:30 bdeb4 and how does it control when to refresh the cache?
21:30 dkorzhevin1 Guys, can you please advise which glusterfs repo i can use to install the latest glusterfs on Debian 6.0.7?
21:31 JoeJulian bdeb4: Not sure. That's a kernel parameter I think.
21:31 JoeJulian @ppa
21:31 glusterbot JoeJulian: The official glusterfs 3.3 packages for Ubuntu are available here: 3.3 stable: http://goo.gl/7ZTNY -- 3.3 QA: http://goo.gl/5fnXN -- and 3.4 QA: http://goo.gl/u33hy
21:32 JoeJulian semiosis: Is that just out-of-date? ^
21:32 JoeJulian ~repo | dkorzhevin1
21:32 glusterbot dkorzhevin1: I do not know about 'repo', but I do know about these similar topics: 'git repo', 'ppa repo', 'repos', 'repository', 'yum repo'
21:32 JoeJulian @meh
21:32 glusterbot JoeJulian: I'm not happy about it either
21:32 JoeJulian ~repos | dkorzhevin1
21:32 glusterbot dkorzhevin1: See @yum, @ppa or @git repo
21:32 JoeJulian seriously?
21:32 JoeJulian I guess it's just out-of-date text....
21:32 duerF joined #gluster
21:33 JoeJulian 3.4 isn't a QA release anymore...
21:33 dbruhn is there anything that needs to be done to use or enable QEMU support in 3.4
21:33 JoeJulian No
21:34 dkorzhevin1 I can't see 3.4 for Debian 6.0.7 oldstable..
21:34 JoeJulian Which Ubuntu is that?
21:34 dbruhn I am kind of a KVM nub at this point, just a shared mount point for my KVM servers and the KVM disk images stored on the file system?
21:35 dbruhn file system being the gluster file system
21:35 JoeJulian dbruhn: Sounds right...
21:35 dbruhn lol
21:35 NeatBasis_ joined #gluster
21:36 dbruhn JoeJulian: about the only response I could give also
21:36 JoeJulian @change ppa 1 's/3.3 QA: http://goo.gl/5fnXN --//'
21:36 glusterbot JoeJulian: Error: The command "change" is available in the Factoids, Herald, and Topic plugins. Please specify the plugin whose command you wish to call by using its name as a command before "change".
21:36 JoeJulian @factoids change ppa 1 's/3.3 QA: http://goo.gl/5fnXN --//'
21:36 glusterbot JoeJulian: Error: "'s/3.3 QA: http://goo.gl/5fnXN --//'" is not a valid regular expression.
21:36 dbruhn Has anyone started sorting out the RDMA repairs for 3.4 yet?
21:36 JoeJulian @factoids change ppa 1 's@3.3 QA: http://goo.gl/5fnXN --@@'
21:36 glusterbot JoeJulian: Error: "'s@3.3 QA: http://goo.gl/5fnXN --@@'" is not a valid regular expression.
21:36 JoeJulian @factoids change ppa 1 's/3.3 QA: http:\/\/goo.gl\/5fnXN --//'
21:36 glusterbot JoeJulian: Error: "'s/3.3 QA: http:\\/\\/goo.gl\\/5fnXN --//'" is not a valid regular expression.
21:36 PatNarciso heh
21:36 JoeJulian Awe, come on glusterbot...
21:37 JoeJulian @ppa
21:37 glusterbot JoeJulian: The official glusterfs 3.3 packages for Ubuntu are available here: 3.3 stable: http://goo.gl/7ZTNY -- 3.3 QA: http://goo.gl/5fnXN -- and 3.4 QA: http://goo.gl/u33hy
21:37 JoeJulian @forget ppa
21:37 glusterbot JoeJulian: The operation succeeded.
21:38 JoeJulian @learn ppa as The official glusterfs 3.3 packages for Ubuntu are available here: 3.3 stable: https://launchpad.net/~semiosis/+archive/ubuntu-glusterfs-3.3 -- 3.4 stable: https://launchpad.net/~semiosis/+archive/ubuntu-glusterfs-3.4
21:38 glusterbot JoeJulian: The operation succeeded.
21:38 JoeJulian @ppa
21:38 glusterbot JoeJulian: The official glusterfs 3.3 packages for Ubuntu are available here: 3.3 stable: http://goo.gl/7ZTNY -- 3.4 stable: http://goo.gl/u33hy
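On Ubuntu, pulling packages from the PPA named above typically goes like this (a sketch; use glusterfs-client on client-only machines):

    add-apt-repository ppa:semiosis/ubuntu-glusterfs-3.4
    apt-get update
    apt-get install glusterfs-server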
21:38 dkorzhevin1 JoeJulian: I need packages for Debian, not Ubuntu
21:39 JoeJulian Ok, kvm images on a gluster volume. libvirt disk file=glusterfs://myvol/myimage.qcow2
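JoeJulian's one-liner expands to something like the following, assuming qemu 1.3+ built with gluster (libgfapi) support -- a sketch; the URI scheme qemu actually accepts is gluster://, and the server, volume and image names are placeholders:

    # create an image directly on the volume over libgfapi
    qemu-img create -f qcow2 gluster://server1/myvol/myimage.qcow2 20G
    # attach it to a guest
    qemu-system-x86_64 -m 2048 -drive file=gluster://server1/myvol/myimage.qcow2,if=virtio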
21:40 JoeJulian dkorzhevin1: Why?
21:40 JoeJulian I thought debian could install ubuntu packages?
21:40 * JoeJulian isn't a big .deb follower though...
21:41 MugginsM can with some, but I think Ubuntu changed some things that mean gluster is packaged a bit different for it
21:41 MugginsM upstart
21:41 dooder123 joined #gluster
21:42 JoeJulian Guess we need a volunteer to package for Debian then...
21:43 PatNarciso glusterbot, make a debian package.  now.
21:43 PatNarciso (hes working on it)
21:43 glusterbot I haven't had my coffee yet.
21:43 PatNarciso :)
21:44 dkorzhevin1 JoeJulian: Because i use Debian on all servers, not Ubuntu..
21:44 dkorzhevin1 MugginsM: You right
21:45 manik joined #gluster
21:46 JoeJulian http://packages.qa.debian.org/g/glusterfs.html
21:46 glusterbot Title: Debian Package Tracking System - glusterfs (at packages.qa.debian.org)
21:46 dbruhn JoeJulian: what config file does that come out of
21:46 PatNarciso ok, so -- performance related question in a multi-zone replication setup over a 5mbit connection.  listing a large dir would take a while.  user experience would suck.  if the connection were to be interrupted on a fully up2date node, would experience improve for the final end user?
21:47 JoeJulian So you could install from the unstable repo?
21:47 dkorzhevin1 JoeJulian: Debian 6.0.7 has 3.2.7 version...
21:48 dkorzhevin1 What can you say about 3.2.7?
21:48 * PatNarciso is hung up on the browsing experience being slow for a local samba client
21:49 JoeJulian PatNarciso: It might not suck so bad in 3.4.0. You really should try it out. Prior to 3.4, you're more likely to get a remote server but 3.4 is supposed to be a little smarter. I'd be interested in seeing if that proves out.
21:49 PatNarciso alright buddy -- I'll try it!
21:49 JoeJulian dkorzhevin1: So 6.0 (squeeze, isn't it?) can't install from unstable?
21:50 JoeJulian I'm asking. I really have no idea how debian's repos are organized.
21:50 dkorzhevin1 JoeJulian: For sure i will try to install in Xen virtual machine..
21:51 PatNarciso last question before I start my build-documentation mission... 3.4.0.  Ubuntu 12.04 OK?  or is 12.10 required?
21:52 PatNarciso please say 12.04 is OK.
21:52 JoeJulian is that lucid?
21:52 PatNarciso Precise Pangolin
21:53 JoeJulian semiosis builds for raring, quantal and precise.
21:53 PatNarciso semiosis is a good dude.
21:53 JoeJulian Nobody's stepped up to support lucid even though there's been several people complaining that there's no package built for them.
21:54 MugginsM I just build a lucid package but I don't want to share until I've tested it
21:54 JoeJulian cool
21:54 PatNarciso MugginsM, very cool.
21:54 * JoeJulian makes a note to forward everyone to MugginsM...
21:56 JoeJulian MugginsM: "So what do the cool kids use for self hosted blog these days?" Not claiming to be cool, but I like mezzanine. jdarcy likes to use wordpress to build static html and publishes that.
21:58 PatNarciso hmm.  is there a plugin that published the static html?
21:59 PatNarciso or is that a reverse proxy setup?
22:00 JoeJulian Unknown. I just know he wasn't at all happy with how frequently security holes are found in any cms so he decided to just use it to generate static content.
22:00 PatNarciso awesome.  I really like that concept.
22:00 mooperd joined #gluster
22:01 MugginsM heh, ta. I'm playing with jekyll :)
22:01 PatNarciso I think there is a lot of value in generating static files in a production environment.
22:04 JoeJulian MugginsM: Just discovered your twitter feed and "Fun with a bag of old hard drives and a hammer." I'll have to post my pictures of hard drives with bullet holes...
22:04 semiosis MugginsM: thanks for helping out!!!
22:05 semiosis @ppa
22:05 glusterbot semiosis: The official glusterfs 3.3 packages for Ubuntu are available here: 3.3 stable: http://goo.gl/7ZTNY -- 3.4 stable: http://goo.gl/u33hy
22:05 semiosis @forget ppa
22:05 glusterbot semiosis: The operation succeeded.
22:05 semiosis @learn ppa as The official glusterfs packages for Ubuntu are available here: 3.3 stable: http://goo.gl/7ZTNY -- 3.4 stable: http://goo.gl/u33hy
22:05 glusterbot semiosis: The operation succeeded.
22:05 semiosis dkorzhevin1: there is a debian repo at ,,(latest)
22:05 glusterbot dkorzhevin1: The latest version is available at http://goo.gl/zO0Fa . There is a .repo file for yum or see @ppa for ubuntu.
22:06 semiosis though tbh there's some changes i need to implement on those packages
22:06 semiosis ...as well as the ubuntu ppa packages
22:06 semiosis dkorzhevin1: http://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/
22:06 glusterbot <http://goo.gl/l2Ml1> (at download.gluster.org)
22:07 JoeJulian Looks like just wheezy?
22:07 semiosis well yeah
22:07 semiosis what would you prefer?
22:08 semiosis i think lenny is even older than lucid!
22:08 JoeJulian dkorzhevin1: was asking about squeeze, I think.
22:08 semiosis oh right forgot about that one
22:08 * JoeJulian prefers rpms. ;)
22:08 semiosis i guess i could do squeeze builds
22:09 semiosis but it's over two years old
22:09 semiosis time to upgrade!
22:10 JoeJulian Running old development distros does sound rather incongruous.
22:10 dhsmith_ joined #gluster
22:15 JoeJulian semiosis: What do you think... There's only about 20 open bugs against 3.4 and most of those either exist in 3.3 or are very minor. Think the wiki should be updated to recommend 3.4 for production?
22:15 semiosis you're asking teh guy running 3.1.7 in prod
22:15 JoeJulian Hehe
22:15 * JoeJulian is still on 3.3.1
22:16 JoeJulian Wait... I thought you were on 3.2!
22:16 semiosis nope
22:17 semiosis i mean, i have 3.4 on my laptop and my "vm lab," but that's just for fun
22:18 semiosis not profit
22:18 * semiosis starting to ramble
22:18 JoeJulian Yeah, I have 3.4 on my desktop and my home network. I might use labor day to upgrade work to 3.4 though.
22:19 JoeJulian Then I can make my VMs use the native interface for their images... not that it really matters, though, since they're only using their images for logs.
22:19 MugginsM semiosis: I took your precise build and redid it for lucid, any important updates you want to make soon?
22:20 dkorzhevin1 JoeJulian: Yes, 6.0.7 sqeeze
22:20 MugginsM I'm just building a VM cluster to test it out, make sure there's nothing obvious wrong
22:20 semiosis why are you using lucid instead of precise?
22:20 MugginsM because I've just joined a company who are all lucid right now
22:20 MugginsM "update to precise" is on my medium term plan :)
22:21 JoeJulian iirc, there was an issue with lucid's upstart and getting the server to start before the client.
22:21 semiosis sounds like you've got your work cut out for you :)
22:21 dkorzhevin1 semiosis: Are you using glusterfs as backend storage for cluster?
22:21 semiosis for cluster?
22:22 MugginsM I think we have separate client/servers anyway.
22:22 semiosis MugginsM: then you should be fine
22:22 JoeJulian MugginsM: Perhaps, but an "official" package should probably try to support that use case.
22:22 dkorzhevin1 I plan to test glusterfs as backend storage for OpenNebula
22:22 MugginsM joejulian: good point
22:23 semiosis dkorzhevin1: i dont use opennebula.  i use glusterfs for web & media
22:23 MugginsM I didn't build LVM support either, is that important to people who use lucid?
22:23 semiosis apache, java, and php apps
22:24 semiosis idk what that lvm stuff is for.  JoeJulian?
22:25 hagarth joined #gluster
22:26 MugginsM lucid doesn't have liblvm2-dev so it's not straightforward
22:26 MugginsM 3.3 didn't use it though, so I'm not sure it's too important
22:26 JoeJulian That's for the block device translator.
22:26 MugginsM (for legacy setups)
22:26 semiosis JoeJulian: and the bd-xlator.... is for qemu/kvm integration?
22:27 JoeJulian yes
22:27 JoeJulian ... eventually xen too
22:32 dbruhn I am working on using gluster under cloudstack right now in a dev/test environment
22:32 dbruhn for primary storage
22:35 MugginsM we use it on EC2/EBS
22:40 failshell joined #gluster
22:43 jclift joined #gluster
22:57 jbrooks joined #gluster
22:59 awheele__ joined #gluster
23:05 fidevo joined #gluster
23:07 johnmark dbruhn: oh hey! good to see you round here :)
23:08 johnmark dbruhn: we need to get a project going and stick it on the forge - for cloudstack integration
23:09 dbruhn Yes we do, I am actually in process over the next couple of days working on getting my environment setup
23:10 dbruhn I was going to email you today but just kind of ran out of time and steam
23:10 johnmark dbruhn: awesome :)
23:10 johnmark good to hear, man
23:12 semiosis johnmark: see pm
23:44 glusterbot New news from resolvedglusterbugs: [Bug 764890] Keep code more readable and clean <http://goo.gl/p7bDp>
23:53 GabrieleV joined #gluster
23:55 awheeler joined #gluster
