
IRC log for #gluster, 2014-04-04


All times are shown in UTC.

Time Nick Message
00:07 vpshastry joined #gluster
00:16 primechuck joined #gluster
00:25 diegows joined #gluster
00:46 hchiramm__ joined #gluster
00:49 semiosis @ports
00:49 glusterbot semiosis: glusterd's management port is 24007/tcp and 24008/tcp if you use rdma. Bricks (glusterfsd) use 24009 & up for <3.4 and 49152 & up for 3.4. (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111 and 2049 since 3.4.
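For reference, a minimal iptables sketch covering the ports glusterbot lists above for a 3.4 server; the brick-port range (one port per brick ever created on the node, counting up from 49152) and the rule style are assumptions to adapt to the actual firewall setup.

    # management (glusterd), plus 24008 if RDMA is used
    iptables -A INPUT -p tcp --dport 24007:24008 -j ACCEPT
    # brick (glusterfsd) ports, 49152 and up on 3.4 -- widen the range to match the brick count
    iptables -A INPUT -p tcp --dport 49152:49160 -j ACCEPT
    # gluster NFS server and NLM
    iptables -A INPUT -p tcp --dport 38465:38468 -j ACCEPT
    # rpcbind/portmap and NFS (2049 since 3.4)
    iptables -A INPUT -p tcp --dport 111 -j ACCEPT
    iptables -A INPUT -p udp --dport 111 -j ACCEPT
    iptables -A INPUT -p tcp --dport 2049 -j ACCEPT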
00:56 sprachgenerator joined #gluster
01:04 yinyin_ joined #gluster
01:06 nightwalk joined #gluster
01:09 tokik joined #gluster
01:12 jag3773 joined #gluster
01:17 gmcwhistler joined #gluster
01:18 bala joined #gluster
01:19 deeville joined #gluster
01:19 deeville left #gluster
01:20 deeville joined #gluster
01:22 deeville joined #gluster
01:28 askb_ joined #gluster
01:29 askb_ joined #gluster
01:32 ckannan joined #gluster
01:35 gdubreui joined #gluster
01:41 deeville joined #gluster
01:46 crazifyngers left #gluster
01:56 glusterbot New news from resolvedglusterbugs: [Bug 968432] Running glusterfs + hadoop in production platforms with reasonable privileges idioms <https://bugzilla.redhat.com/show_bug.cgi?id=968432>
01:58 sprachgenerator joined #gluster
02:04 harish__ joined #gluster
02:18 coredumb joined #gluster
02:26 glusterbot New news from resolvedglusterbugs: [Bug 1017176] Until RDMA handling is improved, we should output a warning when using RDMA volumes <https://bugzilla.redhat.com/show_bug.cgi?id=1017176>
02:36 bharata-rao joined #gluster
02:43 yinyin joined #gluster
02:48 nightwalk joined #gluster
02:58 ThatGraemeGuy joined #gluster
02:59 searcher joined #gluster
03:01 nightwalk joined #gluster
03:07 shubhendu joined #gluster
03:09 dusmant joined #gluster
03:29 davinder joined #gluster
03:33 mattapperson joined #gluster
03:36 RameshN joined #gluster
03:39 saurabh joined #gluster
03:42 bala joined #gluster
03:45 gmcwhistler joined #gluster
03:49 itisravi joined #gluster
03:54 aravindavk joined #gluster
03:59 yinyin joined #gluster
04:08 hagarth joined #gluster
04:15 kokoev joined #gluster
04:17 exedore6 joined #gluster
04:20 ckannan joined #gluster
04:21 ndarshan joined #gluster
04:27 kdhananjay joined #gluster
04:36 prasanth_ joined #gluster
04:37 kanagaraj joined #gluster
04:41 sks joined #gluster
04:44 ravindran1 joined #gluster
04:44 nishanth joined #gluster
04:45 semiosis finally upgraded from 3.1.7 to 3.4.2 \o/
04:45 semiosis feels good
04:45 semiosis didnt even "upgrade" -- built new instances & moved the brick drives (ebs) over to the new servers
04:47 samppah huh
04:47 samppah nice! everything went well?
04:48 atinm joined #gluster
05:01 yinyin joined #gluster
05:01 semiosis samppah: yes, very well
05:02 semiosis so far :)
05:02 semiosis still tidying up
05:02 lalatenduM joined #gluster
05:03 benjamin_____ joined #gluster
05:11 dusmant joined #gluster
05:11 deepakcs joined #gluster
05:18 kdhananjay joined #gluster
05:24 atinm joined #gluster
05:24 nshaikh joined #gluster
05:25 JoeJulian semiosis: Whew... glad to hear it.
05:27 hagarth joined #gluster
05:33 ngoswami joined #gluster
05:33 rjoseph joined #gluster
05:34 spandit joined #gluster
05:35 kanagaraj_ joined #gluster
05:36 lalatenduM joined #gluster
05:49 kdhananjay joined #gluster
05:50 dusmant joined #gluster
05:52 vkoppad joined #gluster
05:52 RameshN joined #gluster
05:58 vpshastry joined #gluster
05:58 raghu joined #gluster
05:59 atinm joined #gluster
06:00 coredumb do you folks have old nfs clients pointing to your gluster nfs ?
06:01 coredumb like _OLD_ clients
06:01 yinyin joined #gluster
06:03 vpshastry1 joined #gluster
06:06 DV joined #gluster
06:11 hagarth joined #gluster
06:19 shylesh joined #gluster
06:26 Philambdo joined #gluster
06:27 yinyin joined #gluster
06:37 jtux joined #gluster
06:49 dubey joined #gluster
06:49 dubey Hello
06:49 glusterbot dubey: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
06:51 dubey Can't i use glusterfs as nfs ?
06:51 ppai joined #gluster
06:52 vimal joined #gluster
06:52 ekuric joined #gluster
06:52 ricky-ti1 joined #gluster
06:53 dubey I am looking for a solution for scaling my website on EC2 instances using auto scaling. What I couldn't understand is why I need to add multiple bricks, and if my EC2 instances are on-demand, what is the point of adding them as bricks?
07:00 ctria joined #gluster
07:01 eseyman joined #gluster
07:04 kanagaraj joined #gluster
07:04 rgustafs joined #gluster
07:06 lalatenduM joined #gluster
07:08 sks joined #gluster
07:11 dusmant joined #gluster
07:13 vpshastry joined #gluster
07:18 bala joined #gluster
07:19 keytab joined #gluster
07:25 RameshN joined #gluster
07:29 deepakcs joined #gluster
07:29 atinm joined #gluster
07:32 ravindran1 joined #gluster
07:35 sahina joined #gluster
07:48 yinyin joined #gluster
08:04 hchiramm__ joined #gluster
08:11 andreask joined #gluster
08:20 nightwalk joined #gluster
08:24 sahina joined #gluster
08:34 kanagaraj joined #gluster
08:36 cyberbootje joined #gluster
08:40 meghanam joined #gluster
08:40 meghanam_ joined #gluster
08:46 saravanakumar1 joined #gluster
08:48 dusmant joined #gluster
09:03 wgao joined #gluster
09:04 basso joined #gluster
09:14 prasanth_ joined #gluster
09:14 partner hmm any particular reason why brick logs are left out of default log rotation?
09:19 hagarth joined #gluster
09:20 partner some of that is finally fixed in the 3.4 series (i'm talking about debian packages here), i.e. a HUP for the process to release the file, but plenty still seems to not be handled properly
09:20 partner need to dig in a bit deeper..
09:23 fsimonce joined #gluster
09:23 tokik joined #gluster
09:23 sks joined #gluster
09:26 kdhananjay joined #gluster
09:27 karnan joined #gluster
09:35 liquidat joined #gluster
09:48 kdhananjay joined #gluster
09:56 davinder joined #gluster
10:04 sahina joined #gluster
10:06 nightwalk joined #gluster
10:11 verdurin Again I found that our service failed to restart after the 3.4.3 update
10:11 verdurin The same thing happened with 3.4.2
10:11 sks joined #gluster
10:23 RameshN joined #gluster
10:24 qdk joined #gluster
10:28 glusterbot New news from newglusterbugs: [Bug 1084422] Metadata operations on a fd (whose file is unlinked) fails <https://bugzilla.redhat.com/show_bug.cgi?id=1084422>
10:35 ngoswami_ joined #gluster
10:36 ngoswami joined #gluster
10:40 sks joined #gluster
10:41 RameshN joined #gluster
10:41 verdurin I've reported it as bug #1084432
10:41 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1084432 unspecified, unspecified, ---, rwheeler, NEW , Service fails to restart after 3.4.3 update
10:43 harish__ joined #gluster
10:53 dusmant joined #gluster
10:58 glusterbot New news from newglusterbugs: [Bug 1084432] Service fails to restart after 3.4.3 update <https://bugzilla.redhat.com/show_bug.cgi?id=1084432>
10:58 hagarth joined #gluster
11:06 kdhananjay joined #gluster
11:09 andreask joined #gluster
11:10 karnan joined #gluster
11:33 vpshastry left #gluster
11:42 diegows joined #gluster
11:50 sahina joined #gluster
11:54 nightwalk joined #gluster
11:57 sputnik1_ joined #gluster
11:58 glusterbot New news from newglusterbugs: [Bug 969461] RFE: Quota fixes <https://bugzilla.redhat.com/show_bug.cgi?id=969461>
12:04 coredumb Andyy2: so how's the tests going ?
12:09 mattapperson joined #gluster
12:10 itisravi joined #gluster
12:12 benjamin_____ joined #gluster
12:20 hagarth joined #gluster
12:24 tdasilva left #gluster
12:28 deeville joined #gluster
12:33 magicrobotmonkey joined #gluster
12:33 magicrobotmonkey how can i see the settings on a volume?
12:33 prasanth_ joined #gluster
12:33 magicrobotmonkey i.e. performance.cache-size
12:35 JoeJulian partner: I think that's because the devs expect you to use a cron event to use the cli to do a volume log-rotate. Many (most?) of us don't, however, and instead use logrotate's copytruncate method.
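A minimal sketch of the copytruncate approach JoeJulian describes, assuming bricks log under /var/log/glusterfs/bricks/ and that weekly rotation keeping four compressed copies is acceptable; the file name under /etc/logrotate.d/ is arbitrary.

    # drop a logrotate snippet in place; copytruncate avoids needing to signal the brick processes
    cat > /etc/logrotate.d/glusterfs-bricks <<'EOF'
    /var/log/glusterfs/bricks/*.log {
        weekly
        rotate 4
        compress
        missingok
        copytruncate
    }
    EOF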
12:39 kkeithley glusterfs-3.4.3-2 RPMS for SLES11sp3 and OpenSuSE13.1 now available at http://download.gluster.org/pub/gluster/glusterfs/LATEST/
12:39 glusterbot Title: Index of /pub/gluster/glusterfs/LATEST (at download.gluster.org)
12:45 sks joined #gluster
12:50 japuzzo joined #gluster
12:53 liquidat joined #gluster
12:54 JoeJulian magicrobotmonkey: "gluster volume info" will show settings that you've changed from the default. "gluster volume set help" will show the defaults.
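A quick illustration of those two commands, plus setting an option; the volume name myvol is a placeholder.

    gluster volume info myvol                               # shows only options changed from their defaults
    gluster volume set help                                 # lists available options and their default values
    gluster volume set myvol performance.cache-size 256MB   # example of changing one option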
12:59 glusterbot New news from newglusterbugs: [Bug 1084485] tests/bugs/bug-963678.t is crashing in Rackspace, generating core file <https://bugzilla.redhat.com/show_bug.cgi?id=1084485>
13:02 sahina joined #gluster
13:02 bennyturns joined #gluster
13:02 magicrobotmonkey ok cool, thanks JoeJulian
13:12 shyam joined #gluster
13:17 jmarley joined #gluster
13:21 B21956 joined #gluster
13:32 davinder joined #gluster
13:36 lpabon joined #gluster
13:39 davinder2 joined #gluster
13:42 lalatenduM joined #gluster
13:42 Isaacabo joined #gluster
13:42 Isaacabo good morning guys
13:45 tdasilva joined #gluster
13:50 JoeJulian ... depends on your definition... ;(
13:50 JoeJulian ;(
13:50 JoeJulian dammit...
13:50 JoeJulian ;)
13:50 JoeJulian that's the one...
13:57 Isaacabo jajaja
13:58 seapasulli joined #gluster
13:59 glusterbot New news from resolvedglusterbugs: [Bug 849630] client_t implementation <https://bugzilla.redhat.com/show_bug.cgi?id=849630>
13:59 magicrobotmonkey left #gluster
14:02 rwheeler joined #gluster
14:02 japuzzo joined #gluster
14:04 jag3773 joined #gluster
14:11 ultrabizweb joined #gluster
14:16 theron joined #gluster
14:18 theron_ joined #gluster
14:19 rpowell joined #gluster
14:21 LoudNoises joined #gluster
14:27 wushudoin joined #gluster
14:29 glusterbot New news from newglusterbugs: [Bug 1084508] read-ahead not working if open-behind is turned on <https://bugzilla.redhat.com/show_bug.cgi?id=1084508>
14:40 rwheeler joined #gluster
14:52 kaptk2 joined #gluster
15:00 sks joined #gluster
15:03 ctria joined #gluster
15:07 eseyman joined #gluster
15:08 benjamin_ joined #gluster
15:21 rpowell left #gluster
15:23 georgeh|workstat joined #gluster
15:26 daMaestro joined #gluster
15:27 tyrok_laptop2 joined #gluster
15:28 tyrok_laptop2 I have a single Gluster server set up right now which contains data and I would like to add a second Gluster server to replicate that data.  Is there a place in the docs that tells how to do this?  If so, I haven't been able to find it.
15:31 hagarth joined #gluster
15:32 _Bryan_ joined #gluster
15:44 bala joined #gluster
15:45 kkeithley tyrok_laptop2: gluster volume add-brick $volname replica2 $secondserver:$pathtobrick
15:45 kkeithley gluster volume add-brick $volname replica 2 $secondserver:$pathtobrick
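Spelled out as a rough sequence with placeholder host and path names, assuming the second server already runs the same glusterfs version:

    gluster peer probe server2                                        # add the new server to the trusted pool
    gluster volume add-brick myvol replica 2 server2:/export/brick1   # turn the volume into a 2-way replica
    gluster volume heal myvol full                                    # ask the self-heal daemon to copy existing data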
15:47 Ark joined #gluster
15:48 semiosis partner: someone else asked me to cover brick logs in logrotate config for debs.  it's on my todo list but i've been really busy lately (lost a dev on my team at work & covering his load)
15:48 Guest19057 I have 4 bricks, all 3 TB in size. I run distributed replication across them. I noticed the data on the bricks is not evenly distributed; is this because the files I am backing up are over 1 TB?
15:49 semiosis Guest19057: files should be evenly distributed, not bytes
15:49 tyrok_laptop2 kkeithley: Excellent.  Thanks!
15:50 seapasulli_ joined #gluster
15:54 Guest19057 semiosis: thank you, so it is not uncommon to see some bricks with 10% more data than others?
15:55 Guest19057 Is this related to the distributed hashing algorithm?
15:56 semiosis the algorithm works on file names.  if you have only a few files, they may not be evenly distributed.
15:56 semiosis if your files are very different in size then you could expect uneven disk usage
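A rough way to compare bytes versus file count on each brick server; the brick path is a placeholder, and the .glusterfs metadata directory is excluded from the count.

    df -h /export/brick1                                                         # bytes used on this brick
    find /export/brick1 -path '*/.glusterfs' -prune -o -type f -print | wc -l    # files DHT has placed here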
15:57 Guest19057 Great, Thank you. If I wanted to geo replication is this the newest guide http://www.gluster.org/community/documentation/index.php/HowTo:geo-replication ?
15:57 glusterbot Title: HowTo:geo-replication - GlusterDocumentation (at www.gluster.org)
15:57 semiosis dont know
16:01 Guest19057 semiosis: thank you for your time again
16:03 tomased joined #gluster
16:04 bala joined #gluster
16:04 semiosis yw
16:10 lmickh joined #gluster
16:12 bennyturns joined #gluster
16:15 doekia joined #gluster
16:16 doekia_ joined #gluster
16:20 ckannan joined #gluster
16:33 davinder joined #gluster
16:34 aravindavk joined #gluster
16:35 Mo__ joined #gluster
16:48 sputnik1_ joined #gluster
16:51 doekia joined #gluster
17:07 jobewan joined #gluster
17:08 zerick joined #gluster
17:23 dusmant joined #gluster
17:29 wushudoin left #gluster
17:32 diegows joined #gluster
17:42 NuxRo JoeJulian: other than the old version of glusterfs-libs, has the upgrade been uneventful? I'm contemplating upgrading a 3.4.0 installation, want to know if there are gotchas
17:56 kkeithley FWIW, I respun the RPMs (3.4.3-2) so you won't encounter the glusterfs-libs issue
17:59 vpshastry joined #gluster
17:59 vpshastry left #gluster
18:00 lalatenduM joined #gluster
18:12 lkoranda joined #gluster
18:14 theron joined #gluster
18:17 lkoranda joined #gluster
18:18 vpshastry joined #gluster
18:25 mfs joined #gluster
18:25 _dist joined #gluster
18:29 hchiramm_ joined #gluster
18:40 sijis joined #gluster
18:40 sijis joined #gluster
18:42 tyrok_laptop2 Is there any way to know when, after adding a node and running the recommended find | xargs, replication is complete on all files?
18:47 JoeJulian NuxRo: Yes, surprisingly uneventful, especially when all my servers each failed to start independently after being (stupidly) automatically upgraded.
18:48 hagarth1 joined #gluster
18:49 JoeJulian kkeithley: not sure what, but something else is causing glusterd to fail to start on upgrade.
18:50 JoeJulian kkeithley: I think it might be the upgrade switch.
18:50 vpshastry left #gluster
18:51 semiosis tyrok_laptop2: you can scan the files for "dirty" xattrs
18:51 semiosis @check
18:51 glusterbot semiosis: I do not know about 'check', but I do know about these similar topics: 'check replication'
18:51 semiosis @check replication
18:51 glusterbot semiosis: http://joejulian.name/blog/quick-and-dirty-python-script-to-check-the-dirty-status-of-files-in-a-glusterfs-brick/
18:51 semiosis that
18:52 tyrok_laptop2 semiosis: Awesome.  Thanks!
18:52 JoeJulian Is the find|xargs recommended anymore?
18:52 semiosis yw
18:53 JoeJulian I think I would recommend it, but that's just because it's the closest I can get to being sure it's finished.
18:53 JoeJulian I wish there was a way to find out if glustershd has finished a "heal...full" run.
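For what it's worth, a hedged sketch of the usual checks on 3.3/3.4; the volume name, client mount, and brick path are placeholders, and the xattrs are the AFR changelog attributes the script linked above inspects.

    gluster volume heal myvol info                            # list entries the self-heal daemon still has queued
    # trigger a crawl by stat()ing everything through a client mount (the old find|xargs recipe)
    find /mnt/myvol -noleaf -print0 | xargs --null stat >/dev/null
    # or inspect a file's pending-operation xattrs directly on a brick
    getfattr -m . -d -e hex /export/brick1/some/file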
18:53 semiosis JoeJulian: you can file a bug
18:53 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
18:53 semiosis ;)
18:54 lalatenduM_ joined #gluster
18:54 JoeJulian I was just thinking that and trying to remember if I already had.
19:00 glusterbot New news from newglusterbugs: [Bug 1084585] There needs to be a cli method of determining if heal full has completed <https://bugzilla.redhat.com/show_bug.cgi?id=1084585>
19:01 semiosis nice
19:10 joshin joined #gluster
19:18 siel joined #gluster
19:33 bennyturns joined #gluster
19:34 Matthaeus joined #gluster
19:53 joshin joined #gluster
19:53 joshin joined #gluster
19:59 hagarth joined #gluster
20:06 Ark joined #gluster
20:07 SteveHz joined #gluster
20:07 kkeithley GlusterFS-3.5.0.beta5 RPMs for el5-7 (RHEL, CentOS, etc.) and Fedora  19-21 are now available in  the YUM repo at  http://download.gluster.org/pub/gluster/glusterfs/qa-releases/3.5.0beta5/
20:07 glusterbot Title: Index of /pub/gluster/glusterfs/qa-releases/3.5.0beta5 (at download.gluster.org)
20:13 pjschmit1 joined #gluster
20:17 cfeller joined #gluster
20:29 georgeh|workstat joined #gluster
20:40 SteveHz left #gluster
21:16 tdasilva left #gluster
21:25 zaitcev joined #gluster
21:31 purpleidea w00t (does this release include the doc things?)
21:41 zerick joined #gluster
21:50 qdk joined #gluster
21:54 jclift purpleidea: 3.5.0-beta5?  Don't think so yet.  hagarth mentioned earlier that the doc bits are likely to come in through next week.
21:54 purpleidea jclift: no worries, just curious. thanks!
21:54 purpleidea jclift: why don't you try running 3.5.0beta5 with puppet-gluster+vagrant and let me know in 20 if it works? :))
22:01 msvbhat joined #gluster
22:01 jclift joined #gluster
22:08 jclift purpleidea: I probably will once I get git master working in Rackspace properly.  It's pretty close to having the regression tests all passing.
22:08 jclift Now _quite_ there, but not far off.
22:09 jclift s/Now/Not/
22:09 glusterbot What jclift meant to say was: Not _quite_ there, but not far off.
22:09 purpleidea hehe sweet :)
22:09 purpleidea jclift: i started spinning up VM's in rackspace last week. I think I
22:09 purpleidea 've used $0.23 so far :P
22:09 jclift :D
22:16 JoeJulian I confirmed one bug and decided another wasn't really a bug in RS already.
22:20 jclift JoeJulian: Ahhh, the testing VM's?
22:21 jclift joetest, joetest2
22:21 jclift Cool
22:21 JoeJulian yep
22:21 jclift JoeJulian: Which bug confirmed, which bug wasn't really a bug?
22:22 JoeJulian the "wasn't really" I never filed. I thought I'd found one, but afaict it doesn't cause anything to fail. The other was the package update.
22:22 jclift Cool
22:23 jclift JoeJulian: I'm finding that Rackspace VM's aren't super reliable.  Out of every 10 or 15 that I start, one fails to boot and init fully.  I'll be able to log into it, but it doesn't do any cloud-init stuff properly, and the log shows "no such user cloud-user" or similar wording.
22:24 jclift eg I just kill it and respin it up, and generally it's all good
22:24 JoeJulian Interesting.
22:25 jclift Probably need to write some kind of handler code that verifies after spin-up that the VM is in a good state before we actually try using it.
22:25 JoeJulian I've never had that, but then I rarely used the gui.
22:25 jclift But, later.  Not right now. ;)
22:25 jclift JoeJulian: Nah, this is all using the Rackspace API.
22:25 jclift I only use the GUI to kill VM's
22:25 JoeJulian I'll tell people... ;)
22:25 jclift :)
22:26 JoeJulian Unfortunately, everyone I know is no longer involved in anything that could help find/fix that.
22:26 jclift It's ok.
22:26 JoeJulian ... hmm, I wonder if westmaas could help...
22:26 jclift It's not bothering me that much really, since I'm just spinning them up for auto-testing, and manually logging into each one afterwards to check what's passed/failed.
22:27 jclift Later on, when things are much more automated, I'll be able to think what potential causes are and probably extract useful info for RS staff
22:28 jclift Until then, don't worry abt it :)
22:28 purpleidea JoeJulian: are you our internal rackspace connection? If you can "tell people" I'd like to be able to manage the virtual machine networks through vagrant, instead of through the gui. vagrant-libvirt does this for example, but vagrant-rackspace doesn't :(
22:29 jclift purpleidea: They probably accept patches.
22:30 jclift purpleidea: Just saying, because most (not all) Rackspace "Cloud" stuff has an API for changing stuff
22:30 purpleidea I bet they do! Does the rackspace API support modifying the networks?
22:30 purpleidea right
22:30 purpleidea _thats_the_question_ :)
22:30 jclift Yep
22:30 jclift It does
22:30 purpleidea it does?
22:30 jclift Yep, looking at it right now
22:31 jclift purpleidea: So, two things I've found with the Rackspace API docs.
22:31 jclift a) If you go here, it's basically branded OpenStack docs: http://docs.rackspace.com/
22:31 glusterbot Title: Rackspace Cloud Technical Documentation (at docs.rackspace.com)
22:31 purpleidea Ah! beautiful, well, i feel like it would probably be a conflict of interest for me to go add value to rackspace products at the moment (*cough* RH/openstack) but maybe you can?
22:32 jclift Front page looks pretty, but it's all curl commands behind it
22:32 jclift purpleidea: Are you any good with Python?
22:32 purpleidea i am good with my python!
22:32 MacWinner joined #gluster
22:32 purpleidea err that came out wrong
22:33 purpleidea i am good _at_ python
22:33 jclift Heh, I was going to make a smartarse comment, but resisted (first time ever) :)
22:33 JoeJulian my python is amazing!
22:33 jclift Mines no. :(
22:33 JoeJulian lol
22:33 jclift My Python is all like a beginner
22:33 jclift It's _sooo_ tragic
22:34 purpleidea okay have a good weekend everyone! (before i have to hear about jclift and python)
22:34 jclift The womenz... they runzez
22:34 purpleidea damn
22:34 JoeJulian d'oh!
22:34 jclift purpleidea: So, Python SDK: http://developer.rackspace.com/#python
22:34 glusterbot Title: Rackspace Developer Center (at developer.rackspace.com)
22:34 purpleidea /afk
22:35 purpleidea oh cool it uses fog on ruby
22:35 purpleidea yeah someone should patch vagrant-rackspace
22:35 jclift purpleidea: The Python Rackspace API is pretty good for stuff.  Not brilliantly documented for some things, but it's pretty easy to trace through, and the guys in #rackspace here on Freenode are helpful
22:35 purpleidea vagrant-rackspace is ruby, not python
22:35 jclift purpleidea: I haven't touched the Ruby API for it
22:35 purpleidea ruby is like weird python.
22:36 jclift purpleidea: Yeah, I know.  I'm only familiar with the Python API though, so I can only say what I think that's like... with the assumption the Ruby one is prob similar
22:36 jclift I started learning Ruby first, ages ago.
22:37 jclift Then we had to actually try and build an enterprise product around it
22:37 jclift That personally turned me off Ruby ;)
22:37 jclift So, learning Python now :)
22:37 purpleidea yeah forget that. ruby is still (imo) quite immature. in any case, all these things exist in the api. someone just needs to patch vagrant-rackspace for more goodness
22:37 jclift Yep
22:38 jclift purpleidea: It's unlikely you writing a patch for vagrant-rackspace would be looked on badly by RH
22:39 jclift Since realistically, it'll help the RH projects that run in Rackspace :)
22:39 purpleidea jclift: how crazy is legal about these sorts of things?
22:39 jclift _Extremely_ relaxed
22:39 purpleidea jclift: also, it's not as if that's the #1 thing i'd want to patch :P i'd rather work on vagrant-libvirt :)
22:39 JoeJulian no, ruby is like weird perl
22:40 droner joined #gluster
22:40 jclift purpleidea: Heh.  I'm pretty sure the vagrant-libvirt guys would be very happy to hear that.  And even more happy to get patches
22:40 purpleidea i've sent at least one so far. needs moar for v1.5.1
22:42 droner Hello, I've just a humble question: is there any news about HekaFS?
22:42 purpleidea droner: yeah it's now glusterfs
22:43 purpleidea droner: hekaFS was started by some RH internal guys (jdarcy) before redhat acquired glusterfs... now that work is all upstream in gluster.org (or as much of it that was able to be merged)
22:43 purpleidea and RH makes a downstream product called "Red Hat Storage" -- Sign up today!! :))
22:44 JoeJulian @commercial
22:44 glusterbot JoeJulian: Commercial support of GlusterFS is done as Red Hat Storage, part of Red Hat Enterprise Linux Server: see https://www.redhat.com/wapps/store/catalog.html for pricing also see http://www.redhat.com/products/storage/ .
22:44 shyam joined #gluster
22:45 purpleidea damn JoeJulian, you've got glusterbot traaaiiinnned
22:46 droner purpleidea: from https://www.gluster.org/community/documentation/index.php/Features/disk-encryption I see that it's still considered a "work in progress" and there's no public work/repo/git/whatever (at least I found none) that doesn't date back years
22:46 glusterbot Title: Features/disk-encryption - GlusterDocumentation (at www.gluster.org)
22:47 purpleidea droner: if you want to do at rest encryption, build your bricks on top of LUKS.
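A minimal sketch of that approach, assuming a dedicated block device and XFS for the brick filesystem; all device, mount, and volume names are placeholders.

    cryptsetup luksFormat /dev/sdb1                 # initialize the encrypted container (destroys existing data)
    cryptsetup luksOpen /dev/sdb1 brick1            # map it as /dev/mapper/brick1
    mkfs.xfs -i size=512 /dev/mapper/brick1         # larger inodes leave room for gluster's xattrs
    mkdir -p /export/brick1
    mount /dev/mapper/brick1 /export/brick1
    gluster volume create myvol server1:/export/brick1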
22:50 purpleidea droner: also, maybe post a message to the ml about it if you want wider feedback
22:50 purpleidea @mailinglist
22:50 glusterbot purpleidea: I do not know about 'mailinglist', but I do know about these similar topics: 'mailinglists'
22:50 purpleidea ~mailinglists | droner
22:50 glusterbot droner: http://www.gluster.org/interact/mailinglists
22:51 JoeJulian @alias mailinglists mailinglist
22:51 glusterbot JoeJulian: The operation succeeded.
22:51 purpleidea i <3 JoeJulian
22:51 JoeJulian need a turing capable bot next...
22:52 jclift JoeJulian: https://www.mturk.com/mturk/welcome
22:52 glusterbot Title: Amazon Mechanical Turk - Welcome (at www.mturk.com)
22:53 JoeJulian hehe
22:53 purpleidea JoeJulian: wait... you're not a bot?
22:53 jclift Anyway, night fellas :)
22:53 JoeJulian night
22:53 purpleidea TIL JoeJulian _is_ turing complete, but he is _not_ a bot.
22:53 JoeJulian I'm thinking of outsourcing purpleidea to Bangalore...
22:54 purpleidea :(
23:20 gmcwhistler joined #gluster
23:54 social joined #gluster
