
IRC log for #gluster, 2014-05-27


All times shown according to UTC.

Time Nick Message
00:12 samppah joined #gluster
00:20 lyang0 joined #gluster
00:26 diegows joined #gluster
00:35 purpleidea joined #gluster
00:35 recidive joined #gluster
00:50 lyang0 joined #gluster
00:51 gdubreui joined #gluster
00:53 yinyin_ joined #gluster
00:56 akay joined #gluster
01:12 theron joined #gluster
01:19 RicardoSSP joined #gluster
01:32 theron joined #gluster
01:37 yinyin joined #gluster
01:42 recidive joined #gluster
01:45 DV joined #gluster
01:57 harish joined #gluster
02:28 Ark joined #gluster
02:32 harish joined #gluster
02:32 suliba joined #gluster
02:42 bala joined #gluster
03:06 davinder11 joined #gluster
03:23 kanagaraj joined #gluster
03:24 RameshN joined #gluster
03:28 RameshN joined #gluster
03:52 itisravi joined #gluster
03:53 bharata-rao joined #gluster
03:58 nishanth joined #gluster
03:58 kdhananjay joined #gluster
03:58 glusterbot New news from resolvedglusterbugs: [Bug 764655] NetBSD port <https://bugzilla.redhat.com/show_bug.cgi?id=764655>
04:06 shubhendu joined #gluster
04:07 haomaiwang joined #gluster
04:08 ndarshan joined #gluster
04:10 bala joined #gluster
04:10 ppai joined #gluster
04:12 psharma joined #gluster
04:12 saltsa joined #gluster
04:15 Ark joined #gluster
04:18 dusmant joined #gluster
04:20 ngoswami joined #gluster
04:34 Pupeno joined #gluster
04:38 glusterbot New news from newglusterbugs: [Bug 1101382] [RFE] glusterd log could also add hostname or ip along with host's UUID <https://bugzilla.redhat.com/show_bug.cgi?id=1101382>
04:41 raghu joined #gluster
04:42 haomaiwa_ joined #gluster
04:43 kumar joined #gluster
04:48 davinder11 joined #gluster
04:51 yinyin_ joined #gluster
04:52 haomai___ joined #gluster
04:53 aravindavk joined #gluster
04:56 itisravi joined #gluster
05:11 sputnik13 joined #gluster
05:11 spandit joined #gluster
05:12 prasanthp joined #gluster
05:14 ppai joined #gluster
05:17 y4m4 joined #gluster
05:23 nshaikh joined #gluster
05:29 psharma joined #gluster
05:32 kanagaraj joined #gluster
05:33 lalatenduM joined #gluster
05:35 shylesh__ joined #gluster
05:37 kshlm joined #gluster
05:37 ProT-0-TypE joined #gluster
05:39 hagarth joined #gluster
05:43 Pupeno joined #gluster
05:46 hagarth joined #gluster
05:54 rastar joined #gluster
06:11 vpshastry joined #gluster
06:12 Ark joined #gluster
06:12 ngoswami joined #gluster
06:16 micu joined #gluster
06:16 ricky-ti1 joined #gluster
06:17 rjoseph joined #gluster
06:17 rtalur_ joined #gluster
06:17 dusmant joined #gluster
06:18 sputnik13 joined #gluster
06:23 rahulcs joined #gluster
06:28 hagarth joined #gluster
06:28 jag3773 joined #gluster
06:29 bala1 joined #gluster
06:35 vimal joined #gluster
06:38 huleboer joined #gluster
06:42 bala1 joined #gluster
06:47 latha joined #gluster
06:49 vpshastry left #gluster
06:50 meghanam joined #gluster
06:51 hchiramm_ joined #gluster
06:55 ktosiek joined #gluster
07:08 mbukatov joined #gluster
07:08 vpshastry joined #gluster
07:08 ProT-0-TypE joined #gluster
07:08 mbukatov joined #gluster
07:08 getup- joined #gluster
07:10 eseyman joined #gluster
07:13 keytab joined #gluster
07:15 edward1 joined #gluster
07:18 liquidat joined #gluster
07:21 ppai joined #gluster
07:23 hchiramm_ joined #gluster
07:23 Thilam hi there, i have a problem with the dir quota system, is there someone who can help ?
07:25 Thilam when the quota is exceeded, even though "gluster quota info" shows the right value, it takes some time for the client to get the "quota exceeded" message
07:25 ctria joined #gluster
07:27 Thilam and during this time the client can continue writing to the directory
07:27 Thilam and i'm just talking alone \o/
07:29 ndevos Thilam: I've not done a lot with quota myself, but I've seen that several fixes were sent recently, what version are you on?
07:30 Thilam hi :)
07:30 Thilam the "latest"
07:30 Thilam 3.5
07:31 Thilam it's really strange because, like I told you before, the servers have the right info, they just take time to tell the client: "you can't write into this dir anymore"
07:32 Thilam and so the quota may be exceeded by several GB
07:32 Thilam which is not good :/
07:33 sputnik13 joined #gluster
07:34 Thilam but the reverse is not true, once the client get the message "quota exceeded" if it deletes stuff to go back under the quota it could start immediately to write in the dir
07:37 ndevos there is a work in progress to document the functioning of quota, but its very rough atm: http://review.gluster.org/#/c/7882/2/doc/features/quota-scalability.md,unified
07:37 glusterbot Title: Gerrit Code Review (at review.gluster.org)
07:38 Thilam i've just been answered on the ML by Vijay
07:38 Thilam he told me to put features.quota-timeout to 0 value
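
For readers following along: the directory quota and the quota-timeout option discussed here are set through the gluster CLI. A minimal sketch, assuming a hypothetical volume "myvol" and directory "/projects":

    # enable quota on the volume and set a per-directory limit
    gluster volume quota myvol enable
    gluster volume quota myvol limit-usage /projects 5GB

    # have clients re-check usage instead of relying on a cached value
    gluster volume set myvol features.quota-timeout 0

    # verify the configured limits and current usage
    gluster volume quota myvol list
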
07:39 Thilam I'm going to test
07:39 ndevos this has some descriptions too: https://access.redhat.com/site/documentation/en-US/Red_Hat_Storage/2.1/html-single/Administration_Guide/index.html#idm64701520
07:39 glusterbot Title: Administration Guide (at access.redhat.com)
07:39 micu1 joined #gluster
07:40 ndevos hagarth: quota!
07:42 coredumb what is the reason for gluster to not want to create a volume on a partition and to prefer a subdir instead ?
07:44 ndevos coredumb: like the advice to have a mountpoint like /bricks/volume-b1 and use /bricks/volume-b1/data for the brick on 'gluster volume create'?
07:44 Thilam same thing after changing the quota-timeout value
07:45 Thilam with a 50MB dir quota, I can write up to 2GB before I get the over-quota message :/
07:45 coredumb ndevos: yes
07:45 hagarth Thilam: 50 MB is a very low value, can you please check with a higher value?
07:46 fsimonce joined #gluster
07:46 Thilam that's what I'm doing :)
07:46 Thilam just before you asked :p
07:46 ndevos coredumb: right, so when you reboot, /bricks/volume-b1 is always there, even if mounting that partition/LV failed - the brick process will then export your rootfs
07:47 Thilam ho it's better
07:47 ndevos coredumb: by adding a directory, the brick process complains that it is missing, and will exit again
07:48 coredumb ndevos: ohhhhhhh
07:48 coredumb ok i get it :)
07:48 ndevos :)
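
ndevos's point above, in command form. A hedged sketch with placeholder device, host, and volume names: mount the brick filesystem at one path, create the brick directory one level below it, and point the volume at that subdirectory, so a failed mount leaves the brick path missing and the brick process refuses to start instead of writing to the root filesystem.

    # mount the brick filesystem (placeholder device and paths)
    mkdir -p /bricks/volume-b1
    mount /dev/vg0/lv_brick1 /bricks/volume-b1

    # create the actual brick directory one level below the mountpoint
    mkdir -p /bricks/volume-b1/data

    # reference the subdirectory, not the mountpoint, when creating the volume
    gluster volume create myvol replica 2 \
        server1:/bricks/volume-b1/data server2:/bricks/volume-b1/data
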
07:48 shubhendu joined #gluster
07:49 Thilam ok problem solved with larger quota
07:49 Thilam which are more realistic
07:50 Thilam thank you guys
07:51 Thilam hmm, I spoke too quickly
07:51 Thilam a 5GB quota was just exceeded by 1.5GB
07:52 hagarth Thilam: are you using dd on a single client to perform these writes?
07:52 ppai joined #gluster
07:52 Thilam I use both dd and file copy
07:52 Thilam and yes, it's from a single client
07:53 hagarth Thilam: how many files in all?
07:53 hagarth rather how many files are being written as part of this test?
07:53 Thilam same file of 100MB I copied x times by just changing last char
07:54 Thilam so 65 files to reach 6,5GB
07:55 hagarth Thilam: bbiab
07:57 Thilam bbiab? be back in a ?
07:57 Thilam boment ? :D
08:00 jwww_ bit ;)
08:00 jwww_ Hello.
08:00 glusterbot jwww_: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
08:04 Thilam *
08:04 Thilam [10:00] <jwww_> bit ;)
08:04 Thilam thx
08:12 harish joined #gluster
08:20 Norky joined #gluster
08:23 Philambdo joined #gluster
08:38 rtalur_ joined #gluster
08:42 Gugge joined #gluster
08:44 Thilam ndevos : the administration guide you provided was very helpful
08:44 coredumb question about quorum
08:45 Thilam there is no pb with the quota, it was just a "check quota timer" issue
08:45 coredumb if i understood correctly, i can add a node to the peer list that doesn't have to host bricks and will only be used as a "witness" to prevent split brains, right ?
08:45 edward1 joined #gluster
08:54 hagarth joined #gluster
08:56 karnan joined #gluster
08:56 olisch joined #gluster
08:56 harish joined #gluster
09:03 ngoswami joined #gluster
09:20 kshlm joined #gluster
09:23 vpshastry joined #gluster
09:23 hagarth joined #gluster
09:24 deepakcs joined #gluster
09:30 jcsp joined #gluster
09:34 Slashman joined #gluster
09:39 glusterbot New news from newglusterbugs: [Bug 1101479] glusterfs process crash when creating a file with command touch on stripe volume <https://bugzilla.redhat.com/show_bug.cgi?id=1101479>
09:44 jcsp joined #gluster
09:57 haomaiwa_ joined #gluster
09:59 atinmu joined #gluster
10:08 karimb joined #gluster
10:12 calum_ joined #gluster
10:12 karnan joined #gluster
10:15 kkeithley1 joined #gluster
10:23 kaushal_ joined #gluster
10:31 ira joined #gluster
10:42 ctria joined #gluster
10:50 getup- joined #gluster
10:55 ProT-0-TypE joined #gluster
10:56 recidive joined #gluster
11:05 chirino joined #gluster
11:08 joostini joined #gluster
11:09 joostini hi
11:09 glusterbot joostini: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
11:11 joostini i started using glusterfs. But now i read about a web manager. It is mentioned in the documentation, and when i search google people are talking about it. But i can't find a download or installation instructions anywhere, and the iso does not seem to exist. What can i do ?
11:12 samppah joostini: if you are referring to Gluster Storage Platform it was discontinued long time ago
11:12 samppah currently you can use oVirt to manage GlusterFS volumes
11:14 joostini i have taken a look @ oVirt, but you need to open Root access on your ssh to work with RSA keys.. which i do not prefer
11:15 joostini you know a good guide/reference ?
11:17 diegows joined #gluster
11:18 samppah joostini: nope, i'm using RHEV/oVirt but only with virtualization.. quite happy with gluster cli :)
11:18 samppah @puppet-gluster
11:18 samppah @puppet
11:18 glusterbot samppah: https://github.com/purpleidea/puppet-gluster
11:19 samppah ^ that might also be worth looking for
11:22 vpshastry joined #gluster
11:23 Philambdo joined #gluster
11:23 ndarshan joined #gluster
11:23 karnan joined #gluster
11:23 RameshN joined #gluster
11:24 dusmant joined #gluster
11:26 mmorsi joined #gluster
11:28 joostini yes i already got gluster working through puppet, only with a different module, but i might look @ that one also. thx
11:29 joostini (but still... it is a pity there is no built-in web solution anymore)
11:30 Chewi joined #gluster
11:31 samppah joostini: and then there is this https://forge.gluster.org/gluster-deploy
11:31 glusterbot Title: gluster-deploy - Gluster Community Forge (at forge.gluster.org)
11:31 haomaiwa_ joined #gluster
11:32 samppah http://www.youtube.com/watch?v=UxyPLnlCdhA
11:32 glusterbot Title: Gluster Setup...The Easy Way - YouTube (at www.youtube.com)
11:35 joostini ok tnx. looking into it :)
11:36 rtalur_ joined #gluster
11:38 haomaiwang joined #gluster
11:39 edward1 joined #gluster
11:41 haomaiwa_ joined #gluster
11:49 haomaiwa_ joined #gluster
11:51 haomai___ joined #gluster
11:58 Chewi hello. I've set up geo-rep under 3.5 over a slow link. there are 75GB of data and I already have it present on the slave. the data has been fully synced with rsync beforehand but gluster still seems to insist on fully resyncing everything. I left it overnight and it's still chewing up all our bandwidth in hybrid crawl mode. is there any way to tell it that the slave is already up to date? I know it probably has to generate some m
11:58 Chewi etadata but does that really mean all the data has to be resent?
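
For reference, a 3.5 geo-replication session is inspected and managed through the geo-replication CLI. Nothing in this exchange identifies a supported way to tell geo-rep that the slave is already populated, so the sketch below only shows how to see what the session is doing; "mastervol" and "slavehost::slavevol" are placeholders.

    # per-brick crawl status, files synced, and pending counts
    gluster volume geo-replication mastervol slavehost::slavevol status detail

    # list the session's current configuration (crawl mode, sync engine, etc.)
    gluster volume geo-replication mastervol slavehost::slavevol config

    # pause the sync while investigating, then resume
    gluster volume geo-replication mastervol slavehost::slavevol stop
    gluster volume geo-replication mastervol slavehost::slavevol start
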
12:00 ndarshan joined #gluster
12:01 RameshN joined #gluster
12:04 B21956 joined #gluster
12:05 karnan joined #gluster
12:10 glusterbot New news from newglusterbugs: [Bug 1099369] [barrier] null gfid is shown in state-dump file for respective barrier fops when barrier is enable and state-dump has taken <https://bugzilla.redhat.com/show_bug.cgi?id=1099369>
12:10 davinder11 joined #gluster
12:10 rtalur_ joined #gluster
12:17 haomaiwa_ joined #gluster
12:17 sjm joined #gluster
12:17 derelm joined #gluster
12:18 derelm hi i am trying to remove a number of bricks from a volume - it warns me about dataloss - how do i do that without actually losing data?
12:19 haomaiwa_ joined #gluster
12:22 flowouffff Hello guys
12:23 flowouffff Anyone knows why i cant get an auth-token with gluster-swift ? It returns a  bad request response when I hit http://localhost:8080/auth/v1.0 ??
12:24 flowouffff my config is openstack swift havana with gluster-swift 1.10
12:24 flowouffff on centos
12:26 flowouffff nothing in logs to help me out :/
12:29 haomaiwa_ joined #gluster
12:30 haomaiwang joined #gluster
12:32 japuzzo joined #gluster
12:32 ramteid joined #gluster
12:32 foobar derelm: do you have redundancy in your bricks ?
12:33 kdhananjay exit
12:33 derelm foobar: it is a very simple test setup - i started with a replica 2 set with two bricks, later i added two bricks making this a distribute + replicate set - now i want to remove the bricks i added last
12:34 foobar you can remove with --force
12:35 chirino joined #gluster
12:35 kkeithley_ GlusterFS-3.4.4beta1 and GlusterFS-3.5.1beta1 RPMs for el5-7 (RHEL, CentOS, etc.) and Fedora 19-21 are now available in the YUM repos at http://download.gluster.org/pub/gluster/glusterfs/qa-releases/3.4.4beta1/ and http://download.gluster.org/pub/gluster/glusterfs/qa-releases/3.5.1beta1/
12:35 glusterbot Title: Index of /pub/gluster/glusterfs/qa-releases/3.4.4beta1 (at download.gluster.org)
12:36 foobar but since it's distributed... maybe stop the brick, and manually rsync it to the volume again
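
Worth noting for derelm's case: remove-brick with force skips data migration, which is where the data-loss warning comes from. Since 3.3 the start/status/commit workflow drains data off the departing bricks first. A sketch with placeholder volume and brick names; on a distribute-replicate volume, remove bricks in whole replica sets:

    # kick off migration of data away from the bricks being removed
    gluster volume remove-brick myvol server3:/bricks/b1/data server4:/bricks/b1/data start

    # watch the rebalance-style migration until it shows completed
    gluster volume remove-brick myvol server3:/bricks/b1/data server4:/bricks/b1/data status

    # finalize the removal once migration is complete
    gluster volume remove-brick myvol server3:/bricks/b1/data server4:/bricks/b1/data commit
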
12:36 haomaiwang joined #gluster
12:39 kke joined #gluster
12:41 kke i'm in the middle of moving from our own glusterfs to a 3rd-party hosted glusterfs, our server is 3.2 and clients are 3.2, theirs is 3.4. is there some way to connect to both from the same app server?
12:42 hagarth joined #gluster
12:43 haomaiwang joined #gluster
12:43 chirino joined #gluster
12:45 rahulcs joined #gluster
12:46 Ark joined #gluster
12:46 kke maybe i must switch to nfs on the old one so that i can upgrade the clients
12:47 dusmant joined #gluster
12:49 sroy joined #gluster
12:50 kkeithley_ kke: yes, use nfs
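
A sketch of what "use nfs" can look like on the app server, with placeholder hostnames and volume names: mount the old 3.2 cluster through Gluster's built-in NFSv3 server, so the native FUSE client only has to match the new 3.4 cluster.

    # old 3.2 volume via Gluster's NFS server (NFSv3 over TCP is required)
    mount -t nfs -o vers=3,tcp,nolock old-server:/oldvol /mnt/oldvol

    # new 3.4 volume with the matching 3.4 native (FUSE) client
    mount -t glusterfs new-server:/newvol /mnt/newvol
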
12:51 haomaiwang joined #gluster
12:56 theron_ joined #gluster
12:57 haomaiwang joined #gluster
12:57 karimb joined #gluster
12:57 ctria joined #gluster
12:58 ira joined #gluster
12:59 haomaiwang joined #gluster
13:03 haomaiwa_ joined #gluster
13:04 kke would be nice if new clients could still connect to old servers
13:08 primechuck joined #gluster
13:09 haomaiwang joined #gluster
13:10 rahulcs joined #gluster
13:14 haomaiwang joined #gluster
13:17 haomaiwa_ joined #gluster
13:22 jruggiero joined #gluster
13:24 haomaiwang joined #gluster
13:25 plarsen joined #gluster
13:26 haomaiwa_ joined #gluster
13:32 haomaiwang joined #gluster
13:34 sprachgenerator joined #gluster
13:34 haomaiwang joined #gluster
13:38 karimb joined #gluster
13:41 bet_ joined #gluster
13:45 haomaiwa_ joined #gluster
13:46 coredump joined #gluster
13:46 bala1 joined #gluster
13:47 coredump joined #gluster
13:51 chirino joined #gluster
13:59 rahulcs joined #gluster
14:04 chirino joined #gluster
14:05 bene2 joined #gluster
14:09 ekuric joined #gluster
14:10 glusterbot New news from newglusterbugs: [Bug 1101561] Able to delete/restore even when glusterd quorum doesnt meet <https://bugzilla.redhat.com/show_bug.cgi?id=1101561>
14:10 chirino_m joined #gluster
14:12 jbd1 joined #gluster
14:15 P0w3r3d joined #gluster
14:23 coredumb question about quorum, if i understood correctly, i can add a node to the peer list that doesn't have to host bricks and will only be used as a "witness" to prevent split brains, right ?
14:28 lpabon joined #gluster
14:28 plarsen joined #gluster
14:33 wushudoin joined #gluster
14:40 ndevos coredumb: yes, that is an option
14:40 ndevos coredumb: but, I'm not sure which quorum options you need to enable...
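
The options ndevos is alluding to: a bricks-less peer counts toward server-side quorum once it is probed into the trusted pool. A hedged sketch with placeholder names; check the documentation for your version before relying on it:

    # add the witness node to the trusted pool (it hosts no bricks)
    gluster peer probe witness-node

    # server-side quorum: glusterd stops bricks when too many peers are lost
    gluster volume set myvol cluster.server-quorum-type server
    gluster volume set all cluster.server-quorum-ratio 51%

    # client-side quorum for the replica set (optional, replica-2 caveats apply)
    gluster volume set myvol cluster.quorum-type auto
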
14:52 kshlm joined #gluster
14:52 sprachgenerator joined #gluster
15:14 al joined #gluster
15:21 fsimonce joined #gluster
15:23 keytab joined #gluster
15:26 vpshastry joined #gluster
15:33 lmickh joined #gluster
15:38 Ark_ joined #gluster
15:48 daMaestro joined #gluster
15:52 jag3773 joined #gluster
15:58 sroy joined #gluster
16:04 vimal joined #gluster
16:13 vpshastry joined #gluster
16:14 jbd1 joined #gluster
16:15 vpshastry left #gluster
16:16 ProT-0-TypE joined #gluster
16:18 _dist joined #gluster
16:19 _dist Good afternoon, in about a week or so I'll be adding a third replicate brick to a volume. They are all over 10Gbe, but I've noticed a lot of people talking about this making their volume unusable during the heal period
16:20 _dist I've never experienced anything like that myself in smaller volume changes, but this is much more data, and more important that it stays up
16:20 fsimonce joined #gluster
16:20 _dist any thoughts? experience?
16:23 Mo__ joined #gluster
16:27 [o__o] joined #gluster
16:29 semiosis _dist: you might want to try changing the heal alg, i get better performance from "full" for my workload. there's no point in diffing my files since they never change once they're created
16:30 semiosis also you can set how many files are healing in parallel, this can help manage the load
16:30 semiosis not sure if that's a documented option now, or if it's still an ,,(undocumented options)
16:30 glusterbot Undocumented options for 3.4: http://www.gluster.org/community/documentation/index.php/Documenting_the_undocumented
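
The knobs semiosis mentions, in CLI form. A sketch with placeholder volume and brick names; the background heal count was still an undocumented option in 3.4, so treat it accordingly:

    # add the third brick to the replica set
    gluster volume add-brick myvol replica 3 server3:/bricks/b1/data

    # full-file heal instead of diff-based checksumming (suits write-once data)
    gluster volume set myvol cluster.data-self-heal-algorithm full

    # limit how many files the self-heal daemon works on in parallel
    gluster volume set myvol cluster.background-self-heal-count 4

    # trigger and monitor the heal
    gluster volume heal myvol full
    gluster volume heal myvol info
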
16:33 _dist semiosis: I've never actually seen a problem, so really I'm just asking if I should be worried :) I know that's a vague question, we're talking about roughly 4TB of data, 9 raided disks per brick (so no read bottleneck).
16:33 _dist I suppose I could just kill the shd on the new brick if things go nuts
16:33 firemanxbr joined #gluster
16:33 ProT-O-TypE joined #gluster
16:36 olisch joined #gluster
16:37 jobewan joined #gluster
16:47 MeatMuppet joined #gluster
16:54 sroy_ joined #gluster
16:59 y4m4 joined #gluster
17:00 rwheeler joined #gluster
17:10 zerick joined #gluster
17:11 glusterbot New news from newglusterbugs: [Bug 1101647] gluster volume heal volname statistics heal-count not giving desired output. <https://bugzilla.redhat.com/show_bug.cgi?id=1101647>
17:11 rotbeard joined #gluster
17:12 semiosis _dist: i tried to give you the information you need to be successful in your task.  disregard it at your own peril
17:26 chirino joined #gluster
17:33 kanagaraj joined #gluster
17:38 systemonkey joined #gluster
17:46 wushudoin joined #gluster
17:56 ramteid joined #gluster
18:02 vpshastry joined #gluster
18:04 y4m4 joined #gluster
18:04 sjm joined #gluster
18:06 [o__o] joined #gluster
18:09 zaitcev joined #gluster
18:10 mtrythall joined #gluster
18:11 mtrythall [o__o]: ping
18:11 [o__o] Are you in need of my services, mtrythall?
18:11 glusterbot mtrythall: Please don't naked ping. http://blogs.gnome.org/mark​mc/2014/02/20/naked-pings/
18:15 jcsp joined #gluster
18:15 calum_ joined #gluster
18:32 primechuck joined #gluster
18:41 glusterbot New news from newglusterbugs: [Bug 1101691] [barrier] Spelling correction in glusterd log message while enabling/disabling barrier <https://bugzilla.redhat.com/show_bug.cgi?id=1101691>
18:45 mtrythall left #gluster
18:51 mjrosenb joined #gluster
18:52 primeministerp joined #gluster
18:52 daMaestro joined #gluster
18:54 sputnik13 joined #gluster
18:55 [o__o] joined #gluster
18:56 chirino joined #gluster
19:02 y4m4 joined #gluster
19:09 nueces joined #gluster
19:09 rahulcs joined #gluster
19:10 lpabon_ joined #gluster
19:11 chirino joined #gluster
19:23 tjikkun joined #gluster
19:23 tjikkun joined #gluster
19:37 marmalodak joined #gluster
19:38 bennyturns joined #gluster
19:39 micu1 joined #gluster
19:40 sroy joined #gluster
19:46 Philambdo joined #gluster
19:51 SpeeR joined #gluster
19:51 SpeeR is there another location for this URL? http://www.gluster.org/community/documentation/index.php/Features35
19:51 glusterbot Title: Features35 - GlusterDocumentation (at www.gluster.org)
19:55 SpeeR the planning page has what I needed
19:56 mattapperson joined #gluster
19:58 rotbeard joined #gluster
20:00 gdubreui joined #gluster
20:10 mjrosenb hrmm, mounting a volume seems to be hanging.
20:11 plarsen joined #gluster
20:12 mjrosenb [2014-05-27 12:50:05.508846] E [glusterfsd-mgmt.c:1783:mgmt_rpc_notify] 0-glusterfsd-mgmt: failed to connect with remote-host: Transport endpoint is not connected
20:12 mjrosenb well, no shortage of errors today!
20:14 mjrosenb ok, restarting the config-server fixed it
20:14 mjrosenb man I wish I knew what goes wrong every few months.
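
For anyone hitting the same "Transport endpoint is not connected" at mount time: the "config-server" mjrosenb restarted is the glusterd management daemon on the host named in the mount command. A hedged sketch with placeholder names (init-system commands vary by distro):

    # check and restart the management daemon on the server the client mounts from
    service glusterd status
    service glusterd restart        # or: systemctl restart glusterd

    # confirm bricks and auxiliary daemons are up, then retry the mount
    gluster volume status myvol
    mount -t glusterfs config-server:/myvol /mnt/myvol
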
20:14 akay joined #gluster
20:21 primeministerp joined #gluster
20:36 mjsmith2 joined #gluster
20:41 glusterbot New news from newglusterbugs: [Bug 1101757] Out of date instructions for getting started guide <https://bugzilla.redhat.com/show_bug.cgi?id=1101757>
20:43 decimoe joined #gluster
20:43 silky joined #gluster
20:45 rwheeler joined #gluster
20:49 tdasilva left #gluster
20:57 liammcdermott joined #gluster
20:58 ProT-0-TypE joined #gluster
21:06 * jbd1 noticing @glusterbot's new bug listing-- one of the main reasons I wound up on glusterfs instead of Ceph or Riak was the fact that the "getting started" instructions for both of those were wrong
21:07 jbd1 if you're listing commands people should copy and paste, it's essential to be sure they are correct!
21:08 jbd1 I could have figured out the issues and gone further with them, but it suggests that the rest of the documentation, and the software itself, is sloppy
21:08 mattappe_ joined #gluster
21:19 olisch joined #gluster
21:20 calum_ joined #gluster
21:26 semiosis it's a public wiki.  the reporter could have just as easily fixed the doc instead of reporting in BZ
21:31 purpleidea semiosis: with the new vagrant work that i've done for F20, i think there's an argument to be made that this should be the recommended "getting started" so that all the newbies play with gluster before coming here with questions... agree/disagree?
21:32 semiosis agree
21:32 purpleidea jbd1: want to update the wiki :) ^
21:34 semiosis i will (try to get around to it later) if no one else does
21:35 purpleidea semiosis: okay. ping me and i can help with the puppet-gluster + vagrant part. the vagrant setup article you should reference is: https://ttboj.wordpress.com/2014/05/13/vagrant-on-fedora-with-libvirt-reprise/
21:35 purpleidea and shit "just works" it seems :)
21:35 semiosis whoa i just meant update re: bug 1101757
21:35 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1101757 unspecified, unspecified, ---, divya, NEW , Out of date instructions for getting started guide
21:35 purpleidea semiosis: oh :P
21:36 purpleidea glusterbot: you're getting more and more helpful!
21:36 semiosis purpleidea: feel free to start sketching what you have in mind on the talk side of that article :)
21:40 purpleidea semiosis: done
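
A rough sketch of the Vagrant-based quick start purpleidea is proposing, based on his blog posts around this time; the repository layout and machine names below are assumptions and may differ from the current repo:

    # clone puppet-gluster and bring up the test environment with Vagrant
    git clone https://github.com/purpleidea/puppet-gluster
    cd puppet-gluster/vagrant/gluster
    vagrant up puppet      # assumed: a puppetmaster VM is provisioned first
    vagrant up             # then the gluster hosts, configured by puppet-gluster
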
21:58 recidive joined #gluster
22:08 jbd1 semiosis: I understand the reasoning for opening a bug there-- the current instructions for ubuntu say to download the debs directly from download.gluster.org and the ticket opener is suggesting the user add-apt-repository your ppa
22:08 jbd1 semiosis: is your PPA "official" enough to be the documented way to install glusterfs on ubuntu?
22:24 Gugge joined #gluster
22:25 purpleidea jbd1: the semiosis's ppa is the shit. it's the only ubuntu gluster packages i use :)
22:31 stickyboy joined #gluster
22:31 stickyboy joined #gluster
22:37 semiosis jbd1: SO OFFICIAL!!!1ELEVEN
22:37 semiosis http://download.gluster.org/pub/gluster/glusterfs/3.5/3.5.0/Ubuntu/Ubuntu.README
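
The PPA route under discussion looks roughly like the following on Ubuntu; the PPA name here is an assumption based on semiosis's Launchpad packages and should be checked against the README linked above:

    # add the GlusterFS 3.5 PPA maintained by semiosis, then install
    sudo add-apt-repository ppa:semiosis/ubuntu-glusterfs-3.5
    sudo apt-get update
    sudo apt-get install glusterfs-server glusterfs-client
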
22:44 marcoceppi semiosis: did your packages ever make it in to trusty?
22:44 semiosis 3.4.2 did, http://packages.ubuntu.com/source/trusty/glusterfs
22:45 glusterbot Title: Ubuntu – Details of source package glusterfs in trusty (at packages.ubuntu.com)
22:47 [o__o] joined #gluster
22:48 sjm joined #gluster
22:49 semiosis marcoceppi: universe though, not main
22:50 marcoceppi semiosis: ah
22:50 semiosis marcoceppi: glusterfs didnt pass the code review :/
22:50 marcoceppi dang
22:50 mjsmith2 joined #gluster
22:52 semiosis i've seen some activity on the dev ML (though haven't been following closely) about it so i think maybe someone's working on the issues
22:54 fidevo joined #gluster
22:58 [o__o] joined #gluster
23:01 gdubreui joined #gluster
23:02 MugginsM joined #gluster
23:28 sjm left #gluster
23:44 theron joined #gluster
23:50 jbd1 purpleidea: Same here, I can't imagine using anything else.
23:51 theron joined #gluster
23:55 theron_ joined #gluster
