
IRC log for #gluster, 2014-06-24


All times shown according to UTC.

Time Nick Message
00:10 and` joined #gluster
00:20 JoeJulian hagarth: "Another crawl is in progress" even though there isn't one. How can I clear that?
00:20 JoeJulian Can clear-locks clear that lock?
00:20 JoeJulian a2: ^
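
(For reference, the generic clear-locks CLI JoeJulian mentions takes roughly the following form; the volume name "myvol" and file name "somefile" below are hypothetical, and the log never confirms whether clear-locks can actually release the stuck-crawl state.)

    # clear granted entry locks for a name under the volume root
    gluster volume clear-locks myvol / kind granted entry somefile
    # clear all inode locks on a path, using the 0,0-0 whole-file range
    gluster volume clear-locks myvol /somefile kind all inode 0,0-0
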
00:23 bala joined #gluster
00:29 Ark joined #gluster
00:37 gmcwhistler joined #gluster
00:51 JoeJulian @path or prefix
00:51 glusterbot JoeJulian: http://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/
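
(The fix that blog post walks through boils down to clearing the leftover gluster metadata from the old brick directory before reusing it; a minimal sketch, assuming a brick path of /data/brick1, which is hypothetical here.)

    # strip the xattrs that mark the directory as already belonging to a volume
    setfattr -x trusted.glusterfs.volume-id /data/brick1
    setfattr -x trusted.gfid /data/brick1
    # remove the hidden gfid store left behind by the old volume
    rm -rf /data/brick1/.glusterfs
    # then retry the volume create / add-brick that complained
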
00:59 haomaiwang joined #gluster
01:02 gmcwhistler joined #gluster
01:07 Paul-C joined #gluster
01:12 mjsmith2 joined #gluster
01:15 davinder15 joined #gluster
01:20 gildub joined #gluster
01:39 MacWinner joined #gluster
01:52 baojg joined #gluster
02:06 sjm joined #gluster
02:32 baojg joined #gluster
02:46 bharata-rao joined #gluster
02:54 jvandewege_ joined #gluster
02:58 sspinner joined #gluster
03:16 dusmant joined #gluster
03:27 baojg joined #gluster
03:31 plarsen joined #gluster
03:39 kanagaraj joined #gluster
03:39 RameshN joined #gluster
03:46 kdhananjay joined #gluster
03:51 itisravi joined #gluster
03:52 shubhendu__ joined #gluster
03:58 nbalachandran joined #gluster
04:03 kshlm joined #gluster
04:09 kdhananjay joined #gluster
04:12 RameshN joined #gluster
04:12 spandit joined #gluster
04:21 nshaikh joined #gluster
04:24 sac`away` joined #gluster
04:24 prasanth|brb joined #gluster
04:24 kanagaraj_ joined #gluster
04:24 nbalachandran_ joined #gluster
04:24 kaushal_ joined #gluster
04:24 baojg joined #gluster
04:24 spandit_ joined #gluster
04:24 RameshN_ joined #gluster
04:24 itisravi_ joined #gluster
04:25 pureflex joined #gluster
04:33 prasanthp joined #gluster
04:37 coredump joined #gluster
04:37 ppai joined #gluster
04:39 psharma joined #gluster
04:40 vimal joined #gluster
04:44 bala joined #gluster
04:45 shubhendu__ joined #gluster
04:46 nshaikh joined #gluster
04:47 ramteid joined #gluster
04:53 ndarshan joined #gluster
04:57 lalatenduM joined #gluster
05:00 rjoseph joined #gluster
05:01 gmcwhistler joined #gluster
05:07 burnalot joined #gluster
05:10 bala joined #gluster
05:11 dusmant joined #gluster
05:11 kanagaraj joined #gluster
05:12 karnan joined #gluster
05:16 shubhendu__ joined #gluster
05:19 saurabh joined #gluster
05:20 hagarth joined #gluster
05:21 kumar joined #gluster
05:26 aravindavk joined #gluster
05:26 haomaiwang joined #gluster
05:30 vkoppad joined #gluster
05:32 kdhananjay joined #gluster
05:37 vkoppad joined #gluster
05:39 koobs joined #gluster
05:40 kanagaraj joined #gluster
05:42 baojg joined #gluster
05:44 vkoppad joined #gluster
05:48 vpshastry joined #gluster
05:51 rjoseph joined #gluster
05:51 aravindavk joined #gluster
05:52 dusmant joined #gluster
05:54 rastar joined #gluster
06:03 y4m4 joined #gluster
06:03 y4m4 JustinClift: ping
06:03 glusterbot y4m4: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
06:04 hagarth glusterbot: good one ;)
06:04 hagarth y4m4: what's up?
06:05 y4m4 hagarth: http://build.gluster.org/job/netbsd-smoke/3/console - do you know this?
06:05 glusterbot Title: netbsd-smoke #3 Console [Jenkins] (at build.gluster.org)
06:05 y4m4 hagarth: when did we get this one in?
06:06 hagarth y4m4: the first run happened yesterday at Jun 23, 2014 10:00:46 PM
06:07 y4m4 hagarth: hmm - need to speak to JustinClift
06:07 hagarth or rather a few minutes back today
06:08 gildub joined #gluster
06:08 y4m4 hagarth: he must have done that, since the FreeBSD and FreeNAS team is getting some tests done, so in fact the smoke tests will be done across BSDs too :-)
06:09 hagarth y4m4: awesome!
06:09 hagarth y4m4: we need to find a mac too :)
06:10 y4m4 hagarth: there is a mac i am using but its on RH internal network
06:11 hagarth y4m4: that also might work as long as it votes on gerrit
06:16 davinder15 joined #gluster
06:16 itisravi joined #gluster
06:16 vijaykumar joined #gluster
06:16 darshan joined #gluster
06:16 kaushal_ joined #gluster
06:16 kanagaraj_ joined #gluster
06:16 lalatenduM_ joined #gluster
06:16 shubhendu_ joined #gluster
06:16 karnan_ joined #gluster
06:16 vpshastry1 joined #gluster
06:16 sac`away joined #gluster
06:16 ppai_ joined #gluster
06:16 rastar_ joined #gluster
06:16 prasanth|afk joined #gluster
06:16 spandit__ joined #gluster
06:16 RameshN__ joined #gluster
06:17 rgustafs joined #gluster
06:17 saurabh joined #gluster
06:17 kdhananjay1 joined #gluster
06:17 RameshN joined #gluster
06:17 bala2 joined #gluster
06:18 kumar joined #gluster
06:32 aravindavk joined #gluster
06:32 rjoseph joined #gluster
06:34 nbalachandran_ joined #gluster
06:39 dusmant joined #gluster
06:52 rtalur__ joined #gluster
06:52 shubhendu__ joined #gluster
06:52 aravinda_ joined #gluster
06:52 kaushal_ joined #gluster
06:52 ppai__ joined #gluster
06:52 vijay__ joined #gluster
06:52 lala__ joined #gluster
06:52 kanagaraj__ joined #gluster
06:52 spandit_ joined #gluster
06:52 sac`away` joined #gluster
06:52 vpshastry joined #gluster
06:52 RameshN_ joined #gluster
06:52 dusmantkp_ joined #gluster
06:52 prasanth|brb joined #gluster
06:52 itisravi_ joined #gluster
06:52 bala joined #gluster
06:53 rjoseph1 joined #gluster
06:53 kdhananjay joined #gluster
06:53 ndarshan joined #gluster
06:53 prasanthp joined #gluster
06:53 hagarth joined #gluster
06:57 RobertLaptop joined #gluster
06:57 _polto_ joined #gluster
06:57 _polto_ joined #gluster
06:58 ekuric joined #gluster
06:59 eseyman joined #gluster
07:02 vincent_vdk joined #gluster
07:07 glusterbot New news from newglusterbugs: [Bug 1112518] [FEAT/RFE] "gluster volume restart" cli option <https://bugzilla.redhat.com/show_bug.cgi?id=1112518> || [Bug 1112531] Dist-geo-rep : after deletion of files on master and parallely taking snapshot, history crawl gets stuck during processing changelogs in one of the nodes. <https://bugzilla.redhat.com/show_bug.cgi?id=1112531>
07:07 ctria joined #gluster
07:11 psharma joined #gluster
07:13 jvandewege joined #gluster
07:18 mbukatov joined #gluster
07:22 vincent_vdk joined #gluster
07:22 vincent_vdk joined #gluster
07:28 JordanHackworth_ joined #gluster
07:29 ktosiek joined #gluster
07:33 fsimonce joined #gluster
07:33 d-fence joined #gluster
07:50 liquidat joined #gluster
08:06 ctria joined #gluster
08:19 Nightshader If the glusterbot shows fixed bugs, when are new packages created?
08:33 nshaikh joined #gluster
08:48 NuxRo joined #gluster
08:48 hagarth joined #gluster
08:51 ricky-ti1 joined #gluster
08:53 calum_ joined #gluster
08:56 overclk joined #gluster
08:56 ndevos Nightshader: when an alpha/beta/final release has been made, the exact version should be set in the bug ('fixed in version' field, bug gets closed), packages for different distributions mostly land within a week
08:58 Nightshader ndevos: Thank you for the answer. We use http://download.gluster.org/pub/gluster/glusterfs/3.5/ but those are still from 17 april.
08:58 glusterbot Title: Index of /pub/gluster/glusterfs/3.5 (at download.gluster.org)
08:59 Nightshader ndevos: When should these get updated?
09:00 Nightshader I saw several issues being solved which we would like to avoid running into ( memleak in quota etc )
09:03 fraggeln Is anyone using gluster here as a backend for webapplications?
09:04 Nightshader2 joined #gluster
09:05 ndevos Nightshader2: ah, there are ,,(qe releases) too
09:05 glusterbot Nightshader2: I do not know about 'qe releases', but I do know about these similar topics: 'qa releases'
09:05 ndevos @qa releases
09:05 glusterbot ndevos: QA releases are now available here: http://goo.gl/c1f0UD -- the old QA release site is here: http://goo.gl/V288t3
09:06 ndevos Nightshader2: there you can find alpha/beta packages
09:20 ctria joined #gluster
09:29 baojg_ joined #gluster
09:37 ninkotech joined #gluster
09:37 ninkotech_ joined #gluster
09:40 RameshN joined #gluster
09:43 _polto_ joined #gluster
10:20 hagarth joined #gluster
10:21 dusmantkp_ joined #gluster
10:26 vimal joined #gluster
10:26 saurabh joined #gluster
10:31 Slashman joined #gluster
10:33 vimal joined #gluster
10:54 haomaiwang joined #gluster
10:55 shylesh__ joined #gluster
10:59 rwheeler joined #gluster
10:59 meghanam joined #gluster
11:02 meghanam_ joined #gluster
11:09 Slashman joined #gluster
11:14 kkeithley1 joined #gluster
11:15 glusterbot New news from resolvedglusterbugs: [Bug 1046624] Unable to heal symbolic Links <https://bugzilla.redhat.com/show_bug.cgi?id=1046624> || [Bug 1091340] Doc: Add glfs_fini known issue to release notes 3.5 <https://bugzilla.redhat.com/show_bug.cgi?id=1091340> || [Bug 1064096] The old Python Translator code (not Glupy) should be removed <https://bugzilla.redhat.com/show_bug.cgi?id=1064096> || [Bug 1071800] 3.5.1 Tracker <http
11:16 Nightshader joined #gluster
11:21 keytab joined #gluster
11:24 edward1 joined #gluster
11:28 dtrainor joined #gluster
11:37 ppai joined #gluster
11:51 nthomas joined #gluster
11:52 LebedevRI joined #gluster
11:53 edong23 joined #gluster
11:53 diegows joined #gluster
11:55 Nightshader Should ACL's just work using Samba3/4 on GlusterFS? ( I keep getting "access denied" errors using Samba 4.1.8 on Gluster 3.5 using XFS bricks )
11:59 gmcwhistler joined #gluster
11:59 karnan joined #gluster
12:04 lpabon joined #gluster
12:06 chirino joined #gluster
12:08 glusterbot New news from newglusterbugs: [Bug 1110262] suid,sgid,sticky bit on directories not preserved when doing add-brick <https://bugzilla.redhat.com/show_bug.cgi?id=1110262>
12:08 d-fence joined #gluster
12:15 itisravi_ joined #gluster
12:16 baojg joined #gluster
12:17 baojg joined #gluster
12:18 sage___ joined #gluster
12:34 stickyboy 'No. of heal failed entries: 472405'   \o/!!!
12:34 stickyboy At least I'm not split brain. :D
12:36 sroy_ joined #gluster
12:38 glusterbot New news from newglusterbugs: [Bug 1107649] glusterd fails to spawn brick , nfs and self-heald processes <https://bugzilla.redhat.com/show_bug.cgi?id=1107649>
12:42 fraggeln stickyboy: Im sorry mate
12:45 stickyboy fraggeln: :P
12:46 stickyboy fraggeln: I rsync'd as much of the data to this failed brick as possible.
12:47 stickyboy But it's really thrashing now... and my users are trying to use it.
12:47 stickyboy Argh.
12:48 fraggeln bennyturns, JoeJulian: After some testing and tuning, I managed to get around 150mbit/s write to 3 nodes, so, a total of 450mbit/s. sometimes even up to 200mbit.
12:48 jvandewege_ joined #gluster
12:51 coredump joined #gluster
12:54 hagarth joined #gluster
12:57 japuzzo joined #gluster
12:59 ekuric joined #gluster
12:59 julim joined #gluster
13:01 theron joined #gluster
13:01 gmcwhistler joined #gluster
13:06 baojg joined #gluster
13:08 glusterbot New news from newglusterbugs: [Bug 1061229] glfs_fini leaks threads <https://bugzilla.redhat.com/show_bug.cgi?id=1061229>
13:13 kshlm joined #gluster
13:13 fuz1on hello guys
13:13 fuz1on i got a question for u
13:14 fuz1on have you ever encountered high/bad response times on a replicated volume via swiftonfile / gluster-swift ?
13:14 fuz1on i dont experience that kind of behavior when i create a container
13:19 fuz1on my curl commands report that it can take up to 2 seconds to complete a PUT request
13:19 fuz1on i'm on a 2 x 2 distribute-replicate volume
13:20 Ark joined #gluster
13:27 mjsmith2 joined #gluster
13:29 an joined #gluster
13:32 theron joined #gluster
13:40 theron joined #gluster
13:42 _polto_ joined #gluster
13:42 _polto_ joined #gluster
13:42 tom[] operational question. i have 3 webservers in a load-sharing/redundant pool. they use gluster to replicate user-uploaded files. each is a gluster server and a client of itself. peers know each other by ip address
13:42 tom[] now a server blows up, so we build a new one from scratch to replace it and it takes the old ip address. before installing glusterfs on the replacement, the other nodes report "peer probe: success: host 10.1.1.7 port 24007 already in peer list."
13:42 tom[] what do i need to do to get the replacement into the gluster?
13:44 primechuck joined #gluster
13:44 theron joined #gluster
13:44 primechuck joined #gluster
13:50 theron joined #gluster
13:51 theron joined #gluster
13:53 davinder15 joined #gluster
13:59 NuxRo tom[]: you should look at replace-brick
13:59 ws2k33 joined #gluster
13:59 NuxRo and perhaps use a different IP address
14:00 NuxRo for the new node, that is, so it is not mistaken for the old one
14:00 tom[] NuxRo: i've been reading http://gluster.org/community/documentation/index.php/Gluster_3.2:_Brick_Restoration_-_Replace_Crashed_Server
14:00 glusterbot Title: Gluster 3.2: Brick Restoration - Replace Crashed Server - GlusterDocumentation (at gluster.org)
14:01 tom[] i'm not smart enough to cope with all this in my ansible playbooks
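
(The document tom[] is reading amounts to giving the replacement server the old peer UUID so the rest of the cluster accepts it again. A rough sketch follows, under the assumption that a surviving peer still has the dead node's entry in /var/lib/glusterd/peers/; the peer address 10.1.1.5 and volume name "myvol" are hypothetical.)

    # on a surviving peer, find the UUID the cluster still holds for the dead node
    grep -ir 10.1.1.7 /var/lib/glusterd/peers/

    # on the rebuilt server (same hostname/IP), give glusterd that UUID
    service glusterd stop
    vi /var/lib/glusterd/glusterd.info     # set UUID=<uuid found above>
    service glusterd start

    # probe a surviving peer so peer and volume config sync back, then restart once more
    gluster peer probe 10.1.1.5
    service glusterd restart

    # once the brick directory is back in place, heal the volume onto it
    gluster volume heal myvol full

(On newer releases the empty brick directory may also need its trusted.glusterfs.volume-id xattr restored, as JoeJulian points out further down in this log.)
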
14:01 sjm joined #gluster
14:01 ctria joined #gluster
14:01 ws2k33 Hello, i am investigating the option to use gluster, but is there some kind of operating system/web interface to make a server a gluster storage server? i just found a video on youtube showing an installation and a web interface
14:03 hchiramm__ joined #gluster
14:07 daMaestro joined #gluster
14:09 NuxRo tom[]: do it manually then?
14:10 NuxRo also, i know purpleidea has been working hard on Puppet stuff, maybe there is already something like this from him
14:10 NuxRo https://github.com/purpleidea/puppet-gluster
14:10 tom[] i guess so
14:10 glusterbot Title: purpleidea/puppet-gluster · GitHub (at github.com)
14:12 hagarth ws2k33: you can use ovirt to manage gluster using a GUI
14:17 jobewan joined #gluster
14:19 ws2k33 yeah well i just saw some other people on youtube who had 4 fedora boxes running; he did a gluster --deploy, the console showed an ip and a key, he entered the ip and key in his browser and deployed the gluster from his browser
14:19 theron joined #gluster
14:22 an joined #gluster
14:23 theron joined #gluster
14:24 wushudoin joined #gluster
14:28 NuxRo ws2k33: you may be referring to https://forge.gluster.org/gluster-deploy/pages/Home
14:28 glusterbot Title: gluster-deploy - Home - Open wiki - Gluster Community Forge (at forge.gluster.org)
14:30 jag3773 joined #gluster
14:32 asku joined #gluster
14:44 Humble_ joined #gluster
14:45 ndk joined #gluster
14:48 deeville joined #gluster
14:49 deeville Is there a way to convert a 4-node distributed-replicated volume to replicated?
14:51 jobewan joined #gluster
14:52 jobewan joined #gluster
14:55 jobewan joined #gluster
14:57 NuxRo deeville: can you explain what you have now and what you want to achieve?
14:58 lmickh joined #gluster
14:58 jobewan joined #gluster
15:01 tdasilva joined #gluster
15:01 deeville NuxRo, a bit of research made me think I'm thinking in the wrong direction. Basically, I have a 2 x 2 = 4 node distributed-replicate volume, I'm using the volume to store KVM images, and it makes it easier to (live) migrate VMs. What I didn't realize is that depending on where you create the VMs in the first place, it may or may not be on the local brick in a distributed-replicate volume. I.e. I'm running a VM on one node, but the VM image is actually loaded over the gluster network from a different node. (/var/kvm is a link to /mnt/gluster0/kvm)
15:04 deeville NuxRo, I want to switch to a replicated volume, so that all VM images exist on all the bricks
15:06 _polto_ joined #gluster
15:07 deeville I have a couple of VMs that are not starting, the images have to be mounted via the glusterfs client. It's possible that the cause is something else
15:10 Ark joined #gluster
15:12 mortuar joined #gluster
15:17 NuxRo deeville: your glusterfs client will always write to all replication partners, so even if you have a local copy it's unlikely you'll see any speed-up
15:18 NuxRo with a 4x replica factor you (the client) will write 4 ways at the same time, greatly increasing latency and decreasing speed
15:18 Ark joined #gluster
15:19 NuxRo the best thing you could do to increase performance is to leave things as they are but use the qemu gfapi interface to talk to gluster instead of the fuse mount
15:20 NuxRo of course, you need gluster 3.4+ and kvm/qemu that can talk to it
15:20 NuxRo (centos 6.5 satisfies both afaik)
15:22 psharma joined #gluster
15:22 deeville hmm..this qemu gfapi interface is new to me…I'm running 6.5 and gluster 3.4.1
15:22 deeville NuxRo, thanks, I'll do some research on it
15:23 deeville NuxRo, what advantages does qemu gfapi interface give over the native fuse client? Besides possibly solving my problem
15:30 theron joined #gluster
15:34 haomaiwang joined #gluster
15:40 GabrieleV left #gluster
15:40 NuxRo deeville: have a look at http://www.gluster.org/tag/libgfapi/
15:40 deeville NuxRo, thanks!
15:48 NuxRo you're welcome
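
(As a concrete illustration of NuxRo's gfapi suggestion: with a gluster-enabled qemu, images can be addressed with a gluster:// URL instead of a path on the FUSE mount. A minimal sketch, assuming a host "gluster1" and a volume "kvmvol", both hypothetical.)

    # create a qcow2 image directly on the gluster volume over libgfapi
    qemu-img create -f qcow2 gluster://gluster1/kvmvol/vm1.qcow2 20G

    # boot a guest against it, bypassing the FUSE mount entirely
    qemu-system-x86_64 -enable-kvm -m 2048 \
        -drive file=gluster://gluster1/kvmvol/vm1.qcow2,if=virtio,format=qcow2

(libvirt can express the same thing with a <disk type='network'> source that uses protocol='gluster'.)
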
16:05 an joined #gluster
16:13 RameshN joined #gluster
16:18 glusterbot New news from resolvedglusterbugs: [Bug 1061229] glfs_fini leaks threads <https://bugzilla.redhat.com/show_bug.cgi?id=1061229>
16:20 cfeller joined #gluster
16:24 Matthaeus joined #gluster
16:29 hchiramm_ joined #gluster
16:31 doo joined #gluster
16:31 Mo_ joined #gluster
16:40 Matthaeus joined #gluster
16:49 glusterbot New news from resolvedglusterbugs: [Bug 1112260] build: Glusterfs library file not compiled with RELRO or PIE <https://bugzilla.redhat.com/show_bug.cgi?id=1112260>
17:11 MacWinner joined #gluster
17:12 troj joined #gluster
17:14 zaitcev joined #gluster
17:18 purpleidea tom[]: hey
17:19 purpleidea tom[]: missed the context, but is there something you want puppet-gluster to do ?
17:20 deeville NuxRo, it was a split-brain issue LOL.
17:21 NuxRo :)
17:21 NuxRo purpleidea: he wants to replace a dead server with ansible, suggested he looked your puppet stuff up for inspiration
17:22 NuxRo not sure if you even deal with this case, but it seemed like good advice at the time :)
17:22 purpleidea NuxRo: ah, okay :)
17:23 purpleidea NuxRo: yeah either it will do what you/he wants (can handle some cases of this) or it could maybe be patched to support this type of thing... depending on what you want, some might prefer a more manual approach if data is involved (i don't typically say this ;))
17:23 NuxRo hehe
17:23 NuxRo ok
17:26 purpleidea NuxRo: to provide more context... as long as puppet has some metric for deciding a server is "dead" then it's not too many patches away...
17:31 NuxRo purpleidea: sounds good
17:31 * NuxRo doesn't use puppet /disclaimer
17:31 NuxRo :)
17:40 chirino joined #gluster
18:01 chirino joined #gluster
18:09 ramteid joined #gluster
18:10 SpeeR Does anyone know what version of gluster was DRC first implemented in and turned on by default? did it just start with 3.5?
18:11 ndevos SpeeR: yes, DRC for NFS in Gluster was introduced and enabled by default in 3.5.0; with 3.5.1 it is disabled by default
18:12 SpeeR excellent thanks ndevos, I had to disable it on ours, but the bosses wanted to know what we are losing by having it off
18:13 ndevos SpeeR: with DRC enabled you can get some performance improvements when you use NFS, unfortunately there have been some memory leaks and other instabilities
18:14 SpeeR yeh, one of my bricks died yesterday from it, and caused a split brain. First time I've had to deal with that... I've been lucky
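
(The setting being discussed here is the per-volume nfs.drc option; disabling it looks like this, with the volume name "myvol" hypothetical.)

    # disable the NFS duplicate request cache on a volume
    gluster volume set myvol nfs.drc off
    # the change shows up under "Options Reconfigured"
    gluster volume info myvol
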
18:15 Licenser left #gluster
18:17 ndevos SpeeR: depending on what info you have available, you may want to file a bug so that we can improve it, a coredump or backtrace in the nfs.log would be needed
18:17 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
18:17 psharma joined #gluster
18:18 * ndevos leaves for the day, it's past 8pm here already
18:18 SpeeR thanks for your help , have a good evening
18:23 ghenry joined #gluster
18:23 ghenry joined #gluster
18:30 coredump hi
18:30 glusterbot coredump: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
18:30 coredump jesus glusterbot chill out
18:34 irated joined #gluster
18:35 irated Should you use gluster nfs as a datastore for vmware?
18:38 sjm joined #gluster
19:05 sjm joined #gluster
19:09 jag3773 joined #gluster
19:23 dtrainor joined #gluster
19:32 coredump joined #gluster
19:36 sjm joined #gluster
19:36 zerick joined #gluster
19:49 glusterbot New news from resolvedglusterbugs: [Bug 988182] OOM: observed for fuse client process (glusterfs) when one brick from replica pairs were offlined and high IO was in progress from client <https://bugzilla.redhat.com/show_bug.cgi?id=988182>
19:53 n0de I am sure this has been asked a ton of times, but here goes again. I have a corrupted raid6 on one of my bricks (2 failed drives, which I was unable to ddrescue successfully). How do I go about getting the content resync'd onto the failed brick once I have the faulty drives replaced and the raid back to optimal?
19:53 n0de hostname and UUID of the brick will remain exactly the same
19:56 dtrainor joined #gluster
20:05 sjm joined #gluster
20:07 _polto_ joined #gluster
20:07 _polto_ joined #gluster
20:09 gildub joined #gluster
20:09 glusterbot New news from newglusterbugs: [Bug 1112844] OOM: observed for fuse client process (glusterfs) when one brick from replica pairs were offlined and high IO was in progress from client <https://bugzilla.redhat.com/show_bug.cgi?id=1112844>
20:10 silky joined #gluster
20:10 rotbeard joined #gluster
20:11 d-fence joined #gluster
20:13 coredump joined #gluster
20:18 JoeJulian Hrm... Why would a client not do a self-heal when the xattrs clearly indicate one being necessary...
20:18 semiosis cant reach a brick
20:20 sputnik13 joined #gluster
20:25 dtrainor joined #gluster
20:35 decimoe joined #gluster
20:39 sjm joined #gluster
20:42 n0de JoeJulian: so basically no action will be needed on my end? just wipe the raid and Gluster will detect the missing xattrs and do its self-heal?
20:44 n0de This is a Distributed-Replicate volume btw
20:50 nueces joined #gluster
20:56 VerboEse joined #gluster
20:56 Georgyo joined #gluster
20:58 semiosis n0de: yes
20:59 n0de very nice, thanks!
20:59 semiosis simply put, if you replace a disk gluster will fill it up from a replica
20:59 semiosis just make sure everything else is the same... mount point, etc
21:00 semiosis might want to firewall that brick port so the healing doesn't even try to start until you're ready
21:03 mortuar_ joined #gluster
21:08 primechuck joined #gluster
21:09 primechuck joined #gluster
21:11 sjm joined #gluster
21:14 VerboEse joined #gluster
21:21 JoeJulian n0de: Actually, you will have to set the trusted.glusterfs.volume-id on the new brick root.
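
(A minimal sketch of what JoeJulian describes, assuming the brick lives at /export/brick1 and the volume is called "myvol"; both names are hypothetical.)

    # read the volume-id from a surviving replica brick on another server
    getfattr -n trusted.glusterfs.volume-id -e hex /export/brick1

    # on the rebuilt, empty brick root, set the same value
    setfattr -n trusted.glusterfs.volume-id -v 0x<hex-value-from-above> /export/brick1

    # restart the brick process and kick off a full self-heal
    gluster volume start myvol force
    gluster volume heal myvol full
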
21:22 partner noticed the debian and ubuntu dirs went to "old" on download.gluster.org, whats your advice, should i grab the previous package and patch and build own debs ie. no more "support" for those platforms?
21:22 JoeJulian @ppa
21:22 glusterbot JoeJulian: The official glusterfs packages for Ubuntu are available here: 3.4 stable: http://goo.gl/u33hy -- 3.5 stable: http://goo.gl/cVPqEH -- introducing QEMU with GlusterFS 3.4 support: http://goo.gl/7I8WN4
21:23 partner ok, so no more debian, roger that
21:24 JoeJulian Really?
21:24 JoeJulian semiosis?
21:26 VerboEse joined #gluster
21:27 partner well i don't know, that's what i came to ask if you guys might know. if its no then at least i know to start working on it, was just assuming it would end up there as previously
21:28 partner if i'm the only one using debian i'm not asking to build it for me of course
21:29 partner maybe i could try out some ppa, just need to figure out which one of those plenty ubuntu versions i should target for wheezy
21:33 JoeJulian I doubt there's that much difference. I'd try whatever the latest is.
21:37 partner hmm, won't help, can't see 3.4.4-2 there, latest build 10 Jun but -2 was released on 19th
21:38 partner ok i will try to patch and build it, just running out of time before vacations, busy with all the crap on the table
21:38 n0de Thanks guys
21:39 JoeJulian Wait... he has a 3.4.4-2 build...
21:39 JoeJulian partner: this one https://launchpad.net/~semiosis/+archive/ubuntu-glusterfs-3.4qa
21:39 glusterbot Title: ubuntu-glusterfs-3.4qa : semiosis (at launchpad.net)
21:40 partner just realized its under qa.. thanks
21:40 sjm joined #gluster
21:40 MacWinner joined #gluster
21:45 theron joined #gluster
21:54 partner doesn't work with wheezy as the libc6 is a minor version too old; no backport for it, so it would require grabbing that from jessie
21:54 semiosis what about debian?
21:55 semiosis partner: you need 3.4.4-2 for wheezy right?
21:55 semiosis i'll work on that today
21:55 semiosis or tomorrow
21:57 partner well i got 3.4.4-1 from the community so was just wondering if the support for debian was dropped after that
21:57 semiosis actually, let me ask you this... are you affected by the bug this patch addresses?
21:57 JoeJulian @semiosis++
21:57 glusterbot JoeJulian: semiosis's karma is now 3
21:57 partner i can manage if i am the only user with wheezy
21:57 JoeJulian @semiosis+=1000000
21:57 JoeJulian damn...
21:58 semiosis you only need 3.4.4-2 if you need that rebalance patch
21:58 semiosis otherwise, just wait for 3.4.5
21:58 partner from what i can tell i'm affected by several bugs between the current version in use and this much-awaited -2
21:58 semiosis it's just one patch
21:58 JoeJulian So if you ever plan on rebalancing.
21:58 partner only things i do is add-brick && rebalance
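
(For context, the add-brick && rebalance workflow partner mentions is roughly the following; volume name and brick paths are hypothetical, and bricks are added in multiples of the replica count.)

    # grow the distribute set by another replica pair
    gluster volume add-brick myvol server5:/export/brick1 server6:/export/brick1
    # spread existing data onto the new bricks and watch progress
    gluster volume rebalance myvol start
    gluster volume rebalance myvol status
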
21:58 semiosis heh, ok, then you might need this
21:58 semiosis i'm pretty disappointed in this whole -2 thing
21:59 semiosis there should be a 3.4.5 release if this patch is so important
21:59 partner if its not that far away then maybe i could wait
21:59 partner i don't want to bother too much with my issues, i assume there's not many debian users out there
21:59 semiosis partner: you're representative of them :)
21:59 semiosis and i want to help
22:00 partner thanks, much appreciated :)
22:00 semiosis but i've been really busy and haven't been diligent building packages because 1. it wasn't a real release, and 2. no one asked (until now)
22:00 dtrainor joined #gluster
22:00 partner and note my careful approach, don't want to push or rush anything
22:00 partner yeah, i fully understand
22:00 semiosis my policy generally is i dont roll patches for you, but when the gluster devs rolled a patched rpm I felt I should build the debs at least when people asked for them
22:01 semiosis but i really just want there to be another release
22:01 semiosis </rant>
22:01 JoeJulian @karma
22:01 glusterbot JoeJulian: Highest karma: "semiosis" (3) and "jjulian" (0). Lowest karma: "jjulian" (0) and "semiosis" (3).
22:02 semiosis partner: if i dont say "no" then please do bug me until i do it :)
22:02 semiosis i really will try to get to it today or tmrw
22:03 partner no hurry :)
22:06 glusterbot joined #gluster
22:07 semiosis afk
22:08 Pupeno joined #gluster
22:08 partner maybe fpm or similar would ease up all the packagers' work, i'm assuming everybody does it now in their own corners
22:09 partner the preface starts with: "Package maintainers work hard and take a lot of shit. You can't please everyone. So, if you're a maintainer: Thanks for maintaining packages!"
22:09 partner https://github.com/jordansissel/fpm
22:09 glusterbot Title: jordansissel/fpm · GitHub (at github.com)
22:10 sjm joined #gluster
22:10 JoeJulian Real familiar with whack and his fpm and, of course, logstash. fpm doesn't do releaseable rpms. Not sure about debs.
22:11 JoeJulian @semiosis++
22:11 glusterbot JoeJulian: semiosis's karma is now 2000002
22:11 JoeJulian There, that's more like it.
22:19 partner :)
22:20 partner alright, time to get some sleep, thanks again guys
22:32 Matthaeus I just placed an order for 6x 4 TB drives for my house.  Operation Matt Doesn't Need To Buy More Storage Until 2018 is underway.
22:34 irated how much did you pick those up for
22:34 irated ?
22:35 glusterbot New news from newglusterbugs: [Bug 1112518] [FEAT/RFE] "gluster volume restart" cli option <https://bugzilla.redhat.com/show_bug.cgi?id=1112518> || [Bug 1107649] glusterd fails to spawn brick , nfs and self-heald processes <https://bugzilla.redhat.com/show_bug.cgi?id=1107649> || [Bug 1075611] [FEAT] log: enhance gluster log format with message ID and standardize errno reporting <https://bugzilla.redhat.com/show_bug.cgi?id=
22:37 glusterbot New news from resolvedglusterbugs: [Bug 1046624] Unable to heal symbolic Links <https://bugzilla.redhat.com/show_bug.cgi?id=1046624> || [Bug 1112260] build: Glusterfs library file not compiled with RELRO or PIE <https://bugzilla.redhat.com/show_bug.cgi?id=1112260> || [Bug 1091340] Doc: Add glfs_fini known issue to release notes 3.5 <https://bugzilla.redhat.com/show_bug.cgi?id=1091340> || [Bug 1064096] The old Python Tran
22:39 SpeeR if I have an nfs share with 4 bricks, does the file transfer get handed off to whichever server the files are going to live on? Or does everything go through the mounted brick, and then to its final resting place?
22:40 theron joined #gluster
22:46 theron joined #gluster
22:50 irated SpeeR I know your alias now.
22:50 burn420 joined #gluster
22:50 * irated returns to idle
22:50 SpeeR stalker
22:50 Matthaeus irated: About $180 each.
22:50 Matthaeus 3 different sources, so shipping costs varied.
22:51 irated Matthaeus, not bad. I'm thinking about buying a few.
22:51 SpeeR if only ssd were that cheap
22:52 Matthaeus These are WD red 5400 rpm drives.
22:52 Matthaeus Supposedly meant for NAS.
22:55 vpshastry joined #gluster
23:03 sjm joined #gluster
23:13 mjsmith2 joined #gluster
23:24 Paul-C joined #gluster
23:45 sjm joined #gluster
23:46 Pupeno joined #gluster
23:49 gildub joined #gluster
23:49 sjm left #gluster
23:51 primechuck joined #gluster
23:51 Pupeno joined #gluster
23:52 gildub joined #gluster
