IRC log for #gluster, 2015-02-23

All times shown according to UTC.

Time Nick Message
00:12 diegows joined #gluster
00:13 sprachgenerator joined #gluster
00:48 nmbr_ joined #gluster
01:08 bala joined #gluster
01:25 kbyrne joined #gluster
01:27 n-st joined #gluster
01:53 nangthang joined #gluster
02:04 harish joined #gluster
02:47 ilbot3 joined #gluster
02:47 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
03:12 hagarth joined #gluster
03:12 bharata-rao joined #gluster
03:15 sprachgenerator joined #gluster
03:17 sprachgenerator_ joined #gluster
03:18 kdhananjay joined #gluster
03:36 nhayashi joined #gluster
03:37 rejy joined #gluster
03:46 hagarth joined #gluster
03:47 shubhendu joined #gluster
03:50 itisravi joined #gluster
03:51 kanagaraj joined #gluster
03:55 rjoseph joined #gluster
04:06 ndarshan joined #gluster
04:09 atinmu joined #gluster
04:10 dusmant joined #gluster
04:10 ppai joined #gluster
04:19 deepakcs joined #gluster
04:21 soumya joined #gluster
04:27 soumya joined #gluster
04:35 soumya_ joined #gluster
04:36 anoopcs joined #gluster
04:36 jiffin joined #gluster
04:44 nbalacha joined #gluster
04:45 kshlm joined #gluster
04:52 atalur joined #gluster
04:58 soumya_ joined #gluster
05:05 rafi joined #gluster
05:06 nbalacha joined #gluster
05:08 schandra joined #gluster
05:16 prasanth_ joined #gluster
05:25 karnan joined #gluster
05:28 lalatenduM joined #gluster
05:28 ppai joined #gluster
05:29 Manikandan joined #gluster
05:29 Manikandan_ joined #gluster
05:29 Manikandan_ left #gluster
05:29 prasanth_ joined #gluster
05:31 kumar joined #gluster
05:35 lalatenduM joined #gluster
05:40 lalatenduM joined #gluster
05:42 overclk joined #gluster
05:43 spandit joined #gluster
05:44 mikemol joined #gluster
05:48 hagarth joined #gluster
05:50 meghanam joined #gluster
05:54 soumya_ joined #gluster
05:58 ramteid joined #gluster
06:00 atalur joined #gluster
06:13 maveric_amitc_ joined #gluster
06:20 gem joined #gluster
06:21 vimal joined #gluster
06:28 raghu joined #gluster
06:31 sac`away joined #gluster
06:33 sac`away joined #gluster
06:36 dusmant joined #gluster
06:41 nshaikh joined #gluster
06:44 nshaikh joined #gluster
06:45 deepakcs joined #gluster
06:46 navid__ joined #gluster
06:47 nshaikh joined #gluster
06:53 atalur joined #gluster
06:54 sac`away joined #gluster
06:55 sac`away joined #gluster
06:57 nangthang joined #gluster
07:02 bala joined #gluster
07:16 glusterbot News from newglusterbugs: [Bug 1188184] Tracker bug :  NFS-Ganesha new features support for  3.7. <https://bugzilla.redhat.com/show_bug.cgi?id=1188184>
07:16 glusterbot News from newglusterbugs: [Bug 1195120] DHT + epoll : client crashed <https://bugzilla.redhat.com/show_bug.cgi?id=1195120>
07:16 mbukatov joined #gluster
07:17 DV__ joined #gluster
07:19 atalur joined #gluster
07:21 Manikandan joined #gluster
07:23 jtux joined #gluster
07:23 LebedevRI joined #gluster
07:32 sac`away joined #gluster
07:46 [Enrico] joined #gluster
07:51 Philambdo joined #gluster
07:52 kdhananjay1 joined #gluster
07:53 kdhananjay1 joined #gluster
07:55 kdhananjay joined #gluster
08:06 Manikandan joined #gluster
08:07 DV joined #gluster
08:13 anrao joined #gluster
08:25 stickyboy joined #gluster
08:25 papamoose joined #gluster
08:26 anrao joined #gluster
08:29 hybrid512 joined #gluster
08:47 ndarshan joined #gluster
08:50 kovshenin joined #gluster
08:59 R0ok_ joined #gluster
09:00 Norky joined #gluster
09:04 aravindavk joined #gluster
09:06 Slashman joined #gluster
09:11 NuxRo hi. how can I rotate glustershd.log without causing downtime to clients? what service needs to be HUP-ed?
09:11 NuxRo stuff in logrotate.d doesn't seem to take this file into account and it's grown a lot
09:11 ndevos NuxRo: I do not think HUP will work, logrotate should use the 'copytruncate' option?
09:14 NuxRo ndevos: there is no logrotate for this file
09:14 ndevos NuxRo: no? thats bad
09:14 NuxRo (v3.4.0)
09:16 tanuck joined #gluster
09:16 ndevos NuxRo: oh, in 3.5 its there, /var/log/glusterfs/*.log is the pattern, but it does not use copytruncate, it indeed just sends a HUP
09:16 liquidat joined #gluster
09:16 ndevos NuxRo: ah, right, the HUP should work for the glusterfs processes, but you need copytruncate for gfapi environments
09:20 o5k joined #gluster
09:20 hybrid512 joined #gluster
09:22 ndevos NuxRo: also, you just HUP this PID: /var/lib/glusterd/glustershd/run/glustershd.pid
09:22 o5k hello, I've done some research but I can't find a comparison between GlusterFS and object storage
09:22 hybrid512 joined #gluster
09:24 NuxRo ndevos: thanks
09:26 NuxRo ndevos: are brick processes also HUP-able?
09:27 kovshenin joined #gluster
09:29 hybrid512 joined #gluster
09:33 abyss^ I've done a glusterfs rebalance (full, with migration) and now I have very high load and customers complain about performance... Is this normal behavior during rebalancing? :/
09:39 deniszh joined #gluster
09:41 ndevos NuxRo: yes, looks like it
09:42 NuxRo thanks ndevos
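A sketch of the missing logrotate stanza being discussed, for 3.4 installs that lack one; the file name and schedule are assumptions, and copytruncate is the safer choice when gfapi-based processes also write to the log:

    # hypothetical /etc/logrotate.d/glustershd
    /var/log/glusterfs/glustershd.log {
        weekly
        rotate 4
        compress
        missingok
        notifempty
        # copytruncate avoids signalling the daemon at all; alternatively drop it
        # and HUP the self-heal daemon as ndevos suggests:
        #   kill -HUP "$(cat /var/lib/glusterd/glustershd/run/glustershd.pid)"
        copytruncate
    }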
09:46 ne2k joined #gluster
09:50 ne2k Hi all. I'm just starting out with virtualization. I've installed proxmox on four nodes and would like to set up some shared storage so that I can do live migration. My plan was to use gluster in virt-store mode with the vm image replicated across all four nodes and then access it through gluster:localhost/blah.
09:51 ne2k I only have a single disk in each machine. first question is, do I have to create a separate partition for the gluster brick or can I just put it on the large data partition I already have?
09:54 ne2k I found this guide http://blog.cyberlynx.eu/2014/proxmox-ve-3-3-2-node-cluster-with-glusterfs/ which suggests making a separate partition, but I can't work out if this is really necessary, because a gluster brick is just a point in the filesystem, isn't it?
09:59 shubhendu joined #gluster
09:59 jiffin joined #gluster
10:02 LebedevRI joined #gluster
10:02 NuxRo ne2k: you can simply use a directory on your existing partition (which is recommended to be XFS formatted)
10:03 NuxRo ne2k: also, replication 4-way will create a lot of latency and increased chances of split brain
10:04 NuxRo perhaps try replica 2 and see how that goes
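A sketch of what that might look like for the proxmox case, assuming two nodes named pve1/pve2 and a brick directory on the existing data partition (all names hypothetical):

    mkdir -p /data/gluster/vmstore                     # create the brick dir on both nodes
    # then, from pve1:
    gluster peer probe pve2
    gluster volume create vmstore replica 2 \
        pve1:/data/gluster/vmstore pve2:/data/gluster/vmstore
    gluster volume start vmstore
    gluster volume info vmstore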
10:04 ne2k NuxRo: where can I read about split brain? I've heard the name and can kind of guess what it's about, but would like to know more detail
10:05 NuxRo ne2k: there are several good google results, but try this
10:05 NuxRo http://joejulian.name/blog/glusterfs-split-brain-recovery-made-easy/
10:05 ne2k NuxRo: I've heard that for two-node replicas, DRBD is faster, but that gluster really comes into its own for larger replica sets
10:06 NuxRo ne2k: drbd is limited to 2 devices, gluster is not
10:07 7GHAAYHNJ joined #gluster
10:07 7YUAAFKZG joined #gluster
10:07 NuxRo large replica sets are possible of course, but think that every time you write something to it, it will do so on 4 separate machines
10:07 ne2k NuxRo exactly
10:07 NuxRo i hope you have 10 Gbps or infiniband at least
10:07 ne2k NuxRo: load is going to be pretty light
10:08 NuxRo ok, give it a go and see if it's fast enough for  you
10:08 ne2k I'm just trying to learn at the moment
10:09 NuxRo that's always good :)
10:09 ne2k NuxRo: regarding the original question, if my main data partition is currently ext4, would you recommend shrinking it and making a new xfs one for gluster?
10:09 ne2k or zfs even?
10:10 NuxRo XFS is the recommended FS, I think there were some issues with ext4 at some point but are likely fixed now, so depends how far you are willing to go
10:10 NuxRo no idea about ZFS
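Should ne2k prefer a dedicated XFS partition after all, the commonly documented brick preparation looks roughly like this; the device name is hypothetical, and the 512-byte inode size is the usual recommendation so gluster's extended attributes fit inside the inode:

    mkfs.xfs -i size=512 /dev/sdb1        # hypothetical device/partition
    mkdir -p /data/gluster
    echo '/dev/sdb1 /data/gluster xfs defaults 0 0' >> /etc/fstab
    mount /data/gluster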
10:16 LebedevRI joined #gluster
10:16 LebedevRI joined #gluster
10:19 kovshenin joined #gluster
10:22 tanuck joined #gluster
10:23 kshlm joined #gluster
10:34 awerner joined #gluster
10:39 ndarshan joined #gluster
10:39 shubhendu joined #gluster
10:39 soumya_ joined #gluster
10:42 soumya joined #gluster
10:49 kbyrne joined #gluster
10:54 Norky joined #gluster
11:10 prasanth_ joined #gluster
11:13 ne2k NuxRo: it seems as though ceph is more integrated into proxmox, and might be easier to manage. any thoughts?
11:15 karnan joined #gluster
11:15 NuxRo ne2k: I have not tried it yet, but it's on my todo list as well (as it looks like openstack swift may end up being replaced by Ceph as object store)
11:16 NuxRo so looks promising, though you may want to join #ceph in this case :)
11:17 ne2k NuxRo: yeah, sure ;-)
11:17 ne2k thanks for your help today
11:18 NuxRo yw
11:19 anrao joined #gluster
11:19 nishanth joined #gluster
11:24 schandra joined #gluster
11:29 soumya joined #gluster
11:35 maveric_amitc_ joined #gluster
11:36 diegows joined #gluster
11:36 ndevos NuxRo: have you not heard of this yet? https://github.com/stackforge/swiftonfile#swift-on-file
11:41 firemanxbr joined #gluster
11:42 firemanxbr joined #gluster
11:54 Pupeno joined #gluster
12:13 NuxRo ndevos: I did not know about it. Do I need to install whole openstack/swift to use it?
12:14 itisravi joined #gluster
12:15 ppai joined #gluster
12:15 NuxRo looks like it, sux.. was hoping for an object storage gateway to my existing FTP setup, so people can use both protocols while working with essentially the same files
12:15 ndevos NuxRo: I have no idea, I dont know much about openstack (or swift for that matter)
12:20 ndevos NuxRo: I *think* you can use the swift component without the rest of openstack? at least it was possible to do so, using tempauth instead of keystone
12:23 rjoseph joined #gluster
12:23 hagarth NuxRo: you wouldn't need an entire installation of openstack for using SwiftonFile
12:26 NuxRo hagarth: do you know which bits exactly I would need?
12:27 hagarth NuxRo: ppai or tdasilva can help with precise details
12:27 hagarth ppai: ^^
12:27 ppai NuxRo, yes you can use swift without rest of openstack
12:29 NuxRo ppai:
12:29 NuxRo I don't want to use swift, but am looking for a swift gateway/proxy to existing storage :)
12:30 NuxRo do you guys know of anything like this?
12:31 ppai NuxRo, swiftonfile depends on swift..it does provide access to files in existing glusterfs volume
12:32 ppai NuxRo, however you'd need to install swift and swiftonfile package, hence it's not strictly just a frontend/gateway
12:32 NuxRo thanks
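For reference, the "swift without the rest of openstack" setup mentioned above generally means running the swift proxy with tempauth instead of keystone; a stripped-down proxy-server.conf sketch (port and credentials are assumptions, not a complete deployment) might look like:

    [DEFAULT]
    bind_port = 8080

    [pipeline:main]
    pipeline = healthcheck cache tempauth proxy-server

    [app:proxy-server]
    use = egg:swift#proxy

    [filter:tempauth]
    use = egg:swift#tempauth
    user_test_tester = testing .admin

    [filter:healthcheck]
    use = egg:swift#healthcheck

    [filter:cache]
    use = egg:swift#memcache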
12:33 ira joined #gluster
12:35 sprachgenerator joined #gluster
12:36 harish joined #gluster
12:37 DV joined #gluster
12:53 harish joined #gluster
12:56 shaunm joined #gluster
12:56 nbalacha joined #gluster
12:59 lpabon joined #gluster
13:01 Slashman_ joined #gluster
13:01 firemanxbr joined #gluster
13:02 firemanxbr1 joined #gluster
13:07 DV joined #gluster
13:07 shaunm joined #gluster
13:08 aulait joined #gluster
13:11 o5k_ joined #gluster
13:15 meghanam joined #gluster
13:17 bernux joined #gluster
13:18 firemanxbr joined #gluster
13:21 meghanam joined #gluster
13:21 bernux Hi, JoeJulian. Were you able to run your test (millions of very small files) this weekend?
13:23 anrao joined #gluster
13:23 Slashman joined #gluster
13:25 Norky hey bernux
13:26 Norky I don't think Joe normally gets on until a bit later
13:27 bernux oh ! ok, thanks Norky
13:35 anoopcs joined #gluster
13:37 ricky-ti1 joined #gluster
13:39 DV joined #gluster
13:47 wkf joined #gluster
13:47 glusterbot News from resolvedglusterbugs: [Bug 1193225] Architecture link broken <https://bugzilla.redhat.com/show_bug.cgi?id=1193225>
13:50 DV joined #gluster
13:53 B21956 joined #gluster
13:53 getup joined #gluster
14:00 fuknugget joined #gluster
14:00 fuknugget left #gluster
14:01 rjoseph joined #gluster
14:03 rwheeler joined #gluster
14:12 diegows joined #gluster
14:12 prasanth_ joined #gluster
14:13 virusuy joined #gluster
14:13 DV_ joined #gluster
14:14 B21956 joined #gluster
14:21 plarsen joined #gluster
14:24 bala joined #gluster
14:25 asku joined #gluster
14:32 Gill joined #gluster
14:34 dgandhi joined #gluster
14:35 dgandhi joined #gluster
14:45 snewpy joined #gluster
14:47 glusterbot News from newglusterbugs: [Bug 1192114] edge-triggered epoll breaks rpc-throttling <https://bugzilla.redhat.com/show_bug.cgi?id=1192114>
14:48 snewpy hi, I have some gluster volumes using rdma that work fine when mounted via fuse, but I cannot access them using libgfapi, with errors like "failed to get the port number for remote subvolume."
14:49 awerner joined #gluster
14:50 snewpy here is the output: http://pastebin.ca/2940070  gluster volume status shows that all the bricks are online
14:52 aravindavk joined #gluster
14:55 nbalacha joined #gluster
14:57 deepakcs joined #gluster
15:01 bennyturns joined #gluster
15:03 soumya joined #gluster
15:08 MrAbaddon joined #gluster
15:12 wushudoin joined #gluster
15:16 lmickh_ joined #gluster
15:22 ildefonso joined #gluster
15:26 coredump joined #gluster
15:33 T3 joined #gluster
15:35 DV joined #gluster
15:36 nishanth joined #gluster
15:37 wkf joined #gluster
15:38 wkf joined #gluster
15:41 wkf joined #gluster
15:45 ndevos snewpy: what version of glusterfs is that? 3.6.x should have some fixes for rdma
15:45 snewpy ndevos: i'm rebuilding qemu packages with 3.6 now so I can upgrade and try it out... it was 3.5
15:46 DV joined #gluster
15:47 ndevos snewpy: ah, yes, 3.5 does have some rdma patches in the review queue, they should have been applied in 3.6 already
15:47 snewpy ndevos: ok, i'll give it a try and report... other than libgfapi access it seems solid in 3.5
15:49 ndevos snewpy: cool, 3.5 should be pretty stable indeed, and hopefully I get some people to review those rdma changes - they're in the queue for a while already :-/
15:51 lalatenduM joined #gluster
15:52 hagarth joined #gluster
15:53 rwheeler joined #gluster
15:57 snewpy ndevos: cool, thanks for the info, I'll test out 3.6 now and let you know
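As a quick way to compare the two access paths once the 3.6 packages are in place, qemu-img (built with gluster support) can hit the same image over fuse and over libgfapi+rdma using qemu's gluster URI syntax; host, volume, and image names below are hypothetical:

    # via the fuse mount, which is reported to work
    qemu-img info /mnt/vmvol/test.qcow2
    # via libgfapi over rdma
    qemu-img info gluster+rdma://storage1:24007/vmvol/test.qcow2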
15:57 wkf joined #gluster
15:57 T0aD joined #gluster
15:58 shubhendu joined #gluster
16:09 bfoster joined #gluster
16:09 DV_ joined #gluster
16:13 tetreis joined #gluster
16:20 deniszh1 joined #gluster
16:25 kshlm joined #gluster
16:36 coredump joined #gluster
16:37 hagarth joined #gluster
16:44 gem joined #gluster
16:48 tanuck joined #gluster
16:50 wkf joined #gluster
16:50 wkf joined #gluster
16:51 deepakcs joined #gluster
16:51 jbrooks joined #gluster
16:53 _Bryan_ joined #gluster
17:00 kshlm joined #gluster
17:02 ron-slc_ joined #gluster
17:03 jiffin joined #gluster
17:04 _dist joined #gluster
17:09 snewpy ndevos: still no go, but the error message is much more concise now http://pastebin.com/sRihbaT1
17:09 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
17:10 snewpy http://fpaste.org/189328/14247114/
17:21 PeterA joined #gluster
17:25 MacWinner joined #gluster
17:30 nmbr joined #gluster
17:37 codex joined #gluster
17:57 hchiramm_ joined #gluster
17:58 hchiramm__ joined #gluster
18:00 georgeh-LT2 joined #gluster
18:00 jiffin joined #gluster
18:08 Rapture joined #gluster
18:10 elico joined #gluster
18:14 PeterA i copied a www/html mount with images from an NFS appliance to gluster
18:14 PeterA and whichever client mounts the gluster volume over NFS got about 50% more busy on system load
18:14 PeterA clients are all httpd
18:14 PeterA any clue why gluster nfs makes the clients so much busier?
18:16 PeterA seems like the df is slower on sl2
18:25 lalatenduM joined #gluster
18:29 MacWinner joined #gluster
18:39 ira joined #gluster
18:44 bennyturns joined #gluster
18:51 dbruhn joined #gluster
18:54 chirino joined #gluster
19:11 DV_ joined #gluster
19:19 sputnik13 joined #gluster
19:20 MacWinner joined #gluster
19:20 MacWinner joined #gluster
19:22 MacWinner joined #gluster
19:23 ndevos snewpy: could that be the same issue as http://www.gluster.org/pipermail/gluster-users/2015-February/020781.html ?
19:24 kkeithley1 joined #gluster
19:30 ckotil I've got two apache instances writing to the same access log file on a replicated 2 brick gluster. I can't read the file, and I see a lot of failed auto heal attempts in the gluster debug logs. Is there a way around this, or should I just move the log file out of the replicated glusterFS?
19:31 obnox hi all. on the documentation on the gluster website, there are broken links (encountered at least one).
19:31 obnox where can I report that / submit patches?
19:32 plarsen joined #gluster
19:32 ndevos hi obnox!
19:32 ndevos please use https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS&component=website for that
19:32 ndevos or, send an email to gluster-infra@gluster.org
19:34 ndevos ckotil: two processes that write to the log at the same time? I guess that can lead to corruption, most applications do not use locks for writing log files
19:34 ckotil that's what it seems like
19:34 obnox ndevos: Hi Niels
19:35 ckotil I'd be super impressed if it just worked :)
19:35 ndevos ckotil: you probably should have the write to their own logs :)
19:35 ndevos s/the/them/
19:35 ckotil ya, that's what Im gonna do
19:35 ckotil thanks
19:35 glusterbot What ndevos meant to say was: An error has occurred and has been logged. Please contact this bot's administrator for more information.
19:35 * ndevos *cough*
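A minimal sketch of "their own logs" for the two apache instances: point each one at its own file under the gluster mount so the two writers never share a file (paths and names are hypothetical):

    # httpd config on node 1
    CustomLog "/mnt/gluster/logs/access-web1.log" combined
    # httpd config on node 2
    CustomLog "/mnt/gluster/logs/access-web2.log" combined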
19:36 obnox ndevos: I have the impression that the docs on the website are being transitioned from the community docs site to the common layout
19:36 obnox is there a git repo for the docs?
19:36 obnox (docs/website)
19:37 ndevos obnox: I dont really know how they do it... the website is a mystery for me
19:37 obnox ok :)
19:38 ndevos obnox: but, there are some docs in the main glusterfs repo, https://github.com/gluster/glusterfs/tree/master/doc/
19:39 ndevos obnox: if you mean those docs, then you need to follow the http://www.gluster.org/community/documentation/index.php/Simplified_dev_workflow
19:40 obnox ndevos: thanks! bbl
19:40 misc I can explain for the website
19:40 * ndevos did not have dinner yet, and will be gone for the day
19:41 ndevos misc: ah, point obnox to the git repo please :)
19:41 ndevos and, maybe have it docomented in the wiki?
19:41 ndevos if its not there...
19:41 misc ndevos: I think it is not on the wiki (yet)
19:42 misc but that's like the salt installation, in the queue :(
19:42 ndevos misc: yeah, and that makes it difficult to get contributions :-/
19:42 ndevos misc: did you talk to firemanxbr about Gerrit yet?
19:43 misc ndevos: nope, but I have no access, that's justin who is the boss on this part
19:43 misc obnox: so the website is on git://forge.gluster.org/gluster-site/gluster-site.git
19:43 misc obnox: any commit is autodeployed on the website, unless there is a bug ( in which case, ping me )
19:43 ndevos misc: ah, right, JustinClift gave him permissions - but you should tell him what you need for ansible/salt/whatever
19:44 misc ndevos: does "time" count as something I can ask :) ?
19:44 ndevos misc: he's giving you guys that already by looking into the Gerrir update :)
19:44 ndevos Gerrit even
19:45 misc ndevos: well, he is giving that to justin :p
19:45 ndevos misc: then you get Justin to document it for you?
19:46 * ndevos really drops off now, cya!
19:47 uebera|| joined #gluster
19:47 misc ndevos: I will try to document stuff tonight
19:48 glusterbot News from newglusterbugs: [Bug 1195415] glusterfsd core dumps when cleanup and socket disconnect routines race <https://bugzilla.redhat.com/show_bug.cgi?id=1195415>
19:49 ndevos misc++ thanks!
19:49 glusterbot ndevos: misc's karma is now 1
19:53 firemanxbr ndevos, one second, I'm meeting :(
19:57 misc so first quick doc : https://www.gluster.org/community/documentation/index.php/Website_update
20:08 JoeJulian @learn website as To contribute or correct gluster.org, please see https://www.gluster.org/community/documentation/index.php/Website_update
20:08 glusterbot JoeJulian: The operation succeeded.
20:18 misc damn, now I have the pressure :(
20:37 JoeJulian bernux: you asked if it would seem reasonable to me to put your tiles in a distribute volume as opposed to a replicated one. That depends on your SLA/OLA requirements and your other administrative needs.
20:38 JoeJulian For any end user facing infrastructure, I would tend toward fault tolerance so I could do maintenance without losing operation.
20:39 JoeJulian For your data set, it might be worthwhile to have a wider distribute volume, still with a replica 2.
20:42 bernux Actually I'm on replica 2; I'm going to try to summarize my needs
20:43 obnox ndevos: misc: thanks for the hints
20:45 bernux I want to be able to put HA on a file server with 2 share
20:46 bernux 1 share with totally dynamic files which need to be in common with webserver
20:48 bernux the other share is 90 to 95% static files that are generated by us in another environment, and the rest are generated on the fly by navigation on our website
20:48 bernux these dynamic files can be lost or not present on the 2 file servers
20:49 bernux because if they are not here they will be generated on the server where they are not
20:50 bernux present
20:51 bernux don't know if it's clear
20:51 JoeJulian bernux: The problem I foresee is that even with your dynamic files, if a brick is missing and a filename hash should be created on that brick, it will fail.
20:52 JoeJulian You would have to remove-brick to do maintenance, then add-brick when you're done.
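The maintenance sequence being described, sketched with hypothetical volume and brick names; on a distribute-only volume, "start" migrates files off the brick before the commit:

    gluster volume remove-brick tiles server2:/data/brick1 start
    gluster volume remove-brick tiles server2:/data/brick1 status    # wait until complete
    gluster volume remove-brick tiles server2:/data/brick1 commit
    # ... perform the maintenance on server2 ...
    # note: re-adding a previously used brick directory may require clearing
    # its old volume xattrs first
    gluster volume add-brick tiles server2:/data/brick1
    gluster volume rebalance tiles start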
20:52 obnox misc: where to send patches for the gluster-docs-project repo?
20:52 doekia joined #gluster
20:53 lalatenduM joined #gluster
20:53 misc obnox: I have no idea of the workflow on gitorious :/
20:54 JoeJulian Same as github. Clone https://forge.gluster.org/gluster-docs-project , commit and issue a merge request.
20:54 obnox misc: ok, i guess i'll simply register on the forge and create a merge request
20:54 misc as JoeJulian say :)
20:54 obnox JoeJulian: oh, you already said that. :)
20:55 obnox thanks guys
20:55 JoeJulian hehe
20:56 JoeJulian I really love having 4 or 5 different places to contribute documentation.
20:56 misc tigert did some proposal on cleaning it
20:58 JoeJulian Yep, 4th or 5th documentation language change, too. Seems like every attempt at improving documentation goes through a 20% - 80% rewrite before being abandoned.
21:00 misc I do not know about the previous ones, but if you could elaborate, that would be interesting
21:00 JoeJulian Not sure if there's a good solution to that, though. If someone wants to spend their time trying to make things better, I'm certainly not going to try talking them out of it.
21:01 misc well, we can also listen to you as the wise old man who seen the previous attempts :)
21:01 lalatenduM misc, JoeJulian I kind of missed the project https://forge.gluster.org/gluster-docs-project/, what is about?
21:01 JoeJulian Sysadmins are busy. If you want them to contribute to a documentation change, it really shouldn't take them more than 5 minutes to make that contribution.
21:02 JoeJulian lalatenduM: I'm not even sure.
21:02 bernux JoeJulian: I must have missed something I don't understand why a filename hash  would be created on a missing brick
21:02 misc lalatenduM: IIRC, that's the documentation that was supposed to be canonical
21:03 misc but not knowing is also part of the problem :/
21:03 lalatenduM misc, JoeJulian interesting :)
21:04 lalatenduM btw we are getting doc pull requests in github , but we dont follow the workflow
21:04 JoeJulian bernux: When you create a file, the name of the file is hashed. That hash is compared to a dht map for the directory in which it's being placed. If there's a gap in that hash map, like when a distribute-only brick is down for maintenance or due to failure, the file cannot be created because it /might/ already exist on that missing brick, so creation will fail.
21:06 lalatenduM here is my attempt to inform that we dont use github pull requests http://review.gluster.org/#/c/9727/
21:07 misc so speaking of sysadmin, i was wondering how people would feel about having ldap for the auth on servers
21:08 JoeJulian Yeah, it sucks. We want the devs to document their own enhancements, but they suck at writing documentation. We need users to be able to update it but they don't have the time to go through the review process.
21:08 obnox lalatenduM: hi
21:08 JoeJulian misc: I would prefer some sort of plugin auth.
21:08 lalatenduM obnox, hey
21:08 obnox lalatenduM: (this is michael from samba, we were talking over dinner in brno)
21:09 lalatenduM obnox, ohh cool, how r u doing?
21:09 JoeJulian ldap's a huge learning curve for a young admin.
21:09 obnox doing fine
21:10 misc JoeJulian: of plugin auth ?
21:10 obnox lalatenduM: almost have my vagrant setup ready to create a complete gluster/ctdb/samba cluster in libvirt/kvm with a single command
21:10 misc ( i am speaking of the server of the infrastructure, to be clear )
21:11 lalatenduM obnox, awesome :), are you planning to blog about it
21:11 lalatenduM ?
21:11 obnox absolutely
21:11 lalatenduM cool
21:11 obnox blog.obnox.de
21:11 JoeJulian Oh, you're talking infra, not dev. Got it. Yeah, ldap would make administration easier.
21:11 obnox lalatenduM: there are a few prerequisit things about getting vagrant into fedora in the first place
21:11 lalatenduM btw we need documentation around CTDB + glusterfs+ samba for community
21:12 obnox yeah
21:12 lalatenduM obnox, are you using vagrant+ puppet-gluster?
21:12 obnox lalatenduM: no. did everything myself. because I also want/need to understand the stuff myself
21:13 lalatenduM obnox, yup, understand
21:13 obnox lalatenduM: using shell scripts for a start because all that puppet / ansible / chef whatnot takes some time to grasp also
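For anyone wanting to follow along before that blog post lands, the basic vagrant-libvirt workflow being described is roughly the following (box and machine names are hypothetical; obnox's actual scripts are not shown here):

    vagrant plugin install vagrant-libvirt
    vagrant box add my-centos-libvirt /path/to/box.box   # hypothetical box
    vagrant up --provider=libvirt                        # builds the node(s) defined in the Vagrantfile
    vagrant ssh node1                                    # hypothetical machine name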
21:13 lalatenduM obnox, right, looking forward to read your blog , it can be syndicated through gluster blogs
21:14 obnox ok, cool
21:14 JoeJulian I don't really like vagrant. The "box" format is too virt-backend dependent.
21:14 lalatenduM JoeJulian, it is mostly for developer oriented I think
21:15 JoeJulian Oh, I use it every day... I know.
21:15 obnox JoeJulian: what is better in your opinion?
21:15 lalatenduM JoeJulian, :)
21:15 bernux JoeJulian: so there is no solution, except trying to find out whether my very slow performance writing very small files in replica 2 mode is a tuning/hardware problem
21:16 JoeJulian But it would be nice, as a member of a team that uses it for development daily, if there were a standard that could be applied whether you're using that crappy virtualbox or libvirt or openstack.
21:16 obnox JoeJulian: sure, the box format is provider-dependent but well, it is better than anything else I have seen
21:17 obnox for the purpose of quickly and reproducibly creating vms / containers. and clusters / groups of them
21:17 JoeJulian Like, instead of a specific image type, it was just a cpio that could be applied to whatever new blank.
21:18 lalatenduM obnox, btw I have written two blogs about the samba vfs plugin and glusterfs, and they get almost 15 hits daily on average (over the last year or so). I think a write-up on ctdb would be nice
21:18 kminooie joined #gluster
21:18 DV__ joined #gluster
21:19 JoeJulian But yeah, it is what it is and so I have to reboot fedora every once-in-a-while when the virtualbox kernel module pukes. C'est la vie.
21:20 JoeJulian lalatenduM: I referenced that page a lot.
21:20 lalatenduM JoeJulian, nice :)
21:21 deniszh joined #gluster
21:21 lalatenduM JoeJulian, do u also use ctdb?
21:21 JoeJulian I'm changing the donate button on my blog as soon as I get the info from bradner labs. Instead of buying me coffee, I'm going to make it a "Thank me by donating to open source cancer research" button.
21:22 JoeJulian And no, I haven't used ctdb.
21:22 lalatenduM haha
21:22 lalatenduM JoeJulian, I think you have moved to debian, right?
21:22 JoeJulian centos->debian->arch
21:23 JoeJulian But I'm not doing gluster at work anymore.
21:23 JoeJulian We're doing ceph now.
21:23 lalatenduM JoeJulian, ahh
21:23 obnox JoeJulian: maybe use libvirt+kvm ? :)
21:24 JoeJulian We're not in production with the ceph configuration, yet. I'm sure that'll present a whole new set of bugs.
21:24 lalatenduM JoeJulian,  I can't control my self without asking , what do you think ? till now your experience with gluster vs ceph
21:24 deniszh joined #gluster
21:24 JoeJulian obnox: I can't. Most of the rest of the team uses osx for their dev work.
21:24 obnox JoeJulian: urgh :-o
21:25 JoeJulian Ceph is a lot more complicated to deploy, obviously.
21:25 T3 joined #gluster
21:25 lalatenduM yup , I agree
21:25 lalatenduM I tried one
21:25 JoeJulian It's slower and has some potential bottlenecks.
21:25 obnox lalatenduM: where is yout blog?
21:25 obnox yout
21:25 obnox your
21:25 obnox :)
21:26 lalatenduM https://lalatendumohanty.wordpress.com/2014/04/20/glusterfs-vfs-plugin-for-samba/
21:26 lalatenduM https://lalatendumohanty.wordpress.com/2014/02/11/using-glusterfs-with-samba-and-samba-vfs-plugin-for-glusterfs-on-fedora-20/
21:26 kminooie JoeJulian: so why are you guys moving to ceph?
21:26 JoeJulian Since it's being used for VM images, I don't care quite as much that it's not whole files, a major requirement when I found my way here in the first place.
21:26 lalatenduM @sambavfsplugin
21:26 obnox lalatenduM: btw, I recently updated samba's vfs_glusterfs manpage.
21:27 JoeJulian But the one thing I do like is that I don't have to lose redundancy. If a storage device/node fails, everything gets re-replicated elsewhere automatically.
21:27 JoeJulian kminooie: Mostly because rebalance doesn't work.
21:28 lalatenduM obnox, cool , what are the changes?
21:29 obnox lalatenduM: fix some outdated stuff and explain some more about path and not needing a fuse mount due to using libgfapi etc
21:29 JoeJulian With my previous employer, we never needed rebalance. We had a pretty small installation and when we needed more space it was easier to just replace disks.
21:29 obnox especially use of path is important since it is non-standard
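The non-standard "path" semantics referred to here: with vfs_glusterfs, samba talks to the volume over libgfapi rather than a fuse mount, so "path" is interpreted relative to the root of the gluster volume. A minimal share sketch, with hypothetical share/volume/log names:

    [gvol]
        path = /
        read only = no
        vfs objects = glusterfs
        glusterfs:volume = gvol
        glusterfs:volfile_server = localhost
        glusterfs:logfile = /var/log/samba/glusterfs-gvol.log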
21:29 lalatenduM @sambavfs
21:29 glusterbot lalatenduM: http://lalatendumohanty.wordpress.com/2014/02/11/using-glusterfs-with-samba-and-samba-vfs-plugin-for-glusterfs-on-fedora-20/
21:30 JoeJulian With this newer gig, when we're already at 24PB and want to add another 12, rebalance kind-of has to work.
21:30 lalatenduM obnox, right
21:31 obnox lalatenduM: i don't have a built version of the manpage ready on the web atm. but it is in samba master
21:31 lalatenduM obnox, ok cool
21:33 JoeJulian I'm still hoping for an opportunity to move back to gluster, but there's a couple of features that need to be production ready, memory management needs to have an overhaul, and rebalance needs fixed.
21:34 lalatenduM JoeJulian, I agree
21:34 * lalatenduM gotta go. Need to sleep now :)
21:34 lalatenduM cys
21:34 lalatenduM cya*
21:35 JoeJulian I think NSR fixes my concern with replication, but it needs another 6 months of use before I'll be comfortable recommending it for our production use.
21:35 JoeJulian G'night lalatenduM
21:35 obnox lalatenduM: cu
21:35 firemanxbr joined #gluster
21:36 firemanxbr joined #gluster
21:39 kminooie does anyone know what the website behind forge.gluster.org is? is it gitlab?
21:45 firemanxbr joined #gluster
21:47 JoeJulian kminooie: Gitorious
21:47 firemanxbr ndevos, I'm back :D
21:47 kminooie thank you :)
21:47 firemanxbr sorry my delay, I was in meeting :P
21:58 badone_ joined #gluster
22:02 bernux joined #gluster
22:11 dbruhn left #gluster
22:35 _Bryan_ joined #gluster
22:37 partner i agree with Joe, most of my issues with gluster have been the rebalance, memory issues from it and the fact i have very little control over the bricks in situations where i'd like to do any operations on them (without killing them and losing redundancy)
22:38 partner 33 days was the measured time for fix-layout, some 250 days was estimated for the rebalance
22:39 partner though rebalance could not be performed due to memory leaks.
22:40 partner nevertheless its been perfect when the issues (for that particular version/series) are known
22:41 partner i have had very little issues since the early days and been on production for 2+ years, couple of client side problems but considering the amount of data being moved back and forth constantly that does not count much
22:42 tlynchpin joined #gluster
22:43 partner i hate to leave it but as earlier said i've been moved to work on different things on a different team that does not involve gluster. but we do have ceph :)
22:51 gildub joined #gluster
22:51 B21956 left #gluster
22:52 semiosis 3.6.2 failed to build on ubuntu precise.  buildlog here https://launchpadlibrarian.net/198231851/buildlog_ubuntu-precise-amd64.glusterfs_3.6.2-ubuntu1~precise2_FAILEDTOBUILD.txt.gz
22:52 semiosis any ideas?
22:52 semiosis should I file a bug about this?
22:52 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
23:00 JoeJulian semiosis: That comes from bug 1166515
23:00 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1166515 medium, medium, 3.6.2, bugs, CLOSED CURRENTRELEASE, [Tracker] RDMA support in glusterfs
23:00 JoeJulian fwiw
23:01 JoeJulian commit 707ef16edcf4b14f46bb515b3464fa4368ce9b7c
23:01 semiosis JoeJulian: 3.6.2 builds ok on trusty, and 3.6.1 builds ok on precise, but not 3.6.2 on precise.  does that make sense to you?
23:02 semiosis i know nothing about this issue
23:02 JoeJulian Looks like it must have to do with the rdma include version
23:03 semiosis interesting
23:03 semiosis thx
23:04 JoeJulian /usr/include/rdma/rdma_user_cm.h
23:04 JoeJulian which is a kernel header
23:06 msmith joined #gluster
23:06 Pupeno_ joined #gluster
23:12 JoeJulian Added in kernel commit a9bb79128aa659f97b774b97c9bb1bdc74444595
23:12 JoeJulian First kernel tag that contains that commit is v3.0
23:14 JoeJulian Which brings us to... I have no f'ing clue why that's failing to compile.
23:19 lnr joined #gluster
23:22 semiosis lol, ok thanks
23:33 kminooie ok I am having 2 issues that may or may not be related to each other. the first one is that I am getting a lot of warnings on a failed socket   http://ur1.ca/jsccx
23:34 kminooie I found this on the mailing list but it is not saying much http://www.gluster.org/pipermail/gluster-users/2014-December/019727.html
23:34 kminooie I am on 3.6.2 btw
23:35 kminooie the 2nd one is that heal fails  http://ur1.ca/jscdl    I checked the underlying bricks and they seem to be ok
23:35 JoeJulian Do you have nfs disabled?
23:36 kminooie yeah about that, I saw that on the mailing list too, what do you mean?
23:36 kminooie I am mounting these volume via nfs
23:36 kkeithley_ JoeJulian: not building on which?
23:36 lnr left #gluster
23:36 JoeJulian ok, not that then.
23:37 kkeithley_ wheezy?
23:37 JoeJulian kkeithley_: [14:52] <semiosis> 3.6.2 failed to build on ubuntu precise.  buildlog here https://launchpadlibrarian.net/198231851/buildlog_ubuntu-precise-amd64.glusterfs_3.6.2-ubuntu1~precise2_FAILEDTOBUILD.txt.gz
23:38 kkeithley_ oh, sorry, was looking at the wrong thing
23:38 kkeithley_ debian vms
23:39 kminooie as i was saying ( for the heal fails one ) the underlying bricks seem to be ok ( total size match on both peer ) and I don't have any problem mounting and using the volume. I also found this thread  http://www.gluster.org/pipermail/gluster-users/2015-February/020558.html
23:40 brad[] joined #gluster
23:40 kminooie I did upgrade from 3.2 to 3.6.2 which according to ^^^ thread breaks something, but they don't say what and how to fix it
23:40 brad[] jbrooks: Hey, the pictures are broken on your "Up and Running with oVirt 3.5" blog post
23:40 brad[] (Which is otherwise an excellent post!)
23:45 tlynchpin how to change bricks from simple hostname to fqdn?
23:45 kripper joined #gluster
23:47 tlynchpin i have several replica 3 vols on 3 servers except one vol is using bricks host1:/path instead of the other vols using host1.fq.dn:/path
23:50 JoeJulian tlynchpin: The simplest way is to stop all your volumes, shut down glusterd. pkill -f glustershd; pkill -f glusterfs/nfs.log . Then just use sed -i to replace the hostname under /var/lib/glusterd on all your servers.
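A rough sketch of that procedure, using the host names from the question; the volume name is hypothetical, and backing up /var/lib/glusterd first is a sensible precaution:

    # cluster-wide, from any one server:
    gluster volume stop thevol                               # repeat for each affected volume
    # then on every server:
    service glusterd stop
    pkill -f glustershd
    pkill -f glusterfs/nfs.log
    cp -a /var/lib/glusterd /var/lib/glusterd.bak
    grep -rl 'host1:' /var/lib/glusterd | xargs sed -i 's/host1:/host1.fq.dn:/g'
    grep -r 'host1' /var/lib/glusterd | grep -v 'fq\.dn'     # check for leftovers
    service glusterd start
    # finally, from any one server:
    gluster volume start thevol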
23:51 kkeithley_ JoeJulian: I think RDMA_OPTION_ID_REUSEADDR should come from <rdma/rdma_cma.h> (not <rdma/rdma_user_cm.h>), which comes from librdmacm-dev.
23:52 kripper Hi, I got a split brain on a replica-2 cluster where only one host is writing. Is this normal?
23:54 tlynchpin JoeJulian: thanks it looks that easy.
23:55 JoeJulian kripper: split-brain is not "normal" unless something is wrong (typically network).
23:56 JoeJulian Not sure what "only one host is writing" means, though.
23:56 JoeJulian @glossary
23:56 glusterbot JoeJulian: A "server" hosts "bricks" (ie. server1:/foo) which belong to a "volume"  which is accessed from a "client"  . The "master" geosynchronizes a "volume" to a "slave" (ie. remote1:/data/foo).
23:58 kripper One VM went down producing the split-brain, but the storage is only being written from one host (the host running the VM).
23:58 kkeithley_ rdma_cma.h comes from librdmacm-dev (rdma_user_cm.h comes from linux-libc-dev), but if you didn't have librdmacm-dev I'd think it'd be complaining about not finding <rdma/rdma_cma.h> instead of RDMA_OPTION_ID_REUSEADDR not being defined
23:59 kripper Is it normal to have a split-brain even when the storage is written only on one host?
