
IRC log for #gluster, 2014-06-11


All times shown according to UTC.

Time Nick Message
00:18 qdk_ joined #gluster
00:19 atrius joined #gluster
00:29 jason__ joined #gluster
00:29 RicardoSSP joined #gluster
00:40 sjm joined #gluster
01:01 k3rmat joined #gluster
01:13 sputnik13 joined #gluster
01:13 k3rmat joined #gluster
01:21 Paul-C joined #gluster
01:22 harish joined #gluster
01:29 sjm joined #gluster
01:37 andrewklau joined #gluster
01:54 mjsmith2 joined #gluster
01:58 jruggiero joined #gluster
01:59 jrcresawn_ joined #gluster
01:59 jruggiero left #gluster
02:16 bchilds joined #gluster
02:19 jag3773 joined #gluster
02:27 rastar joined #gluster
02:29 bala joined #gluster
02:34 kkeithley joined #gluster
02:45 Matthaeus joined #gluster
02:47 hagarth joined #gluster
02:53 bharata-rao joined #gluster
03:06 MacWinner joined #gluster
03:08 hchiramm__ joined #gluster
03:09 Ramereth joined #gluster
03:11 spandit joined #gluster
03:28 haomaiwa_ joined #gluster
03:30 haomaiw__ joined #gluster
03:33 haomaiwa_ joined #gluster
03:33 gildub joined #gluster
03:37 itisravi joined #gluster
03:44 hchiramm__ joined #gluster
03:52 RameshN joined #gluster
03:56 Paul-C joined #gluster
04:00 XpineX_ joined #gluster
04:06 kanagaraj joined #gluster
04:07 chirino_m joined #gluster
04:10 ndarshan joined #gluster
04:11 kshlm joined #gluster
04:30 nbalachandran joined #gluster
04:30 dusmant joined #gluster
04:33 zero_ark joined #gluster
04:34 haomaiwa_ joined #gluster
04:37 rastar joined #gluster
04:45 rjoseph joined #gluster
04:46 hchiramm_ joined #gluster
05:03 kdhananjay joined #gluster
05:05 aravindavk joined #gluster
05:08 prasanthp joined #gluster
05:18 shubhendu joined #gluster
05:20 RameshN joined #gluster
05:21 kumar joined #gluster
05:22 lalatenduM joined #gluster
05:22 davinder8 joined #gluster
05:28 psharma joined #gluster
05:39 hagarth joined #gluster
05:40 shubhendu joined #gluster
05:42 nshaikh joined #gluster
05:44 rastar joined #gluster
05:44 raghu joined #gluster
05:49 aravindavk joined #gluster
05:53 rjoseph joined #gluster
05:54 ppai joined #gluster
06:01 dusmant joined #gluster
06:01 Ramereth joined #gluster
06:01 Ramereth joined #gluster
06:04 kshlm joined #gluster
06:09 ekuric joined #gluster
06:14 n0de joined #gluster
06:15 bala1 joined #gluster
06:27 meghanam joined #gluster
06:27 meghanam_ joined #gluster
06:32 shubhendu joined #gluster
06:40 ricky-ti1 joined #gluster
06:41 aravindavk joined #gluster
06:43 rjoseph joined #gluster
06:45 dusmant joined #gluster
06:57 ProT-0-TypE joined #gluster
06:59 ctria joined #gluster
07:01 rgustafs joined #gluster
07:02 fsimonce joined #gluster
07:08 vimal joined #gluster
07:08 eseyman joined #gluster
07:27 Matthaeus joined #gluster
07:30 Thilam joined #gluster
07:35 liquidat joined #gluster
07:37 VerboEse joined #gluster
07:44 VerboEse joined #gluster
07:44 XpineX__ joined #gluster
07:44 VerboEse joined #gluster
07:50 ktosiek joined #gluster
07:51 shubhendu joined #gluster
07:51 haomaiwa_ joined #gluster
07:53 rastar joined #gluster
07:56 nishanth joined #gluster
07:58 arcimboldo joined #gluster
08:04 Norky joined #gluster
08:29 hagarth joined #gluster
08:36 marbu joined #gluster
08:47 RameshN joined #gluster
08:48 capri is there any gluster volume limit available or could i create as many as i want to? for example maybe 50-100
08:54 arcimboldo I'm trying to configure gluster 3.5.0 to provide Cinder volumes on an Icehouse installation, but "attaching" a volume hangs forever.
08:54 arcimboldo Volume stays in "attaching"
08:55 arcimboldo anyone successfully configured gluster for cinder/Icehouse?
08:56 glusterbot New news from newglusterbugs: [Bug 1107984] Dist-geo-rep : Glusterd crashed while resetting use-tarssh config option in geo-rep. <https://bugzilla.redhat.com/show_bug.cgi?id=1107984>
08:57 nishanth joined #gluster
09:09 d3vz3r0 joined #gluster
09:10 ndevos capri: there is no real limit that you will hit soon, management of many volumes (by glusterd) would likely be the first issue that you hit
09:10 ndevos arcimboldo: have you configured the volume as explained in http://www.gluster.org/community/documentation/index.php/Libgfapi_with_qemu_libvirt ?
09:10 glusterbot Title: Libgfapi with qemu libvirt - GlusterDocumentation (at www.gluster.org)
09:17 Slashman joined #gluster
09:19 arcimboldo ndevos, I may have used the wrong uid/gid and put the one for cinder.
09:19 arcimboldo Also, is it still the case that I need to restart glusterd after changing the uid/gid? I'm using gluster from the ppa:semiosis repository
09:20 ndevos arcimboldo: I just added the note that the UID could be different, but I think you need the UID/GID from the process that runs the VM
09:20 arcimboldo I'll try now, thanks for now :)
09:21 ndevos arcimboldo: restarting glusterd should not be needed, but you should stop/start the volume when you set the server.allow-insecure option
09:21 ndevos and note that "stop/start the volume" is not the same as restarting the server or the glusterfsd processes ;)
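[note: the settings ndevos describes can be applied with the gluster CLI roughly as below; this is only a sketch, and the volume name "cinder-vol" and UID/GID 165 are placeholders for whatever user/group runs the VM or cinder process]
    gluster volume set cinder-vol storage.owner-uid 165
    gluster volume set cinder-vol storage.owner-gid 165
    gluster volume set cinder-vol server.allow-insecure on
    # stop/start the *volume* (not glusterd) so allow-insecure takes effect
    gluster volume stop cinder-vol
    gluster volume start cinder-vol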
09:23 kanagaraj joined #gluster
09:23 arcimboldo ooops
09:23 dusmant joined #gluster
09:24 arcimboldo can I stop the volume while it's mounted from the cinder server?
09:25 RameshN joined #gluster
09:28 arcimboldo still no luck, I will try to delete instance and volume and restart cinder-volume.
09:28 ndarshan joined #gluster
09:29 kanagaraj joined #gluster
09:31 nbalachandran joined #gluster
09:32 kanagaraj_ joined #gluster
09:42 ninkotech joined #gluster
09:42 ninkotech_ joined #gluster
09:44 harish joined #gluster
09:44 ppai joined #gluster
09:46 RameshN joined #gluster
09:49 LebedevRI joined #gluster
09:55 lalatenduM joined #gluster
09:56 dusmant joined #gluster
10:06 bharata_ joined #gluster
10:18 nishanth joined #gluster
10:19 edward2 joined #gluster
10:23 harish_ joined #gluster
10:25 hagarth joined #gluster
10:25 dusmant joined #gluster
10:34 RameshN joined #gluster
10:38 avati joined #gluster
10:39 shyam joined #gluster
10:40 a2_ joined #gluster
10:41 deepakcs joined #gluster
10:43 ppai joined #gluster
10:50 nbalachandran joined #gluster
10:53 partner uuh, i took replication off from a dist-repl volume and now my .glusterfs is the size of that brick..
10:53 partner also seeing duplicate files here and there
10:55 partner 3.4.2 version on debian wheezy.. i test-ran this on a smaller env and it went perfectly. on prod this did not cause any errors either, but the end result is a bit unwanted
10:57 glusterbot New news from newglusterbugs: [Bug 922801] Gluster not resolving hosts with IPv6 only lookups <https://bugzilla.redhat.com/show_bug.cgi?id=922801>
11:03 morse joined #gluster
11:05 bene2 joined #gluster
11:06 partner while everything seems to be working still, what would be the means to remove the duplicate entries, preferably online..?
11:07 partner several posts about the issue on the mailing list but most without resolution, some with deleting the whole volume, and hopefully something i've yet to find
11:10 partner if i go and issue "rm somefile" i see the duplicate going away. if i issue same command again i see the remaining file going away
11:13 partner i wonder where the files are, on every one of the four servers the .glusterfs directory is just about the size of the brick
11:14 dusmant joined #gluster
11:14 lalatenduM joined #gluster
11:15 capri joined #gluster
11:15 edward2 joined #gluster
11:18 prasanthp joined #gluster
11:19 ppai joined #gluster
11:21 stickyboy joined #gluster
11:21 shubhendu joined #gluster
11:23 kanagaraj_ joined #gluster
11:25 Norky joined #gluster
11:25 nishanth joined #gluster
11:27 juhaj_ joined #gluster
11:27 partner just testing around, editing a duplicate text file and saving the change makes the file and changes disappear
11:32 twx joined #gluster
11:32 edong23 joined #gluster
11:32 n0de__ joined #gluster
11:33 prasanthp joined #gluster
11:35 mkzero joined #gluster
11:35 siel joined #gluster
11:37 fsimonce joined #gluster
11:40 twx joined #gluster
11:41 diegows joined #gluster
11:42 siel joined #gluster
11:45 partner or are they supposed to stay there forever even if not mined further..
11:46 partner i've run a cleanup now and the duplicates are gone
11:47 rameez joined #gluster
11:48 rameez Hello Team..
11:48 rameez have one doubt regarding geo replication.
11:48 rameez does geo-replication support bi-directional replication?
11:49 partner nevermind the latter, thinking out loud but removing the duplicates did free up the space from .glusterfs, or at least that's where du reports the space being used
11:50 partner 3.4.2 seems to incorrectly report rebalance status :/
11:50 Slashman joined #gluster
11:54 tdasilva left #gluster
11:54 mwoodson_ joined #gluster
11:55 capri joined #gluster
11:55 lyang0 joined #gluster
11:56 overclk joined #gluster
12:02 itisravi_ joined #gluster
12:06 nullck joined #gluster
12:08 the-me joined #gluster
12:09 jag3773 joined #gluster
12:11 capri joined #gluster
12:14 edong23 joined #gluster
12:17 bene2 joined #gluster
12:19 sroy joined #gluster
12:20 rnz joined #gluster
12:22 keytab joined #gluster
12:26 kanagaraj joined #gluster
12:29 arcimboldo ndevos, FYI: changing the owner-uid/gid, restarting the *volume* and after a few fixes on the compute nodes, it worked!
12:30 ndevos arcimboldo: thanks for confirming!
12:31 arcimboldo do you have experience with openstack and gluster?
12:31 arcimboldo I couldn't find much documentation, I wonder who is actually using it in production
12:32 partner we are
12:33 * ndevos doesn't have experience with it, only knows a little how it technically should work
12:35 arcimboldo partner, do you mind sharing a few specs/experience with it? We are evaluating ceph and gluster for our next storage
12:36 arcimboldo I've found that a lot of people are using ceph, but I like the KISS approach of gluster more, especially if something goes very wrong; being able to access the data on the bricks makes me more comfortable...
12:37 decimoe joined #gluster
12:42 japuzzo joined #gluster
12:45 sjm joined #gluster
12:46 partner arcimboldo: i don't think we have anything special set up, using it for glance and cinder
12:47 theron joined #gluster
12:47 partner one volume for each
12:49 arcimboldo Do you know if it's possible to use gluster:/ images by default, or do I need to have a gluster volume mounted on /var/lib/nova/instances? I don't have local disk on a lot of nodes...
12:49 arcimboldo partner, how many storage nodes you have? How many cinder volumes are created?
12:52 VeggieMeat joined #gluster
12:52 samkottler joined #gluster
12:53 partner arcimboldo: it's really small currently, one storage node and a dozen or so cinder volumes, so i can't say anything about big-env performance at this point
12:56 lalatenduM joined #gluster
12:57 partner though i'm tempted to try it out on our larger gluster setup of 12 storage nodes and see how the performance goes there
12:58 d-fence joined #gluster
12:59 theron joined #gluster
12:59 dusmant joined #gluster
13:00 chirino joined #gluster
13:01 bet_ joined #gluster
13:04 qdk_ joined #gluster
13:05 hagarth joined #gluster
13:06 sac`away joined #gluster
13:06 tdasilva joined #gluster
13:10 ctria joined #gluster
13:11 hchiramm_ joined #gluster
13:11 arcimboldo uhm, I was stressing my VM, backed by a glusterfs (fuse) volume, and the VM disk went read-only...
13:14 shyam joined #gluster
13:18 lezo joined #gluster
13:20 jskinner_ joined #gluster
13:21 jmarley joined #gluster
13:25 arcimboldo I have a VM running with a disk on a glusterfs. After heavy load, the disk became "read-only", and it's still read-only after a reboot. How can I know whether it's gluster or libvirt that is making it ro?
13:27 flowouffff Hi Guys
13:29 flowouffff Quick question: are gluster-swift and swift3 (the amazon S3 middleware) compatible?
13:30 julim joined #gluster
13:31 tdasilva flowoufff: yes, but...gluster-swift is being renamed swiftonfile
13:31 brad_mssw joined #gluster
13:32 tdasilva so you should check that out
13:34 mjsmith2 joined #gluster
13:36 flowouffff yeah you're right
13:36 flowouffff i should have said swiftonfile and not gluster-swift
13:36 flowouffff i just installed the last version from github
13:37 flowouffff which works like a charm
13:37 flowouffff but i wanted to plug in the S3 API on top of swiftonfile
13:37 tdasilva flowoufff: nice! we are working to transition swiftonfile to be a Storage Policy
13:38 flowouffff Ok good news :)
13:39 tdasilva flowouffff: we tested swift3 a few weeks ago and it worked pretty well. there were a couple of APIs that did not work, but I think once swiftonfile becomes a SP of a vanilla swift cluster, they should be fine
13:39 tdasilva the APIs that didn't work were: GET Bucket (List Objects), PUT Object (Copy)
13:40 tdasilva flowouffff: can you share how you are planning to use swiftonfile?
13:41 flowouffff yeah i can share that with u in MP
13:41 daMaestro joined #gluster
13:45 harish_ joined #gluster
13:45 chirino_m joined #gluster
13:46 kkeithley joined #gluster
13:47 hchiramm_ joined #gluster
13:47 LebedevRI joined #gluster
13:49 B21956 joined #gluster
13:50 harold_ joined #gluster
13:53 dusmant joined #gluster
13:56 davinder8 joined #gluster
13:57 jmarley joined #gluster
13:57 jmarley joined #gluster
13:59 mortuar joined #gluster
14:03 zaitcev joined #gluster
14:05 mortuar joined #gluster
14:07 mayae joined #gluster
14:08 lalatenduM_ joined #gluster
14:13 coredump joined #gluster
14:16 Slashman joined #gluster
14:25 jobewan joined #gluster
14:28 mortuar joined #gluster
14:30 Ark joined #gluster
14:33 rotbeard joined #gluster
14:34 zyxe joined #gluster
14:35 rjoseph joined #gluster
14:37 coredump joined #gluster
14:39 Chewi joined #gluster
14:39 jcsp joined #gluster
14:39 shyam1 joined #gluster
14:50 raghug joined #gluster
14:51 Matthaeus joined #gluster
14:52 Chewi overclk: hi Venky. hope you're back now, seems you've been away? I would appreciate some help with http://supercolony.gluster.org/pipermail/gluster-users/2014-May/040431.html
14:52 glusterbot Title: [Gluster-users] Possible to pre-sync data before geo-rep? (at supercolony.gluster.org)
14:53 mortuar joined #gluster
14:54 jcsp joined #gluster
14:54 kdhananjay joined #gluster
14:55 bene2 joined #gluster
14:55 mayae joined #gluster
14:56 overclk Chewi: sure
14:56 Chewi overclk: thanks
14:56 overclk Chewi: I'll take a look, was away for a while...
14:59 overclk Chewi: so, you synced data before creating volumes. That would not guarantee gfids to be in sync (unless you took care of that too?).
14:59 keytab joined #gluster
15:00 Chewi overclk: at this point, I'm not entirely sure what gfids are. after creating the volumes and starting replication, I couldn't find any such directory on either side, which seems strange. plus I got those errors.
15:01 JustinClift *** Weekly GlusterFS Community Meeting now starting in #gluster-meeting (irc.freenode.net) ###
15:02 Chewi overclk: the .glusterfs directories did appear though
15:02 overclk Chewi: Any entity created in a gluster volume will be assigned a GFID (analogous to inode numbers).
15:03 Chewi overclk: is it because I created the volumes with the data already present (using force) instead of starting from an empty directory?
15:04 marbu joined #gluster
15:04 zerick joined #gluster
15:04 overclk Chewi: In that case the GFIDs (an extended attribute for file/directory) would not be there. Gluster would assign a gfid if it's not there on a lookup (e.g. on a stat())
15:05 overclk Chewi: Even if this happens, the GFIDs would not be similar b/w master and slave (and geo-rep 3.5 expects this).
15:05 Chewi overclk: okay, that makes sense. so to pre-sync, I have to sync the gfids too?
15:05 _dist joined #gluster
15:08 kshlm joined #gluster
15:08 overclk Chewi: logically yes. But I would still recommend that geo-sync take care of the initial sync. Tinkering with *things* in the backend directly is something that is not recommended.
15:08 hagarth1 joined #gluster
15:09 Chewi overclk: I'll give it a try for what it's worth. I can take an LVM snapshot just in case. ;) thanks a lot
15:10 plarsen joined #gluster
15:10 samkottler joined #gluster
15:10 overclk Chewi: sure. Let me know how things go. I'll be happy to help :)
15:16 mayae joined #gluster
15:21 Chewi overclk: ah wait, the .glusterfs directory holds the gfids? I was looking for a directory named ".gfid" because that's what the error message mentioned, but I still can't find one anywhere.
15:25 Chewi ...and these are hardlinks so I need rsync -H
15:26 Chewi I'll save this can of worms for tomorrow ;)
15:33 1JTAAEPLF joined #gluster
15:36 overclk Chewi: .glusterfs is the backend gfid store (yes, the entries are hard/soft links) [for dirs they are symlinks]
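[note: to see the GFID overclk describes, one can read the trusted.gfid extended attribute directly on a brick; the brick path below is a placeholder]
    getfattr -n trusted.gfid -e hex /bricks/brick1/path/to/file
    # the same GFID also appears under the brick's .glusterfs directory as
    # .glusterfs/<first-2-hex>/<next-2-hex>/<full-gfid> (a hard link; a symlink for directories)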
15:45 bala joined #gluster
15:47 jag3773 joined #gluster
15:49 jbd1 joined #gluster
15:53 deepakcs joined #gluster
15:57 Ramereth joined #gluster
15:59 cxx joined #gluster
16:00 cxx Hello, I'm on a fresh CentOS-installation (minimal) and just added gluster-repository and installed glusterfs-server. However, after "service glusterd start" nothing happens. Are there known missing dependencies?
16:05 jag3773 joined #gluster
16:07 semiosis doubt it.  have you disabled selinux?
16:07 semiosis also, what are you expecting to happen?  maybe the daemon is running
16:08 semiosis does 'gluster peer status' or 'gluster volume info' work?
16:08 sputnik13 joined #gluster
16:10 mbukatov joined #gluster
16:10 jag3773 joined #gluster
16:14 haomaiwa_ joined #gluster
16:16 cxx i just enabled epel and reinstalled and it started (no more dependencies were installed, still wondering what was wrong ;) but now rpcbind is not started correctly to do nfs-mounts on remote hosts
16:17 cxx we will reinstall with a basic-server packageset instead of the minimal packageset. We were using this before and never experienced such problems.
16:20 cxx I was expecting "# service glusterd start \n Starting glusterd:                                         [  OK  ]" but nothing happened. Gluster wasn't running (ps aux|grep glus did not show anything).
16:20 sjoeboo joined #gluster
16:21 JoeJulian cxx: glusterd -d
16:25 jrcresawn joined #gluster
16:26 cxx JoeJulian: thx, it is running now, but nfs-mounts are not possible. I'm trying to determine the missing packages in the minimal packageset. Something must be missing as it works perfectly out of the box when basic-server is chosen in the initial os-installation (CentOS 6.5 btw).
16:28 JoeJulian @nfs
16:28 glusterbot JoeJulian: To mount via nfs, most distros require the options, tcp,vers=3 -- Also an rpc port mapper (like rpcbind in EL distributions) should be running on the server, and the kernel nfs server (nfsd) should be disabled
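[note: as a concrete example of glusterbot's hint, a Gluster NFS mount usually looks like this; server, volume and mountpoint are placeholders]
    mount -t nfs -o vers=3,tcp server1:/myvol /mnt/myvol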
16:28 Mo_ joined #gluster
16:29 JoeJulian cxx: If you find that there's a dependency missing, please file a bug report.
16:29 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
16:29 jrcresawn joined #gluster
16:31 Matthaeus joined #gluster
16:33 jcsp left #gluster
16:33 mortuar joined #gluster
16:35 mayae joined #gluster
16:41 bene2 joined #gluster
16:50 XpineX joined #gluster
16:52 rwheeler joined #gluster
16:55 XpineX_ joined #gluster
17:11 jason__ joined #gluster
17:14 Slashman joined #gluster
17:21 ramteid joined #gluster
17:29 Matthaeus1 joined #gluster
17:37 zerick joined #gluster
17:37 sputnik13 joined #gluster
17:42 Licenser joined #gluster
17:46 sputnik13 joined #gluster
17:49 sputnik13 joined #gluster
17:57 kmai007 joined #gluster
17:58 mjsmith2 joined #gluster
17:59 atrius` joined #gluster
18:02 XpineX_ joined #gluster
18:06 sputnik13 joined #gluster
18:07 Licenser joined #gluster
18:10 _dist does anyone here know what gluster does special when its' on ZFS? (cause it knows)
18:10 hchiramm__ joined #gluster
18:24 hchiramm__ joined #gluster
18:28 Licenser joined #gluster
18:33 plarsen joined #gluster
18:34 dusmant joined #gluster
18:35 sputnik13 joined #gluster
18:38 mayae joined #gluster
18:49 Licenser joined #gluster
18:56 bene2 joined #gluster
18:58 jason2 joined #gluster
18:59 semiosis JoeJulian: naked ping
18:59 glusterbot semiosis: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
18:59 JoeJulian lol
19:00 semiosis ready to continue making debs?
19:00 sputnik13 joined #gluster
19:00 JoeJulian No, I'm having a severe problem with a server that thinks it's smarter than me.
19:01 semiosis ok good luck with that
19:01 JoeJulian Like I can ping from A->B through ipsec, but not B->A.
19:04 jag3773 joined #gluster
19:05 bene4 joined #gluster
19:06 semiosis routes
19:06 ctria joined #gluster
19:08 JoeJulian routes wouldn't allow the return packet from A->B
19:08 JoeJulian Does that boggle your mind yet? ;)
19:13 sputnik13 joined #gluster
19:15 MacWinner joined #gluster
19:19 Licenser joined #gluster
19:19 JoeJulian Aha! Fixed it.
19:19 JoeJulian Not sure how, but right now I really don't care.
19:19 JoeJulian ... since it's for my old job...
19:43 Licenser joined #gluster
19:47 chirino joined #gluster
19:48 Matthaeus joined #gluster
19:51 ndk joined #gluster
19:52 Ark joined #gluster
20:03 mortuar joined #gluster
20:04 jason2 joined #gluster
20:12 mayae joined #gluster
20:19 Ark joined #gluster
20:27 nneul joined #gluster
20:29 nneul when doing a new gluster setup, intended for low load (mainly just data files related to a service that are typically write-once, plus logs) - is there any recovery benefit to having three nodes vs. two? i.e. in most clusters, you need three nodes for tiebreaking, but it doesn't look like that is applicable for gluster
20:30 nneul was thinking of two nodes in primary site, and a third brick at a remote site (remote, though <10ms latency and high bandwidth)
20:33 _dist nneul: depends on the volume type, type of client connetions etc
20:34 nneul in order to not have to do other failover mechanisms, was intending to use the native fuse client.
20:34 nneul layout would likely be 4 (possibly 6) client systems talking to either 2 or 3 bricks.
20:34 _dist will any clients be located at the remote site?
20:34 nneul basically, needing a shared filesystem backend for a HA cluster.
20:35 nneul yes, but unless everything has failed over to the remote site, there won't be any significant traffic from them.
20:35 nneul in which case, primary site would be expected to be offline.
20:35 nneul the other scenario is regular maintenance where I rotate through outages on each server
20:36 nneul in that circumstance is there any difference from the client perspective between: 1 client -> (2 out of 3 bricks) vs (1 out of 2 bricks)
20:36 _dist the only danger I'm considering is one where your remote gets cut but local clients there are still working; depending on your setup that could be a problem where both a & b think they are the most correct (split-brain)
20:37 nneul true... in my case, my file IO behavior is such that I could 'rsync --newer' on all the storage nodes, with the caveat that it doesn't handle deletes...
20:38 _dist you could either give your "main" area quorum by having more there, or you could give the replica 2 a try and _try_ to break it by simulating situations like that
20:38 _dist I mean test, guess I'm getting tired
20:38 nneul so gluster will do quorum behavior in the event of recoveries? (I haven't been able to find much good intro on failure modes/events/etc.)
20:39 _dist http://gluster.org/community/documentation/index.php/Gluster_3.2:_Setting_Volume_Options#cluster.quorum-type
20:39 glusterbot Title: Gluster 3.2: Setting Volume Options - GlusterDocumentation (at gluster.org)
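[note: a minimal sketch of the client-quorum options from the page linked above; "myvol" and the count are placeholders]
    gluster volume set myvol cluster.quorum-type auto
    # or require a fixed number of replica bricks to be up before allowing writes:
    gluster volume set myvol cluster.quorum-type fixed
    gluster volume set myvol cluster.quorum-count 2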
20:39 kke joined #gluster
20:40 nneul in case of any interest - the use case is primarily storing audio/media files for voicemail and IVR for a voip system.
20:40 partner joined #gluster
20:40 Peanut joined #gluster
20:40 portante joined #gluster
20:40 portante joined #gluster
20:40 codex joined #gluster
20:40 oxidane joined #gluster
20:40 georgeh|workstat joined #gluster
20:40 prasanth|offline joined #gluster
20:40 prasanth|offline joined #gluster
20:40 nage joined #gluster
20:41 nneul if there is a split-brain - but not touching the same file - will it resolve on its own?
20:41 nneul i.e. file A written while split on left, and file B written while split on right?
20:41 semiosis that's not split brain
20:41 semiosis split brain is, by definition, when it can't be automatically healed
20:42 nneul ok, so it's purely on a single-file basis, not on the volume level
20:42 nneul or single chunk in the case of striped?
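[note: for reference, files that are in real split-brain (i.e. cannot be healed automatically) can be listed per volume; "myvol" is a placeholder]
    gluster volume heal myvol info split-brain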
20:43 zerick joined #gluster
20:44 JoeJulian semiosis: I'm finally ready to get back to this deb.
20:45 semiosis never again will i order corsair memory
20:47 JoeJulian I always order crucial.
20:48 semiosis ever had a problem with it?
20:48 JoeJulian Never with crucial. I have with everyone else at one time or another.
20:48 JoeJulian Probably means that crucial's due for a bad batch. :D
20:58 JoeJulian semiosis: ok, so I have this debian tree in the source tree, now what?
20:58 semiosis what version of the source tree?
20:58 JoeJulian The one that was in https://launchpad.net/~semiosis/+archive/ubuntu-glusterfs-3.4/+files/glusterfs_3.4.4.orig.tar.gz
20:59 semiosis great, that makes it easy
20:59 semiosis apply the patch to the source tree
21:00 semiosis then run command 'dpkg-source --commit' in the root of the source tree
21:00 tdasilva left #gluster
21:00 semiosis this will create a patch for the package in debian/patches/patchname, and ask you for some input about the patch
21:01 JoeJulian apt-get install git is kind-of a finger-twister.
21:02 semiosis cant you just download the patch from gerrit?
21:03 jason__ joined #gluster
21:04 JoeJulian not that I'm aware of
21:05 semiosis http://review.gluster.org/#/c/8029/1/xlators/cluster/dht/src/dht-helper.c
21:05 glusterbot Title: Gerrit Code Review (at review.gluster.org)
21:06 semiosis just make that change by hand to the source tree
21:06 semiosis that's from: [12:53] <hagarth> JoeJulian: backport for release-3.4 - http://review.gluster.org/8029
21:06 glusterbot Title: Gerrit Code Review (at review.gluster.org)
21:08 Matthaeus joined #gluster
21:12 semiosis JoeJulian: ?
21:12 JoeJulian done.
21:13 semiosis did you create the patch in debian/patches with dpkg-source?
21:13 JoeJulian yep
21:13 semiosis next up is to edit the debian changelog, which you can do by hand
21:14 semiosis duplicate the top section, increment the package patch version (precise1 -> precise2) on the first line and change my name/email to yours on the last line
21:14 semiosis there's a command for this (dch) but i never use it
21:15 jason2 joined #gluster
21:15 semiosis once you have the changelog updated appropriately run the command 'debuild -S -sa -uc' (iirc) in the root of the source tree
21:15 semiosis if that produces errors, please pastie them for me
21:16 semiosis (unless you can resolve yourself)
21:17 semiosis if that goes well then in the parent directory (just above the root of the source tree) you should have a few new files named for your new (precise2) package version
21:17 semiosis among them a .dsc, a .debian.tar.gz, and a .changes file
21:17 semiosis you now have a source package with your added patch, ready to build
21:18 semiosis please let me know if you've got to this point before I continue
21:18 weykent joined #gluster
21:18 silky joined #gluster
21:19 osiekhan1 joined #gluster
21:19 JoeJulian looks like it worked
21:19 semiosis you installed the pbuilder package already right?
21:20 JoeJulian yes
21:20 semiosis pbuilder maintains a chrooted base distro and manages package builds on it, it's the easy way to make packages
21:20 JoeJulian btw... just fyi, the only dependency that was missing was debhelper
21:21 semiosis build-essential should have pulled that in
21:21 semiosis i thought
21:21 semiosis ok, noted
21:21 semiosis so we now need to set up the base for pbuilder
21:21 semiosis do this with 'sudo pbuilder --create --distribution precise' (if you're building packages for precise)
21:22 semiosis this will take some time
21:23 JoeJulian as an aside... I upgraded my desktop. Mirrored 2TB hybrid drives. Nice and fast!
21:23 semiosis wow
21:23 semiosis what do you need 2TB for?
21:24 JoeJulian It was $30 more than 1TB...
21:24 sjm left #gluster
21:24 JoeJulian It's like buying popcorn.
21:24 semiosis SSD for me
21:24 JoeJulian Do I NEED the super-jumbo fat-builder bucket? NO! ... but it's only $1 more.
21:24 Matthaeus You don't ever ask what a person needs so much storage for.
21:25 JoeJulian hehe
21:25 semiosis Matthaeus: lol
21:26 JoeJulian Matthaeus: you run sas drives don't you?
21:28 JoeJulian Someone else in here has sas drives... out with it!
21:29 JoeJulian I'm going to naked ping everyone in this channel until I get a hit... ;)
21:29 Matthaeus JoeJulian: I run sata, sorry.
21:29 sadbox joined #gluster
21:30 dblack joined #gluster
21:32 semiosis JoeJulian: how's pbuilder going?
21:33 JoeJulian m0zes, mjrosenb, Ramereth : How about you? SAS?
21:33 JoeJulian semiosis: just finished.
21:33 semiosis no errors?
21:33 JoeJulian nope
21:33 semiosis good
21:33 m0zes JoeJulian: nearline-sas here.
21:33 Intensity joined #gluster
21:33 JoeJulian m0zes: What's your smart data "Non-medium error count" look like?
21:34 jiffe98 I'm guessing the windows client using winfuse is scrapped?
21:34 m0zes JoeJulian: that isn't exposed by my raid controllers.
21:34 calum_ joined #gluster
21:34 JoeJulian damn
21:35 semiosis JoeJulian: now (try to) build the package with 'sudo pbuilder --build $thedscfile' where that's the new dsc file you created with debuild a few mins ago
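[note: a condensed recap of the packaging steps semiosis walks through above, assuming a precise target; version strings and the .dsc name are placeholders]
    # in the root of the unpacked source tree, after applying the patch by hand:
    dpkg-source --commit
    # edit debian/changelog: duplicate the top entry, bump precise1 -> precise2
    debuild -S -sa -uc
    # one-time chroot setup, then build the source package produced above:
    sudo pbuilder --create --distribution precise
    sudo pbuilder --build ../glusterfs_<new-version>.dsc
    # the resulting debs end up in /var/cache/pbuilder/result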
21:37 Ramereth we're pretty heavy sas users here
21:37 Ramereth i was just doing some research on hybrid drives recently. Still not sure about it
21:38 XpineX__ joined #gluster
21:38 JoeJulian Ramereth: same question. "Non-medium error count". I don't think it should be as high as I'm seeing.
21:39 Ramereth JoeJulian: not sure, I would have to look. I mostly use the propriety tools to check the raid/disk status
21:44 JoeJulian semiosis: looks like it completed without error.
21:44 semiosis woo!
21:44 semiosis your debs are in /var/cache/pbuilder/result
21:44 semiosis gotta say i'm a little bit shocked we got this going the first try
21:44 JoeJulian hehe
21:45 semiosis it should have hurt a little bit
21:45 JoeJulian Well, I started with what you already had working...
21:45 XpineX__ joined #gluster
21:46 elico joined #gluster
21:51 n0de joined #gluster
21:53 frankenspanker joined #gluster
21:53 Matthaeus joined #gluster
22:04 rwheeler joined #gluster
22:11 jrcresawn joined #gluster
22:18 bgpepi joined #gluster
22:24 chirino_m joined #gluster
22:25 JoeJulian frankenspanker: Hey there. Just saw the pm (I don't check for them very often and they're below the scroll).
22:25 jag3773 joined #gluster
22:25 JoeJulian frankenspanker: What version of glusterfs are you using?
22:25 Matthaeus joined #gluster
22:26 frankenspanker we are using 3.4
22:28 jason_ joined #gluster
22:28 JoeJulian Hrm, that's what I developed that on. fetch_volfile is supposed to retrieve /var/lib/glusterd/vols/$volume/$volume-fuse.vol for $volume from a server.
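[note: the client volfile JoeJulian refers to can be inspected directly on a server to see what the script should be fetching; "myvol" is a placeholder]
    cat /var/lib/glusterd/vols/myvol/myvol-fuse.vol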
22:30 jason2 joined #gluster
22:32 frankenspanker is there a possible problem because I'm running it from one of the 2 servers in the cluster
22:33 frankenspanker the while loop in fetchvol.py only goes through 1 time, meaning the port being used is 1
22:35 JoeJulian frankenspanker: check /var/log/glusterfs/etc-glusterfs-glusterd.vol.log for errors.
22:38 fidevo joined #gluster
22:39 frankenspanker usr/lib64/glusterfs/3.4.0/rpc-transport/rdma.so: cannot open shared object file: No such file or directory
22:39 XpineX_ joined #gluster
22:39 frankenspanker volume 'rdma.management': transport-type 'rdma' is not valid or not found on this machine
22:40 JoeJulian That's normal
22:41 JoeJulian frankenspanker: Try splitmount against one of the servers, then "tail -n 50 /var/log/glusterfs/etc-glusterfs-glusterd.vol.log" on the server you specified then go to fpaste.org, paste that and paste the link it generates here.
22:45 frankenspanker http://ur1.ca/hi9vo
22:45 glusterbot Title: #109108 Fedora Project Pastebin (at ur1.ca)
22:46 frankenspanker nothing is being logged there from the attempted splitmount unfortunately
22:47 frankenspanker i'm doing this from one of the 2 servers in the cluster if that matters
22:47 frankenspanker i don't have another environment to run this from at the moment
22:51 zerick joined #gluster
22:51 sh_t joined #gluster
22:59 sh_t hey folks. can anyone here confirm if the write behind window size option actually does anything? I'm looking into options for speeding up write IO and this parameter seems to have no effect.
22:59 sh_t i suppose i should have said flush-behind being enabled in combination with the window size :)
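[note: for reference, the two options sh_t mentions are set per volume like this; "myvol" and the 4MB window are placeholders]
    gluster volume set myvol performance.flush-behind on
    gluster volume set myvol performance.write-behind-window-size 4MB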
23:08 savorywatt___ joined #gluster
23:08 JoeJulian frankenspanker: in fetchvol.py, change RPC Version to 1 and see if that works.
23:09 frankenspanker i will give that a try, thanks
23:10 JoeJulian sh_t: I have no empirical evidence. Do let me know if you come up with any.
23:11 JoeJulian ... and pick a different nick... :)
23:11 jason2 joined #gluster
23:11 frankenspanker hehe ok will do
23:14 sh_t well JoeJulian I thought the idea was clear.. aggregate writes together and write them in the background without blocking. it seems to have zero impact for me
23:14 sh_t I've seen a few people online saying it has no impact.. just wondering if its broken or what
23:19 frankenspanker hey JoeJulian, i tried RPC Version 1 and it had no effect....interesting thing is I have some debugging which prints out the value of vollen in the script, and with RPC version set to 2, vollen comes back as -16, but when it's version 1, vollen comes back as 0
23:20 JoeJulian frankenspanker: Ok, I think I have an idea what's going on. I'll try to take a look at that tonight.
23:21 frankenspanker ok, thanks Joe
23:25 chirino joined #gluster
23:33 mortuar joined #gluster
23:52 frankenspanker left #gluster
