IRC log for #gluster, 2015-04-17

All times shown according to UTC.

Time Nick Message
00:46 Mark__ joined #gluster
00:49 Mark__ Hi all, I just have a quick question regarding geo-replication: is it possible to set up "two-way" geo-replication? By that I mean making cluster2 a slave of cluster1 and then making cluster1 a slave of cluster2... Or will that just cause issues with circular replication?
00:50 Mark__ Thinking along the lines of multi-master replication in MySQL if that helps
00:54 Arminder joined #gluster
00:55 T3 joined #gluster
00:56 Arminder joined #gluster
00:56 glusterbot News from newglusterbugs: [Bug 1212660] Crashes in logging code <https://bugzilla.redhat.com/show_bug.cgi?id=1212660>
00:57 Arminder joined #gluster
00:58 Arminder joined #gluster
00:59 Arminder joined #gluster
01:01 T3 joined #gluster
01:02 Arminder joined #gluster
01:03 Arminder joined #gluster
01:04 Arminder joined #gluster
01:05 plarsen joined #gluster
01:12 plarsen joined #gluster
01:15 JoeJulian trig: I would certainly start with network. All clients should be able to connect to all servers on these ,,(ports).
01:15 glusterbot trig: glusterd's management port is 24007/tcp (also 24008/tcp if you use rdma). Bricks (glusterfsd) use 49152 & up since 3.4.0 (24009 & up previously). (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111 and 2049 since 3.4.
01:16 JoeJulian trig: "gluster volume status" will tell you which ports the bricks are currently listening on. You can test connectivity with telnet.
01:16 JoeJulian Mark__: no two-way geo-rep yet.
01:17 JoeJulian @lucky mariadb galera
01:17 glusterbot JoeJulian: https://mariadb.com/kb/en/mariadb/what-is-mariadb-galera-cluster/
01:17 JoeJulian Mark__: ^^
01:20 Mark__ Thanks
01:20 Mark__ We are using Galera at the moment, having mixed results with it
01:24 Mark__ left #gluster
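Since only one-way geo-replication exists at this point, a sketch of the supported master-to-slave setup, assuming volumes "mastervol" and "slavevol" and a slave host "slavehost" (all hypothetical names):

    # On a master node: generate and distribute the pem keys, then create the session.
    gluster system:: execute gsec_create
    gluster volume geo-replication mastervol slavehost::slavevol create push-pem

    # Start replication and confirm the session comes up.
    gluster volume geo-replication mastervol slavehost::slavevol start
    gluster volume geo-replication mastervol slavehost::slavevol status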
01:40 badone__ joined #gluster
01:53 harish joined #gluster
02:00 gildub joined #gluster
02:14 Pupeno_ joined #gluster
02:24 maveric_amitc_ joined #gluster
02:36 nangthang joined #gluster
03:02 kdhananjay joined #gluster
03:05 atinmu joined #gluster
03:23 huleboer joined #gluster
03:35 Pupeno joined #gluster
03:54 poornimag joined #gluster
03:56 itisravi joined #gluster
04:03 shubhendu joined #gluster
04:06 kanagaraj joined #gluster
04:07 bharata-rao joined #gluster
04:10 lalatenduM joined #gluster
04:11 Arminder- joined #gluster
04:16 vimal joined #gluster
04:20 rafi joined #gluster
04:20 anoopcs joined #gluster
04:26 RameshN_ joined #gluster
04:27 glusterbot News from newglusterbugs: [Bug 1212676] NetBSD port <https://bugzilla.redhat.com/show_bug.cgi?id=1212676>
04:29 ira joined #gluster
04:30 hchiramm joined #gluster
04:30 kshlm joined #gluster
04:33 Manikandan joined #gluster
04:33 jiffin joined #gluster
04:33 Manikandan_ joined #gluster
04:35 spandit joined #gluster
04:41 Arminder joined #gluster
04:42 haomaiwang joined #gluster
04:42 Arminder joined #gluster
04:46 kumar joined #gluster
04:48 Arminder- joined #gluster
04:51 hgowtham joined #gluster
04:54 hgowtham joined #gluster
04:55 Arminder joined #gluster
04:55 nbalacha joined #gluster
04:56 ashiq joined #gluster
04:56 pppp joined #gluster
04:56 Arminder- joined #gluster
05:01 Arminder joined #gluster
05:02 Arminder- joined #gluster
05:06 Arminder joined #gluster
05:09 Arminder- joined #gluster
05:11 Arminder joined #gluster
05:13 ppai joined #gluster
05:15 Arminder- joined #gluster
05:15 Bhaskarakiran joined #gluster
05:16 Arminder joined #gluster
05:19 Arminder joined #gluster
05:20 Arminder joined #gluster
05:21 Arminder joined #gluster
05:21 schandra joined #gluster
05:22 Arminder joined #gluster
05:23 corretico joined #gluster
05:25 sripathi joined #gluster
05:26 Arminder joined #gluster
05:29 Arminder joined #gluster
05:30 Pupeno_ joined #gluster
05:30 Arminder joined #gluster
05:31 aravindavk joined #gluster
05:31 Arminder- joined #gluster
05:32 gem joined #gluster
05:32 Arminder- joined #gluster
05:35 Arminder joined #gluster
05:37 gem_ joined #gluster
05:37 SOLDIERz joined #gluster
05:38 Arminder- joined #gluster
05:39 Arminder- joined #gluster
05:40 Arminder- joined #gluster
05:43 maveric_amitc_ joined #gluster
05:43 Arminder joined #gluster
05:47 Arminder joined #gluster
05:51 karnan joined #gluster
05:57 deepakcs joined #gluster
05:58 Arminder- joined #gluster
06:01 anil joined #gluster
06:05 harish joined #gluster
06:06 Arminder joined #gluster
06:07 Arminder- joined #gluster
06:08 overclk joined #gluster
06:09 lalatenduM joined #gluster
06:09 nishanth joined #gluster
06:12 Arminder joined #gluster
06:13 jtux joined #gluster
06:18 Arminder joined #gluster
06:19 atalur joined #gluster
06:22 gem_ joined #gluster
06:23 aravindavk joined #gluster
06:25 soumya joined #gluster
06:27 glusterbot News from newglusterbugs: [Bug 1212684] GlusterD segfaults when started with management SSL <https://bugzilla.redhat.com/show_bug.cgi?id=1212684>
06:28 Arminder joined #gluster
06:29 Arminder- joined #gluster
06:33 corretico joined #gluster
06:33 Arminder joined #gluster
06:34 hchiramm joined #gluster
06:34 meghanam joined #gluster
06:36 ghenry joined #gluster
06:36 Iodun joined #gluster
06:37 Arminder- joined #gluster
06:38 Arminder joined #gluster
06:39 Arminder joined #gluster
06:43 Arminder- joined #gluster
06:48 Arminder joined #gluster
06:49 Arminder- joined #gluster
06:49 jtux joined #gluster
06:53 Arminder joined #gluster
06:55 Arminder joined #gluster
06:59 Arminder- joined #gluster
07:01 poornimag joined #gluster
07:03 Arminder joined #gluster
07:05 atinmu joined #gluster
07:07 [Enrico] joined #gluster
07:07 Arminder- joined #gluster
07:09 Arminder joined #gluster
07:09 XpineX joined #gluster
07:11 Arminder- joined #gluster
07:12 Arminder- joined #gluster
07:16 deniszh joined #gluster
07:16 cornus_ammonis joined #gluster
07:16 Arminder joined #gluster
07:18 Arminder- joined #gluster
07:33 SOLDIERz joined #gluster
07:38 Arminder joined #gluster
07:38 Slashman joined #gluster
07:39 fsimonce joined #gluster
07:39 Arminder- joined #gluster
07:43 Arminder joined #gluster
07:44 Arminder- joined #gluster
07:49 Arminder joined #gluster
07:52 Arminder- joined #gluster
07:52 gem_ joined #gluster
07:54 Arminder joined #gluster
07:56 R0ok_ joined #gluster
07:57 ktosiek joined #gluster
07:58 LebedevRI joined #gluster
08:03 liquidat joined #gluster
08:03 XpineX joined #gluster
08:07 gem_ joined #gluster
08:12 calston left #gluster
08:19 SOLDIERz joined #gluster
08:21 SOLDIERz hello everyone, quick question about glusterfs
08:22 SOLDIERz got a question regarding glusterfs performance: we have a 12-node cluster with a replica count of 3. This week we had an outage of one node, which should basically be covered by two other nodes, because we arranged the cluster that way
08:24 SOLDIERz but we saw in our statistics that the cluster still had performance issues: the approx. response time rose from 0.2 seconds to 1 second
08:24 SOLDIERz so this is basically five times slower; is there any explanation for such behaviour?
08:25 atalur joined #gluster
08:26 ctria joined #gluster
08:26 SOLDIERz to give you a better view, the cluster is arranged basically like this: http://pastebin.com/5QXqyrPP
08:26 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
08:27 Norky joined #gluster
08:27 SOLDIERz so you see node2 and node6 are part of the same replica set, which should basically help out if node8 fails
08:28 SOLDIERz but there were performance drops for basically no reason
08:28 SOLDIERz it was node 8 which failed
08:28 T0aD joined #gluster
08:29 atalur joined #gluster
08:34 hagarth joined #gluster
08:38 SOLDIERz joined #gluster
08:40 karnan joined #gluster
08:50 pcaruana joined #gluster
08:52 Pupeno joined #gluster
08:59 Leildin joined #gluster
09:00 anrao joined #gluster
09:01 Arminder joined #gluster
09:02 Arminder joined #gluster
09:05 Arminder joined #gluster
09:06 Arminder joined #gluster
09:07 Arminder joined #gluster
09:08 gem_ joined #gluster
09:10 Peppard joined #gluster
09:12 Arminder joined #gluster
09:16 hagarth joined #gluster
09:16 rafi joined #gluster
09:17 SOLDIERz joined #gluster
09:18 SOLDIERz hello everyone, quick question about glusterfs performance: we have a 12-node cluster with a replica count of 3. This week we had an outage of one node, which should basically be covered by two other nodes, because we arranged the cluster that way. But we saw in our statistics that the cluster still had performance issues: the approx. response time rose from 0.2 seconds to 1 second
09:18 SOLDIERz so this is basically five times slower; is there any explanation for such behaviour?
09:18 SOLDIERz to give you a better view, the cluster is arranged basically like this: http://pastebin.com/5QXqyrPP
09:18 SOLDIERz so you see node2 and node6 are part of the same replica set, which should basically help out if node8 fails, but there were performance drops for basically no reason; it was node8 which failed
09:18 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
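Two usual suspects for a slowdown like this: clients stall on a dead brick until network.ping-timeout (42 seconds by default) expires, and once the node returns, self-heal traffic competes with client I/O. A sketch of the standard checks, assuming a volume named "myvol" (hypothetical):

    # Which bricks are online, and on which ports?
    gluster volume status myvol

    # Entries still pending self-heal after the failed node came back.
    gluster volume heal myvol info

    # Lowering ping-timeout shortens the stall when a brick dies, at the cost of
    # more spurious disconnects on a flaky network -- a trade-off, not a blanket fix.
    gluster volume set myvol network.ping-timeout 10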
09:23 gem_ joined #gluster
09:30 RaSTar ndevos: need your inputs on http://review.gluster.org/#/c/9797/
09:32 Slashman joined #gluster
09:34 ndevos RaSTar: yes, I know... sorry for the delay
09:35 ndevos RaSTar: I am not aware of any usages like point 4 in your comment, but yes, PLEASE mail gluster-devel about it :)
09:35 * ndevos goes offline again, and will be back later
09:37 ashiq joined #gluster
09:44 ppai joined #gluster
09:55 anrao joined #gluster
09:58 glusterbot News from newglusterbugs: [Bug 1212762] gluster volume info api is broken <https://bugzilla.redhat.com/show_bug.cgi?id=1212762>
10:03 hchiramm joined #gluster
10:12 hchiramm joined #gluster
10:12 hchiramm joined #gluster
10:12 anoopcs joined #gluster
10:13 hchiramm joined #gluster
10:15 RaSTar thanks ndevos, ndevos++
10:15 glusterbot RaSTar: ndevos's karma is now 12
10:16 nishanth joined #gluster
10:23 rafi ls
10:26 hchiramm joined #gluster
10:41 Debloper joined #gluster
10:44 thangnn_ joined #gluster
10:44 hchiramm joined #gluster
10:49 aravindavk joined #gluster
10:56 jmarley joined #gluster
10:56 gem_ joined #gluster
10:57 ira joined #gluster
10:59 ashiq joined #gluster
11:01 hagarth joined #gluster
11:06 SOLDIERz joined #gluster
11:10 rjoseph joined #gluster
11:10 SOLDIERz hello everyone, quick question about glusterfs performance: we have a 12-node cluster with a replica count of 3. This week we had an outage of one node, which should basically be covered by two other nodes, because we arranged the cluster that way. But we saw in our statistics that the cluster still had performance issues: the approx. response time rose from 0.2 seconds to 1 second
11:10 SOLDIERz so this is basically five times slower; is there any explanation for such behaviour?
11:10 SOLDIERz to give you a better view, the cluster is arranged basically like this: http://pastebin.com/5QXqyrPP
11:10 SOLDIERz so you see node2 and node6 are part of the same replica set, which should basically help out if node8 fails, but there were performance drops for basically no reason; it was node8 which failed
11:10 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
11:14 anrao joined #gluster
11:17 sakshi joined #gluster
11:28 RobertLaptop joined #gluster
11:29 harish joined #gluster
11:32 dgandhi joined #gluster
11:46 gem__ joined #gluster
11:46 julim joined #gluster
11:56 ppai joined #gluster
11:57 jdarcy joined #gluster
11:59 glusterbot News from newglusterbugs: [Bug 1212816] NFS-Ganesha: Handle peer probe and peer detach when features.ganesha is enabled <https://bugzilla.redhat.com/show_bug.cgi?id=1212816>
12:01 Guest97771 joined #gluster
12:02 gem__ joined #gluster
12:17 rjoseph joined #gluster
12:20 chirino joined #gluster
12:27 ashiq joined #gluster
12:29 glusterbot News from newglusterbugs: [Bug 1212822] Data Tiering: link files getting created on hot tier for all the files in cold tier <https://bugzilla.redhat.com/show_bug.cgi?id=1212822>
12:34 harish joined #gluster
12:40 rjoseph joined #gluster
12:45 B21956 joined #gluster
12:49 corretico joined #gluster
12:50 wkf joined #gluster
12:59 anrao joined #gluster
12:59 glusterbot News from newglusterbugs: [Bug 1212830] Data Tiering: AFR(replica) self-heal deamon details go missing on attach-tier <https://bugzilla.redhat.com/show_bug.cgi?id=1212830>
13:00 baoboa joined #gluster
13:00 DV_ joined #gluster
13:17 DV_ joined #gluster
13:18 hagarth joined #gluster
13:22 gnudna joined #gluster
13:28 jmarley joined #gluster
13:29 jkroon joined #gluster
13:29 jkroon home.log-20150412:[2015-04-08 17:45:36.519856] W [afr-inode-read.c:1927:afr_readv] 0-gv_home-replicate-0: Failed on 81f65e2d-475f-47b1-a030-e973f2c5b3d3 as split-brain is seen. Returning EIO.
13:29 glusterbot News from newglusterbugs: [Bug 1212842] tar on a glusterfs mount displays "file changed as we read it" even though the file was not changed <https://bugzilla.redhat.com/show_bug.cgi?id=1212842>
13:29 hamiller joined #gluster
13:29 jkroon hi all, seeing that when I issue apache2 reload with logs on glusterfs.
13:29 jkroon not always, but frequently enough to cause some issues.
13:30 jkroon simply restarting apache again directly after that and all works fine ...
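The log line jkroon pasted names the volume (gv_home) and the gfid of the split-brained file; a sketch of the usual follow-up, with the brick-side path shown as a placeholder:

    # List entries currently flagged as split-brain on this volume.
    gluster volume heal gv_home info split-brain

    # On each brick, inspect the AFR changelog xattrs of the affected file
    # to see which copy diverged (the path here is hypothetical).
    getfattr -m . -d -e hex /bricks/gv_home/path/to/logfile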
13:30 georgeh-LT2 joined #gluster
13:31 nottc joined #gluster
13:38 nbalacha joined #gluster
13:38 SOLDIERz joined #gluster
13:41 Gill_ joined #gluster
14:03 deeville joined #gluster
14:05 atinmu joined #gluster
14:08 deeville hi folks, what's your opinion on best practices around root-squashing for client access to gluster volumes? I had server.root-squashing set to on, but I see a lot of permission errors in glustershd.log. One of them was this Perl subfolder that had no permissions.
14:10 deeville when server.root-squashing = off, the problems seem to go away. However, now I have the issue of local sudo, and sudo to the mounted gluster-volumes
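For reference, the knob being toggled here (the volume option itself is spelled server.root-squash); "myvol" is a hypothetical volume name:

    # Reconfigured options appear under "Options Reconfigured" in volume info.
    gluster volume info myvol

    # Disable root squashing for this volume.
    gluster volume set myvol server.root-squash off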
14:11 jkroon root squashing comes from nfs and is relatively easy to bypass anyway, so if it's the same (similar?) here, then imho, don't bother.
14:15 plarsen joined #gluster
14:15 bennyturns joined #gluster
14:15 the-me joined #gluster
14:17 deepakcs joined #gluster
14:18 deeville jkroon, thanks for the reply. How would you then prevent local sudoers from doing evil things on the mounted volumes? Do you just restrict what they can do via the /etc/sudoers file?
14:19 jkroon if they can become root outright they can su to the owner of files anyway.
14:19 jkroon so root squash is semi pointless in that respect.
14:20 jkroon and yes, you should change your mindset, don't restrict them, rather allow them to do certain (trusted?) things.
14:20 jkroon ie:  outright reject everything and only allow what you trust them with.
14:21 gnudna deeville /path/to/bin in sudo
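The whitelist approach jkroon and gnudna describe, as a sudoers sketch; the group name and command list are hypothetical, and the file should only ever be edited via visudo:

    # /etc/sudoers.d/gluster-ops
    # Grant only the specific commands these users are trusted with;
    # everything else is rejected by default.
    %glusterops ALL=(root) /usr/sbin/gluster volume status *, \
                           /usr/sbin/service apache2 reload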
14:25 deeville jkroon, thanks for reminding me, I actually do that all the time. lol
14:25 deeville gnudna, thanks as well!
14:33 lalatenduM joined #gluster
14:50 LebedevRI joined #gluster
14:51 _Bryan_ joined #gluster
14:54 roost__ joined #gluster
14:55 Guest79523 joined #gluster
14:59 LostPlanet left #gluster
15:05 DV__ joined #gluster
15:09 corretico joined #gluster
15:18 nangthang joined #gluster
15:24 xiu JoeJulian: hi, we talked about https://joejulian.name/blog/glusterfs-bit-by-ext4-structure-change/ earlier this week. Do you know a way to reproduce this issue? I'm still trying to reproduce this on a test cluster without success.
15:25 dgandhi joined #gluster
15:34 ghenry joined #gluster
15:35 coredump joined #gluster
15:38 lalatenduM joined #gluster
15:46 cholcombe joined #gluster
15:47 kdhananjay joined #gluster
15:47 Le22S joined #gluster
16:12 Gill joined #gluster
16:16 coredump joined #gluster
16:17 RayTrace_ joined #gluster
16:19 dberry joined #gluster
16:20 dberry joined #gluster
16:22 nbalacha joined #gluster
16:36 Gill_ joined #gluster
16:40 ChrisHolcombe joined #gluster
16:40 JoeJulian xiu: Use a broken kernel with that old version of gluster. Format the brick ext4. Make a bunch of files in a directory. mount the client. ls.
16:42 JoeJulian The ability to hit that bug depends on some set of circumstances that I'm not entirely clear on. Something to do with filename hashes (not using gluster's hashing algorithm, but ext's) I think.
16:43 JoeJulian Also there has to be more than one MTU of data in the directory listing.
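JoeJulian's recipe condensed into a sketch. It assumes an affected kernel (one with the ext4 directory-hash change, roughly 3.3 and later or a distro backport) plus a pre-fix glusterfs; server, brick, and volume names are hypothetical:

    # Server side: ext4 brick, single-brick volume.
    mkfs.ext4 /dev/sdb1 && mount /dev/sdb1 /bricks/b1
    gluster volume create testvol server1:/bricks/b1
    gluster volume start testvol

    # Client side: enough entries that the listing spans multiple readdir replies.
    mount -t glusterfs server1:/testvol /mnt
    touch /mnt/file{00001..05000}
    ls /mnt    # on affected combinations this loops forever or repeats entries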
16:45 lalatenduM joined #gluster
16:47 Gill joined #gluster
16:55 soumya joined #gluster
16:58 gem__ joined #gluster
17:00 glusterbot News from newglusterbugs: [Bug 1212923] [New] - Snapshot creation fails when selinux is in Enforcing mode on RHEL7. <https://bugzilla.redhat.com/show_bug.cgi?id=1212923>
17:24 tg2 hey, anybody know if this is the bug that was fixed in 3.6.2? http://www.fpaste.org/212359/92912071/
17:25 tg2 @semiosis re:3.6.2 on precise - any progress?
17:47 kripper joined #gluster
17:47 kripper hi, I have a question about performance:
17:48 kripper I'm building a cluster with 3 x 1GbE servers
17:48 kripper I have the option to provide one server with SSD disks
17:49 kripper my question is whether using one node with SSD disks will increase read/write performance
17:49 tg2 not sure if latest has support for tiered storage
17:49 tg2 but it was on the roadmap
18:04 gnudna any benefit in increasing the mtu to 9000 on the network?
18:04 gnudna will gluster be able to take advantage of this?
18:05 gnudna I ask because in general I have changed the disk subsystem to raid0 and am using replicated volumes to preserve the data on 2 different nodes
18:05 Gill joined #gluster
18:20 plarsen joined #gluster
18:23 xiu JoeJulian: ok thanks, what do you mean by MTU ?
18:33 JoeJulian @lucky mtu
18:33 glusterbot JoeJulian: http://www.mtu.edu/
18:33 JoeJulian hrmph
18:34 JoeJulian xiu: http://en.wikipedia.org/wiki/Maximum_transmission_unit
18:34 JoeJulian kripper: no
18:35 xiu oh ok sorry :) thought you meant something gluster specific
18:35 schwing I've been curious about this, too. I set mine to 9000 as a test and haven't noticed any differences in my setup
18:35 xiu thanks!
18:35 JoeJulian kripper: If I had that option I would put one in each server and use it for the xfs journal.
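What that looks like on disk: XFS supports an external log (journal) device. A sketch assuming /dev/sdb1 is the data disk and /dev/ssd1 is a small partition on the SSD (both hypothetical devices):

    # Put the filesystem's journal on the SSD partition at mkfs time.
    mkfs.xfs -l logdev=/dev/ssd1,size=128m /dev/sdb1

    # The external log device must be named again on every mount.
    mount -o logdev=/dev/ssd1,noatime /dev/sdb1 /bricks/b1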
18:37 JoeJulian gnudna, schwing: yes. Reads and writes that exceed the mtu will incur the additional 40 byte tcp header overhead. By increasing your MTU from 1500 to 9000 that overhead goes from 3% to 0.5% for data objects that can fill a packet.
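The arithmetic behind those percentages, counting 40 bytes of TCP/IP headers against the payload each full packet carries:

    awk 'BEGIN {
        printf "MTU 1500: %.1f%% header overhead\n", 40 / (1500 - 40) * 100   # ~2.7%
        printf "MTU 9000: %.2f%% header overhead\n", 40 / (9000 - 40) * 100   # ~0.45%
    }'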
18:38 jmarley joined #gluster
18:38 wkf joined #gluster
18:56 bennyturns joined #gluster
19:08 kripper We have EXT4, should we move to XFS?
19:08 JoeJulian I did.
19:08 kripper is mixing XFS and EXT4 bricks fine?
19:09 kripper OS con EXT4 and second disk with XFS
19:09 kripper OS on EXT4...
19:13 kripper The top tier of SSDs can beat rotational media for sequential write speeds. By dedicating one of these premium drives to logging you gain a few things. Yes, it'll wear faster since writes do wear SSDs down. However, if you're only using 5% of the drive (if that much) the firmware on these drives is smart enough to allow even 50% (or more) bad cells before you start getting problems with the log volume corrupting; your OS should alarm on this
19:13 JoeJulian yes
19:13 kripper great!
19:13 JoeJulian We did the math and wear time should exceed the lifetime.
19:14 kripper impressive!
19:14 kripper it deserves a comment on ServerFault
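The shape of that estimate, with illustrative numbers only (the endurance rating and write volume below are assumptions, not JoeJulian's figures):

    # Life in years = rated endurance / write rate.
    awk 'BEGIN {
        tbw = 150          # drive endurance from the spec sheet, in terabytes written
        gb_per_day = 20    # journal write volume per day
        printf "%.0f years\n", tbw * 1024 / gb_per_day / 365   # ~21 years, beyond drive lifetime
    }'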
19:17 kripper can an XFS journal be stored on a gluster cluster with a 1GbE network, or would latency be the bottleneck here?
19:19 kripper are the "math" results online for linking?
19:26 JoeJulian kripper: yeah, latency would be horrible for that. And no, no math online. Hopefully I can do a talk on the recent work I've been doing, soon.
19:44 tuxcrafter joined #gluster
19:50 redbeard joined #gluster
19:52 hamiller joined #gluster
20:02 ctria joined #gluster
20:17 DV__ joined #gluster
20:30 p8952 joined #gluster
20:30 kripper when using Replicate, does gluster increase performance if one server is using SSDs?
20:38 JoeJulian no
20:38 JoeJulian well
20:38 JoeJulian it could improve read performance.
20:38 JoeJulian it's possible.
20:38 kripper right
20:39 kripper but I could create a special remote SSD volume
20:39 JoeJulian You would set "cluster.read-subvolume" to be the ssd.
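As a sketch (names hypothetical): the value is the client subvolume holding the SSD brick, named as in the generated volfile, e.g. myvol-client-1:

    # Prefer the SSD replica when choosing where to read from.
    gluster volume set myvol cluster.read-subvolume myvol-client-1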
20:44 rshade98 joined #gluster
20:44 gnudna left #gluster
20:45 rshade98 hey guys, anyone else use the semiosis packages, and know what is wrong with this: ppa.launchpad.net/semiosis/ubuntu-glusterfs-3.5/ubuntu
20:45 JoeJulian @ppa
20:45 glusterbot JoeJulian: The official glusterfs packages for Ubuntu are available here: 3.4: http://goo.gl/M9CXF8 3.5: http://goo.gl/6HBwKh 3.6: http://goo.gl/XyYImN -- See more PPAs for QEMU with GlusterFS support, and GlusterFS QA releases at https://launchpad.net/~gluster -- contact semiosis with feedback
20:45 JoeJulian They moved so there could be collaboration.
20:46 semiosis JoeJulian: can we get glusterbot to automatically give the ppa factoid when someone says launchpad.net/semiosis ?
20:46 semiosis hi nate
20:49 kripper JoeJulian: if read-subvolume is set to the SSD, read operations would *only* be served by the SSD bricks?
20:49 JoeJulian launchpad.net/semiosis
20:49 glusterbot JoeJulian: The official glusterfs packages for Ubuntu are available here: 3.4: http://goo.gl/M9CXF8 3.5: http://goo.gl/6HBwKh 3.6: http://goo.gl/XyYImN -- See more PPAs for QEMU with GlusterFS support, and GlusterFS QA releases at https://launchpad.net/~gluster -- contact semiosis with feedback
20:49 semiosis woo!
20:49 semiosis probably should do it for launchpad.net/~semiosis as well
20:49 JoeJulian kripper: I think so, yes.
20:50 JoeJulian done
20:50 kripper JoeJulian: I'm worried about the write operations
20:50 JoeJulian Don't worry, be happy.
20:50 rshade98 @JoeJulian, @semiosis, thanks both of you
20:50 JoeJulian You're welcome.
20:50 kripper JoeJulian: I'm happy with the read operations :-)
20:51 semiosis thanks JoeJulian
20:51 JoeJulian Writes are synchronous. They have to be completed unless you change a setting to allow them to be cached.
20:51 rshade98 now I have to fix my chef cookbooks with new links.
20:51 semiosis rshade98: you really ought to mirror the packages to your own APT repo, or at least your own PPA
20:51 JoeJulian rshade98: When doing automated deployment, you really should host your own repo.
20:51 kripper caching is the enemy of HA right?
20:51 DV joined #gluster
20:52 JoeJulian caching writes is the enemy of data.
20:52 JoeJulian The only time writes should be cached is if you don't care that the data exists.
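The knob JoeJulian is most likely alluding to is flush-behind (an assumption; he doesn't name the setting), which acknowledges flush/close before data is safely on the bricks -- exactly the trade-off described above:

    # Faster apparent writes, weaker durability. "myvol" is hypothetical.
    gluster volume set myvol performance.flush-behind on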
20:53 rshade98 @semiosis, @JoeJulian, hard to make it portable then. I don't want others hitting my repo
20:53 semiosis you have public chef cookbooks for gluster?
20:53 semiosis @chef
20:53 glusterbot semiosis: I do not know about 'chef', but I do know about these similar topics: 'ctdb'
20:53 semiosis rshade98: can we get a link?  people occasionally ask about that
20:53 rshade98 Yes, I am rebuilding right now.
20:53 JoeJulian semiosis: I do not know about 'chef' either. And you can't prove I do.
20:54 tg2 semiosis, do you have the build guide for ubuntu (how you build your builds for the ppa) ?
20:54 tg2 or just the ./configure options maybe
20:54 rshade98 https://github.com/RightScale-Services-Cookbooks/gluster
20:54 rshade98 but they can also use: https://github.com/rackspace-cookbooks/rackspace_gluster
20:55 rshade98 we had a very rightscale specific one for awhile. I am trying to make it more public/community
20:55 semiosis tg2: the debian build process is well documented, and so are the ubuntu enhancements.  you can see a build log for every build launchpad does by digging down in the links in the PPA.
20:56 tg2 ok i'll check thanks
20:56 tg2 i have  an odd issue where the 3.6.1 from repo, after being purged, still leaves all the libs
20:56 JoeJulian rshade98: We both jump on that only because so many people rely on a 3rd-party repo to keep their production systems working. When that repo changes, or is retired, they have no options.
20:57 semiosis tg2: the strategy to mirror a package from ubuntu repos or ppas to your own ppa is this: download source package, add a new entry to the changelog with your email (the one tied to the PGP key you authorized in your LP account), rebuild the source package, upload to your PPA.
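That recipe as a sketch using the standard devscripts tooling; the .dsc URL, version, and PPA name are placeholders:

    # 1. Fetch the source package (URL of the .dsc file in the original PPA).
    dget https://launchpad.net/<path-to>/glusterfs_3.5.x-1.dsc

    # 2. Add a changelog entry under the email tied to your PGP key.
    cd glusterfs-3.5.x
    dch --local ~mirror "Rebuild for mirror PPA."

    # 3. Rebuild the signed source package and upload it to your own PPA.
    debuild -S -sa
    dput ppa:yourname/yourppa ../glusterfs_*_source.changes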
20:57 JoeJulian I would leave that in as a default, but issue a warning if it's left at the default.
20:57 tg2 ok thanks
20:58 semiosis yw
20:58 rshade98 @JoeJulian, totally understand. I have thought about mirroring in cloudfront, but not sure yet.
20:59 rshade98 I will make the mirror an attribute so people can override, so they can fit those needs also
21:00 virusuy_ joined #gluster
21:00 kripper Thanks JoeJulian. See you guys. Enjoy your weekend!
21:00 kripper Remember, sex and sport should not displace beer!
21:01 tg2 from buildlog, http://www.fpaste.org/212482/42930446/
21:01 kripper Remembeer!
21:01 tg2 '--libdir=${prefix}/lib/x86_64-linux-gnu'
21:01 glusterbot tg2: ''s karma is now -2
21:01 tg2 that was the line
21:02 tg2 take that ''
21:02 JoeJulian What did ' ever do to you? ;)
21:02 tg2 how many times have I forgotten to escape a single quote!
21:03 tg2 who built that bot regex... was it you JoeJulian?!
21:04 tg2 semiosis+++
21:04 glusterbot tg2: semiosis+'s karma is now 1
21:04 tg2 semiosis +8
21:04 JoeJulian It was, and I'm lazy about it.
21:04 tg2 open sauce it
21:05 JoeJulian The whole 'karma' thing is not something I care about.
21:05 JoeJulian Hmm.. Let me think about how to do that. It really should be on github.
21:05 tg2 JoeJulian-- #doesn't care about karma
21:05 glusterbot tg2: JoeJulian's karma is now 19
21:05 semiosis @karma
21:05 glusterbot semiosis: Highest karma: "jjulian" (4000001), "semiosis" (2000011), and "kkeithley" (23).  Lowest karma: "-" (-343), "(" (-65), and "-rw-r--r" (-17).  You (semiosis) are ranked 2 out of 137.
21:05 tg2 make him(her?) his own github account
21:05 semiosis lmao
21:06 tg2 lol rw-r ftl
21:06 semiosis i always come in second to JoeJulian
21:06 JoeJulian semiosis++
21:06 JoeJulian semiosis++
21:06 JoeJulian semiosis++
21:06 glusterbot JoeJulian: semiosis's karma is now 2000012
21:06 JoeJulian semiosis++
21:06 glusterbot JoeJulian: semiosis's karma is now 2000013
21:06 glusterbot JoeJulian: semiosis's karma is now 2000014
21:06 glusterbot JoeJulian: semiosis's karma is now 2000015
21:06 tg2 we'll get there
21:06 tg2 eventually
21:07 JoeJulian I'll set up a script to spam that later.
21:07 semiosis JoeJulian: watch out for flood control
21:07 tg2 karma records are prefixed by channel I presume
21:07 tg2 if you pm glusterbot with karma request will he comply
21:07 tg2 or is it one key/val across the entire bot
21:09 tg2 so who knows why my 3.6.1 libs stay around (and get used by the 3.6.2 binary) after removing from apt and building 3.6.2 from source
21:09 tg2 glusterfs --version shows 3.6.2
21:10 tg2 but if I lsof while running it's using /usr/lib/xxx/gluster/3.6.1/ libs
21:10 tg2 oh well at least i'm learning about repo packaging.
21:12 DV joined #gluster
21:13 JoeJulian rpm -q --whatprovides /usr/lib/xxx/gluster/3.6.1 ... except you're not using rpms so it's not my problem. ;)
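The dpkg equivalent of that rpm query, for the Ubuntu case (the directory is the one semiosis spells out below):

    # Which installed package owns these libraries?
    dpkg -S /usr/lib/x86_64-linux-gnu/glusterfs/3.6.1

    # No owner means they came from a source-tree "make install",
    # which apt purge will never touch.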
21:18 DV joined #gluster
21:24 semiosis tg2: I can not reproduce that problem. I installed glusterfs-server 3.6.2 on a clean VM, then stopped glusterd & purged it, and /usr/lib/x86_64-linux-gnu/glusterfs is gone.
21:25 semiosis maybe just lazy typing, but notice you said /usr/lib/xxx/gluster/3.6.1/ instead of /usr/lib/xxx/glusterfs/3.6.1/
21:36 ilde joined #gluster
21:38 sage_ joined #gluster
21:39 shaunm_ joined #gluster
22:44 DV joined #gluster
23:09 kripper left #gluster
