IRC log for #gluster, 2015-05-22

All times shown according to UTC.

Time Nick Message
00:02 ppai joined #gluster
00:04 plarsen joined #gluster
00:19 ShaunR how can i tell glusterfs to stop listening on all interfaces, i want to limit it to a single interface
00:19 ShaunR I added 'option transport.socket.bind-address 10.0.1.1' to /etc/glusterfs/glusterd.vol which made glusterd only listen on that interface, but glusterfs is still listening globally
01:00 DV joined #gluster
01:17 gildub joined #gluster
01:38 JoeJulian ShaunR: There's no capability to do that yet.
01:38 JoeJulian Workarounds would be to run it in a netns or just carefully filter your iptables rules.
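(A minimal iptables sketch of the filtering workaround JoeJulian mentions; the interface name eth1 and the 10.0.1.0/24 network are assumptions, and the port ranges shown are the usual glusterd/brick defaults of this era, so verify them against your own installation:)
    # accept gluster management and brick traffic only from the storage network
    iptables -A INPUT -i eth1 -s 10.0.1.0/24 -p tcp --dport 24007:24008 -j ACCEPT
    iptables -A INPUT -i eth1 -s 10.0.1.0/24 -p tcp --dport 49152:49251 -j ACCEPT
    # drop the same ports arriving on any other interface
    iptables -A INPUT -p tcp --dport 24007:24008 -j DROP
    iptables -A INPUT -p tcp --dport 49152:49251 -j DROP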
01:41 aaronott joined #gluster
01:47 harish joined #gluster
01:57 nangthang joined #gluster
02:16 edwardm61 joined #gluster
02:17 hamiller joined #gluster
02:28 zerick joined #gluster
03:02 sakshi joined #gluster
03:16 rjoseph joined #gluster
03:33 hagarth joined #gluster
03:37 [7] joined #gluster
03:41 kanagaraj joined #gluster
03:41 rwheeler joined #gluster
03:41 bharata-rao joined #gluster
03:43 David_H__ joined #gluster
03:54 atinmu joined #gluster
04:10 kdhananjay joined #gluster
04:17 jiffin joined #gluster
04:17 shubhendu joined #gluster
04:31 DV joined #gluster
04:32 spandit joined #gluster
04:33 rafi joined #gluster
04:35 schandra joined #gluster
04:41 deepakcs joined #gluster
04:55 pppp joined #gluster
04:58 shubhendu joined #gluster
04:58 ndarshan joined #gluster
04:59 David_H_Smith joined #gluster
05:07 gem joined #gluster
05:11 hgowtham joined #gluster
05:20 Bhaskarakiran joined #gluster
05:26 atalur joined #gluster
05:39 ashiq joined #gluster
05:44 hagarth joined #gluster
05:45 rafi1 joined #gluster
05:53 dusmant joined #gluster
05:54 overclk joined #gluster
05:54 SOLDIERz joined #gluster
06:01 aravindavk joined #gluster
06:15 rafi joined #gluster
06:21 saurabh_ joined #gluster
06:21 poornimag joined #gluster
06:29 jvandewege joined #gluster
06:41 liquidat joined #gluster
06:44 raghu joined #gluster
06:45 LebedevRI joined #gluster
06:47 dusmant joined #gluster
06:47 Anjana joined #gluster
06:48 aravindavk joined #gluster
06:48 bharata_ joined #gluster
06:52 spalai joined #gluster
06:54 spalai1 joined #gluster
07:01 jcastill1 joined #gluster
07:04 spandit joined #gluster
07:05 kshlm joined #gluster
07:07 jcastillo joined #gluster
07:12 ju5t joined #gluster
07:20 [Enrico] joined #gluster
07:21 fsimonce joined #gluster
07:22 dusmant joined #gluster
07:27 kdhananjay joined #gluster
07:40 corretico joined #gluster
07:40 glusterbot News from resolvedglusterbugs: [Bug 1217927] [Backup]: Crash observed when multiple sessions were created for the same volume <https://bugzilla.redhat.com/show_bug.cgi?id=1217927>
07:40 glusterbot News from resolvedglusterbugs: [Bug 1219457] [Backup]: Packages to be installed for glusterfind api to work <https://bugzilla.redhat.com/show_bug.cgi?id=1219457>
07:40 glusterbot News from resolvedglusterbugs: [Bug 1218166] [Backup]: User must be warned while running the 'glusterfind pre' command twice without running the post command <https://bugzilla.redhat.com/show_bug.cgi?id=1218166>
07:43 SOLDIERz joined #gluster
07:53 haomaiwang joined #gluster
07:59 ctria joined #gluster
08:04 ctria joined #gluster
08:10 glusterbot News from resolvedglusterbugs: [Bug 1219848] Directories are missing on the mount point after attaching tier to distribute replicate volume. <https://bugzilla.redhat.com/show_bug.cgi?id=1219848>
08:17 prilly_ joined #gluster
08:24 anrao joined #gluster
08:28 hgowtham joined #gluster
08:29 soumya joined #gluster
08:29 Norky joined #gluster
08:32 jcastill1 joined #gluster
08:37 jcastillo joined #gluster
08:41 DV joined #gluster
08:43 smohan joined #gluster
08:43 kdhananjay joined #gluster
08:44 DV joined #gluster
08:46 Saravana joined #gluster
08:48 dusmant joined #gluster
08:52 gem joined #gluster
08:53 nishanth joined #gluster
09:04 ashiq joined #gluster
09:04 [Enrico] joined #gluster
09:05 DV joined #gluster
09:06 aravindavk joined #gluster
09:11 glusterbot News from resolvedglusterbugs: [Bug 1192378] Disperse volume: client crashed while running renames with epoll enabled <https://bugzilla.redhat.com/show_bug.cgi?id=1192378>
09:11 glusterbot News from resolvedglusterbugs: [Bug 1219358] Disperse volume: client crashed while running iozone <https://bugzilla.redhat.com/show_bug.cgi?id=1219358>
09:11 glusterbot News from resolvedglusterbugs: [Bug 1203637] Disperse volume: glfsheal crashed <https://bugzilla.redhat.com/show_bug.cgi?id=1203637>
09:11 glusterbot News from resolvedglusterbugs: [Bug 1218031] GlusterD crashes on NetBSD when running mgmt_v3-locks.t test <https://bugzilla.redhat.com/show_bug.cgi?id=1218031>
09:11 glusterbot News from resolvedglusterbugs: [Bug 1197253] NFS logs are filled with system.posix_acl_access messages <https://bugzilla.redhat.com/show_bug.cgi?id=1197253>
09:11 glusterbot News from resolvedglusterbugs: [Bug 1192435] server crashed during rebalance in dht_selfheal_layout_new_directory <https://bugzilla.redhat.com/show_bug.cgi?id=1192435>
09:14 gem joined #gluster
09:17 atalur joined #gluster
09:19 kdhananjay joined #gluster
09:22 anil joined #gluster
09:26 ashiq joined #gluster
09:28 dusmant joined #gluster
09:31 glusterbot News from newglusterbugs: [Bug 1221980] bitd log grows rapidly if brick goes down <https://bugzilla.redhat.com/show_bug.cgi?id=1221980>
09:32 merlink joined #gluster
09:34 ira joined #gluster
09:40 merlink joined #gluster
09:41 glusterbot News from resolvedglusterbugs: [Bug 1216942] glusterd crash when snapshot create was in progress on different volumes at the same time - job edited to create snapshots at the given time <https://bugzilla.redhat.com/show_bug.cgi?id=1216942>
09:41 glusterbot News from resolvedglusterbugs: [Bug 1218857] Clean up should not empty the contents of the global config file <https://bugzilla.redhat.com/show_bug.cgi?id=1218857>
09:41 glusterbot News from resolvedglusterbugs: [Bug 1199352] GlusterFS 3.7.0 tracker <https://bugzilla.redhat.com/show_bug.cgi?id=1199352>
09:41 glusterbot News from resolvedglusterbugs: [Bug 1218554] libgfapi : Anonymous fd support in gfapi <https://bugzilla.redhat.com/show_bug.cgi?id=1218554>
09:44 smohan joined #gluster
09:45 jiffin1 joined #gluster
09:52 atalur joined #gluster
09:59 atinmu joined #gluster
10:11 glusterbot News from resolvedglusterbugs: [Bug 1207712] Input/Output error with disperse volume when geo-replication is started <https://bugzilla.redhat.com/show_bug.cgi?id=1207712>
10:11 glusterbot News from resolvedglusterbugs: [Bug 1207967] heal doesn't work for new volumes to reflect the 128 bits changes in quota after upgrade <https://bugzilla.redhat.com/show_bug.cgi?id=1207967>
10:13 jvandewege joined #gluster
10:20 wkf joined #gluster
10:23 nsoffer joined #gluster
10:28 corretico joined #gluster
10:30 jiffin joined #gluster
10:31 glusterbot News from newglusterbugs: [Bug 1217589] glusterd crashed while schdeuler was creating snapshots when bit rot was enabled on the volumes <https://bugzilla.redhat.com/show_bug.cgi?id=1217589>
10:31 glusterbot News from newglusterbugs: [Bug 1217322] Disperse volume: Transport endpoint not connected in nfs log messages though the volume is started <https://bugzilla.redhat.com/show_bug.cgi?id=1217322>
10:31 glusterbot News from newglusterbugs: [Bug 1217372] Disperse volume: NFS client mount point hung after the bricks came back up <https://bugzilla.redhat.com/show_bug.cgi?id=1217372>
10:31 glusterbot News from newglusterbugs: [Bug 1216940] Disperse volume: glusterfs crashed while testing heal <https://bugzilla.redhat.com/show_bug.cgi?id=1216940>
10:31 glusterbot News from newglusterbugs: [Bug 1214629] RFE: Remove disperse-data option in the EC volume creation command <https://bugzilla.redhat.com/show_bug.cgi?id=1214629>
10:41 glusterbot News from resolvedglusterbugs: [Bug 1215173] Disperse volume: rebalance and quotad crashed <https://bugzilla.redhat.com/show_bug.cgi?id=1215173>
10:42 Micromus joined #gluster
10:42 atalur joined #gluster
10:42 Micromus Hi, is it safe to set up glusterfs over the internet? I see I can use SSL for data transport encryption, but how about control data?
10:47 jiffin joined #gluster
10:50 ProT-0-TypE joined #gluster
10:52 ju5t joined #gluster
10:57 atinmu joined #gluster
11:00 dusmant joined #gluster
11:01 glusterbot News from newglusterbugs: [Bug 1058300] VMs do not resume after paused state and storage connection to a gluster domain (they will also fail to be manually resumed) <https://bugzilla.redhat.com/show_bug.cgi?id=1058300>
11:04 aravindavk joined #gluster
11:05 Micromus the SSL data encryption is only for client-cluster comms it seems, so there is no encryption available intra-cluster??
11:07 jiffin1 joined #gluster
11:08 ndevos Micromus: see https://kshlm.in/network-encryption-in-glusterfs/ for the most recent post about it
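(For the control-channel half of Micromus's question, a hedged sketch of the encryption knobs that post describes; "myvol" is a placeholder volume name, and the certificate setup and exact paths should be checked against the blog post and your GlusterFS version:)
    # I/O path (client <-> brick) encryption
    gluster volume set myvol client.ssl on
    gluster volume set myvol server.ssl on
    # management path (glusterd <-> glusterd and CLI) encryption:
    # create this marker file on every server and client, then restart glusterd
    touch /var/lib/glusterd/secure-access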
11:08 Slashman joined #gluster
11:16 bene2 joined #gluster
11:28 plarsen joined #gluster
11:31 glusterbot News from newglusterbugs: [Bug 1224243] xlators: fix allocation of zero bytes <https://bugzilla.redhat.com/show_bug.cgi?id=1224243>
11:41 glusterbot News from resolvedglusterbugs: [Bug 1208067] [SNAPSHOT]: Snapshot create fails while using scheduler to create snapshots <https://bugzilla.redhat.com/show_bug.cgi?id=1208067>
11:41 glusterbot News from resolvedglusterbugs: [Bug 1208097] [SNAPSHOT] : Appending time stamp to snap name while using scheduler to create snapshots should be removed. <https://bugzilla.redhat.com/show_bug.cgi?id=1208097>
11:41 glusterbot News from resolvedglusterbugs: [Bug 1218585] [Snapshot] Snapshot scheduler show status disable even when it is enabled <https://bugzilla.redhat.com/show_bug.cgi?id=1218585>
11:54 atalur joined #gluster
11:57 gem joined #gluster
11:57 nishanth joined #gluster
12:48 corretico joined #gluster
12:48 wkf joined #gluster
12:49 Micromus How do I make the apparently obligatory RPC services only listen on an internal interface?
12:50 Micromus After I installed glusterfs on ubuntu, a shitload of RPC stuff is now listening on *
12:52 prilly_ joined #gluster
12:53 ira joined #gluster
12:58 jayunit1000 joined #gluster
12:59 hagarth joined #gluster
12:59 chirino joined #gluster
13:00 jiffin joined #gluster
13:01 glusterbot News from newglusterbugs: [Bug 1224290] peers connected in the middle of a transaction are participating in the transaction <https://bugzilla.redhat.com/show_bug.cgi?id=1224290>
13:05 jiffin1 joined #gluster
13:26 nsoffer joined #gluster
13:26 merlink joined #gluster
13:35 dgandhi joined #gluster
13:39 jcastill1 joined #gluster
13:40 pdrakeweb joined #gluster
13:40 coredump joined #gluster
13:44 jcastillo joined #gluster
13:50 georgeh-LT2 joined #gluster
13:59 firemanxbr joined #gluster
13:59 merlink joined #gluster
14:06 aaronott joined #gluster
14:09 wushudoin joined #gluster
14:09 wushudoin joined #gluster
14:14 vimal joined #gluster
14:17 crashmag joined #gluster
14:19 prilly_ joined #gluster
14:30 kdhananjay joined #gluster
14:30 bturner_ joined #gluster
14:31 bturner_ joined #gluster
14:35 harish joined #gluster
14:43 shaunm_ joined #gluster
14:52 atinmu joined #gluster
14:57 gluster-user joined #gluster
14:58 gluster-user Is anyone aware of a bug with rebalance in the new 3.7 version of gluster?
15:00 p8952 joined #gluster
15:01 ndevos gluster-user: these are the reported bugs against 3.7: https://bugzilla.redhat.com/buglist.cgi?f1=bug_status&o1=notequals&product=GlusterFS&query_format=advanced&v1=CLOSED&version=3.7.0
15:02 ndevos there seem to be some rebalance bugs in there...
15:04 Gill joined #gluster
15:06 gluster-user yeah, the .trashcan seems to be causing some sort of rebalance issue on two of my nodes
15:06 gluster-user fix layout on /.trashcan/internal_op failed
15:07 gluster-user I even turned off the trashcan feature, but no luck
15:13 atalur joined #gluster
15:16 nangthang joined #gluster
15:30 deniszh joined #gluster
15:30 corretico joined #gluster
15:42 atinmu joined #gluster
15:43 Prilly joined #gluster
15:53 JoeJulian Micromus: You cannot (currently) change the listeners from 0.0.0.0. Your options are to (possibly) use netns to contain gluster to a specific net namespace, or to use a firewall/iptables.
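(A rough sketch of the netns containment JoeJulian mentions, assuming a dedicated NIC named eth1 that can be moved into the namespace; this illustrates the idea rather than a tested recipe:)
    ip netns add gluster                               # create the namespace
    ip link set eth1 netns gluster                     # move the storage NIC into it
    ip netns exec gluster ip addr add 10.0.1.1/24 dev eth1
    ip netns exec gluster ip link set eth1 up
    ip netns exec gluster glusterd                     # glusterd now only sees eth1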
15:57 atalur joined #gluster
16:29 atalur_ joined #gluster
16:32 hoglet joined #gluster
16:37 xiu b 4
16:38 JoeJulian miss
16:41 rwheeler joined #gluster
16:42 xiu :(
16:43 ShaunR any of you guys using gluster for a web app cluster that has a lot of small files?
16:44 ShaunR i'm seeing pretty poor performance from gluster, the site is 10x slower than it was when the fs was local
16:44 bturner_ ShaunR, 3.7 has some smallfile enhancements, what version you running?
16:46 Rapture joined #gluster
16:48 ekuric joined #gluster
16:50 ShaunR I'm running 3.7
16:54 ShaunR i'm mounting the glusterfs using fuse/glusterfs... i'm wondering if i should try nfs
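(The two mount styles ShaunR is weighing, as a quick reference; server1 and webvol are placeholder names, and Gluster's built-in NFS server speaks NFSv3 only:)
    # FUSE (native) mount
    mount -t glusterfs server1:/webvol /var/www
    # NFS mount against Gluster's built-in NFS server
    mount -t nfs -o vers=3,tcp server1:/webvol /var/www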
17:32 coredump joined #gluster
17:44 ppai joined #gluster
17:47 XpineX joined #gluster
17:52 hagarth joined #gluster
17:52 rafi joined #gluster
17:54 jbrooks joined #gluster
17:59 ProT-0-TypE joined #gluster
18:03 plarsen joined #gluster
18:07 _Bryan_ joined #gluster
18:10 steveeJ joined #gluster
18:10 ppai joined #gluster
18:10 ppai joined #gluster
18:23 Saravana joined #gluster
18:24 Gill_ joined #gluster
18:45 shaunm_ joined #gluster
18:47 ekuric left #gluster
19:16 CyrilPeponnet @ShaunR https://joejulian.name/blog/optimizing-web-performance-with-glusterfs/
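(A few of the cache-related volume options that commonly come up in small-file tuning discussions like that post, shown only as a sketch; the values are illustrative and option names should be verified with "gluster volume set help" on your version:)
    gluster volume set webvol performance.cache-size 256MB
    gluster volume set webvol performance.io-thread-count 32
    gluster volume set webvol performance.stat-prefetch on
    gluster volume set webvol network.inode-lru-limit 65536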
19:53 ppai joined #gluster
20:06 hasan joined #gluster
20:06 hasan hi all. I'm analyzing the traffic of glusterfs to learn something about the protocols of it.
20:07 hasan and what I encountered was the fact that the client sends WRITE calls to each server node (to as many nodes as the servers know as peers)
20:08 hasan I was surprised. this means that the client is responsible for the replication of a gluster replica volume
20:08 hasan and not (like I first thought) the servers between each other.
20:14 Prilly joined #gluster
20:18 hoglet Yes, the clients are managing replication. It is different.
20:35 hasan hoglet: it seems the client does not tell the server (who was offline) what files are in the volume
20:35 hasan it only tells the server which came back online which volume it is using.
20:36 hasan and I guess the afterwards replication is handled between serverA and serverB.
20:36 hoglet I do not think so.
20:36 hasan the client is involved again?
20:36 hoglet Note, I am not an expert. I have just read the documentation
20:37 hoglet Yes, clients are triggering the sync
20:37 hoglet It is different
20:38 hasan hoglet: no they don't
20:38 hasan I just checked.
20:38 hoglet Then I have misread
20:39 hasan if a peer goes down and comes back again. the client sends the information about the volumes being present. and says "hey, the other peer did come back again". "volume is /path/whatever" with some more information.
20:39 hasan then the replication/syncing is done between the peers
20:40 hoglet Still, the client is needed as it triggers the sync
20:40 hasan and in detail. the one peer lists all the files currently present in the volume. the other peer who came back does the process of deleting/adding missing files.
20:40 hasan hoglet: right.
20:45 rotbeard joined #gluster
20:46 JoeJulian hasan, hoglet: The client *may* perform the self-heal, but if it does not (by accessing a file that needs healed) the self-heal daemon (glustershd) does.
20:47 hoglet Ah, that sounds better
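(For context, the self-heal daemon JoeJulian refers to can also be inspected and kicked off by hand with the heal subcommands; "myvol" is a placeholder volume name:)
    gluster volume heal myvol info      # list entries still pending heal
    gluster volume heal myvol           # heal entries already flagged as needing it
    gluster volume heal myvol full      # crawl and heal the entire volume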
20:51 hasan JoeJulian: right now I'm thinking of a middle layer between the mounted volume and another host writing into it. let's assume we had something like DRBD in between the mount of srvA (which is the fuse client that mounted the volume of a remote gluster server)
20:51 hasan and a host which is the other endpoint of that DRBD.
20:52 hasan since we know the I/O is not dependent on the finishing of the writing (with DRBD) this means the host could write and forget. (without performance bottlenecks like the network)
20:52 hasan DRBD buffers and writes until it finishes in the chain (mounted volume to gluster server)
20:52 hasan would this be possible to enhance performance? acting as a kind of cache.
20:52 hasan ?
20:54 hasan so: glusterd -> srvA fuse /mnt/volume -><- DRBD <-> another host on network performing writes
20:56 hasan the "another host" has no performance issues whatsoever when writing since he just copies into DRBD (or its cache).
20:58 hasan s/copies/writes/
20:58 glusterbot What hasan meant to say was: An error has occurred and has been logged. Please contact this bot's administrator for more information.
21:07 hefn joined #gluster
21:07 hefn greetings
21:17 TheSeven joined #gluster
21:21 hefn left #gluster
21:24 JoeJulian hasan: When drbd is mentioned, I go in to rage mode and can't think clearly....
21:25 JoeJulian If you want a write cache, just enable it. :)
21:31 hasan JoeJulian: heh, why rage mode? it has its own reason for existence
21:34 hasan JoeJulian: glusterfs doesn't have a write cache
21:40 lexi2 joined #gluster
21:45 daMaestro joined #gluster
21:51 JoeJulian hasan: because I spent a month trying to piece together data from separate drives after drbd trashed my filesystem once.... never again.
21:51 JoeJulian performance.write-behind
21:52 JoeJulian "gluster volume set help" and look for performance.write-behind
21:53 shaunm_ joined #gluster
21:53 hasan JoeJulian: thanks
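(The write-behind option JoeJulian points at, as a hedged example; the window size is illustrative and "myvol" is a placeholder:)
    gluster volume set help | grep -A3 write-behind    # read the option description first
    gluster volume set myvol performance.write-behind on
    gluster volume set myvol performance.write-behind-window-size 4MB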
22:05 lexi2 joined #gluster
22:09 capri joined #gluster
22:26 prilly_ joined #gluster
23:15 wkf joined #gluster
23:47 edwardm61 joined #gluster