IRC log for #gluster, 2015-07-21


All times shown according to UTC.

Time Nick Message
00:00 gildub joined #gluster
00:25 glusterbot News from resolvedglusterbugs: [Bug 893795] Gluster 3.3.1 won't compile on Freebsd <https://bugzilla.redhat.com/show_bug.cgi?id=893795>
00:30 B21956 joined #gluster
00:39 cyberswat joined #gluster
01:00 MugginsM joined #gluster
01:56 harish joined #gluster
02:03 Lee1092 joined #gluster
02:06 nangthang joined #gluster
02:09 nangthang joined #gluster
02:34 DV joined #gluster
02:48 autoditac joined #gluster
03:15 TheSeven joined #gluster
03:17 bharata-rao joined #gluster
03:19 aaronott joined #gluster
03:23 MugginsM joined #gluster
03:25 schandra joined #gluster
03:51 sripathi joined #gluster
03:52 itisravi joined #gluster
03:52 spandit joined #gluster
03:55 shubhendu joined #gluster
04:01 atinm joined #gluster
04:02 ekuric joined #gluster
04:11 MugginsM joined #gluster
04:12 soumya joined #gluster
04:21 RameshN joined #gluster
04:30 MugginsM joined #gluster
04:32 yazhini joined #gluster
04:35 nbalacha joined #gluster
04:44 vimal joined #gluster
04:48 sakshi joined #gluster
04:53 ramteid joined #gluster
05:00 gem joined #gluster
05:02 RameshN joined #gluster
05:04 rafi joined #gluster
05:07 jiffin joined #gluster
05:10 ndarshan joined #gluster
05:11 smohan joined #gluster
05:12 pppp joined #gluster
05:13 hagarth joined #gluster
05:16 deepakcs joined #gluster
05:21 soumya_ joined #gluster
05:31 sripathi joined #gluster
05:36 autoditac joined #gluster
05:37 rjoseph joined #gluster
05:41 Manikandan joined #gluster
05:42 hgowtham joined #gluster
05:45 ashiq joined #gluster
05:47 dusmant joined #gluster
05:49 Bhaskarakiran joined #gluster
05:50 kanagaraj joined #gluster
05:54 kdhananjay joined #gluster
05:57 aravindavk joined #gluster
06:02 davidself joined #gluster
06:03 vmallika joined #gluster
06:04 deepakcs joined #gluster
06:05 overclk joined #gluster
06:10 Manikandan joined #gluster
06:17 jiffin joined #gluster
06:18 shubhendu joined #gluster
06:19 ndarshan joined #gluster
06:19 dusmant joined #gluster
06:21 jwd joined #gluster
06:21 byreddy joined #gluster
06:22 jiffin left #gluster
06:25 glusterbot News from newglusterbugs: [Bug 1245036] glusterd fails to peer probe if one of the node is behind the NAT. <https://bugzilla.redhat.com/show_bug.cgi?id=1245036>
06:25 jtux joined #gluster
06:26 maveric_amitc_ joined #gluster
06:31 jiffin joined #gluster
06:39 prg3 joined #gluster
06:41 kshlm joined #gluster
06:44 anil_ joined #gluster
06:54 raghu joined #gluster
06:54 Manikandan joined #gluster
06:55 dusmant joined #gluster
06:55 glusterbot News from newglusterbugs: [Bug 1245045] Data Loss:Remove brick commit passing when remove-brick process has not even started(due to killing glusterd) <https://bugzilla.redhat.com/show_bug.cgi?id=1245045>
07:01 karnan joined #gluster
07:16 soumya_ joined #gluster
07:20 dusmant joined #gluster
07:23 Manikandan joined #gluster
07:24 topshare joined #gluster
07:25 glusterbot News from newglusterbugs: [Bug 1245065] "rm -rf *" from multiple mount points fails to remove directories on all the subvolumes <https://bugzilla.redhat.com/show_bug.cgi?id=1245065>
07:28 arcolife joined #gluster
07:31 [Enrico] joined #gluster
07:31 meghanam joined #gluster
07:35 atalur joined #gluster
07:36 nangthang joined #gluster
07:53 kxseven joined #gluster
07:53 shubhendu joined #gluster
07:54 ndarshan joined #gluster
07:55 glusterbot News from newglusterbugs: [Bug 1173437] [RFE] changes needed in snapshot info command's xml output. <https://bugzilla.redhat.com/show_bug.cgi?id=1173437>
07:55 glusterbot News from newglusterbugs: [Bug 1161416] snapshot delete all command fails with --xml option. <https://bugzilla.redhat.com/show_bug.cgi?id=1161416>
07:55 glusterbot News from newglusterbugs: [Bug 1161424] [RFE]  snapshot configuration should have help option. <https://bugzilla.redhat.com/show_bug.cgi?id=1161424>
07:55 glusterbot News from newglusterbugs: [Bug 1203185] Detached node list stale snaps <https://bugzilla.redhat.com/show_bug.cgi?id=1203185>
07:56 soumya_ joined #gluster
07:57 glusterbot News from resolvedglusterbugs: [Bug 1205592] [glusterd-snapshot] - Quorum must be computed using existing peers <https://bugzilla.redhat.com/show_bug.cgi?id=1205592>
07:57 glusterbot News from resolvedglusterbugs: [Bug 1184393] [SNAPSHOT]: glusterd server quorum check is broken for snapshot commands. <https://bugzilla.redhat.com/show_bug.cgi?id=1184393>
08:03 Manikandan joined #gluster
08:04 sripathi joined #gluster
08:05 topshare joined #gluster
08:09 topshare joined #gluster
08:10 ctria joined #gluster
08:10 topshare joined #gluster
08:20 ghenry joined #gluster
08:32 Bhaskarakiran joined #gluster
08:34 harish joined #gluster
08:35 topshare joined #gluster
08:35 6A4ADAHTC joined #gluster
08:36 dusmant joined #gluster
08:37 haomaiwa_ joined #gluster
08:38 kxseven joined #gluster
08:41 Pupeno joined #gluster
08:42 topshare joined #gluster
08:51 gem joined #gluster
08:54 shubhendu joined #gluster
09:03 dusmant joined #gluster
09:05 kshlm joined #gluster
09:06 jiffin joined #gluster
09:08 meghanam joined #gluster
09:10 kdhananjay joined #gluster
09:12 gem joined #gluster
09:13 jwaibel joined #gluster
09:17 soumya joined #gluster
09:17 shubhendu joined #gluster
09:22 topshare joined #gluster
09:22 atinm joined #gluster
09:26 Slashman joined #gluster
09:32 deniszh joined #gluster
09:32 nsoffer joined #gluster
09:36 soumya joined #gluster
09:40 atalur joined #gluster
09:43 kdhananjay1 joined #gluster
09:47 paraenggu joined #gluster
09:49 kovshenin joined #gluster
09:54 victori joined #gluster
09:58 atinm joined #gluster
10:00 dusmant joined #gluster
10:06 jwaibel morning
10:08 jwaibel is there a failover solution for nfs mounted volumes?
10:10 Jitendra joined #gluster
10:10 gildub joined #gluster
10:13 kshlm joined #gluster
10:17 ndevos jwaibel: for that you need ip-address failover, you can manage virtual ip-addresses with ctdb or pacemaker for that
10:18 jwaibel ok that i can do
10:18 ndevos there are other solutions too, like vrrp/carp/..., but ctdb and pacemaker are most commonly used
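
A minimal sketch of the floating-IP approach mentioned above, using keepalived/VRRP; the interface name, router id and address below are placeholders, not values from this discussion:

    vrrp_instance gluster_nfs_vip {
        state MASTER             # one node MASTER, the others BACKUP
        interface eth0           # NIC that carries the NFS traffic (placeholder)
        virtual_router_id 51     # must match on all participating nodes
        priority 100             # highest priority holds the VIP
        advert_int 1
        virtual_ipaddress {
            192.168.1.100/24     # the address NFS clients mount from
        }
    }

NFS clients mount the virtual address, so a surviving node can take it over if the current holder fails; lock recovery is a separate concern, as comes up later in the discussion.
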
10:18 akay1 joined #gluster
10:19 jwaibel i hoped there would be an easy way like the backupvolfile option
10:20 kkeithley1 joined #gluster
10:20 jwaibel is there any performance comparison between the gluster native and nfs options?
10:22 ndevos some nfs clients allow specifying multiple servers (ip-addresses), but the linux nfs-client does not have such functionality
10:22 jwaibel then that would not be an option because my setup is bound to linux atm
10:22 ndevos I'm not sure if anyone published performance comparisons between glusterfs and nfs protocols, it is highly dependent on the workload you are having too
10:23 jwaibel kk
10:23 ndevos the glusterfs protocol does the distribution, whereas nfs connects to one server and the distribution is done there (like a proxy)
10:24 ndevos so, if bandwidth would be the bottleneck in your environment, mounting with glusterfs could perform better
10:24 jwaibel that so far is not my issue.
10:24 ndevos on the other hand, the nfs-client has more targeted optimizations towards caching than fuse has; some workloads can benefit from that
10:24 jwaibel i will need multiple clients to be able to access the same data at the same time, with about 80/20 read/write.
10:25 ndevos the same data, does that mean the same file?
10:25 jwaibel yes
10:25 jwaibel its about a web application with loadbalanced servers
10:25 glusterbot News from newglusterbugs: [Bug 1245142] DHT-rebalance: Rebalance hangs on distribute volume when glusterd is stopped on peer node <https://bugzilla.redhat.com/show_bug.cgi?id=1245142>
10:26 ndevos if the application uses locking correctly, you should probably use the glusterfs/fuse protocol
10:26 cleong joined #gluster
10:27 ndevos gluster/nfs likely works too though, but failover with locks could be more tricky
10:28 jwaibel i will try the options
10:28 ndevos or, you could use nfs-ganesha, we're putting a lot of effort in making that high-available
10:29 ndevos http://gluster.readthedocs.org/en/latest/Features/glusterfs_nfs-ganesha_integration/ contains some of the details
10:29 jwaibel oki marked for reading
10:30 jwaibel but first to finish my ansible roles for the current stuff :P
10:34 jiffin joined #gluster
10:35 jwaibel hmm that reads promising
10:38 soumya_ joined #gluster
10:42 jwaibel is there any benefit to have more than 2 nodes in a replication setup?
10:45 bene joined #gluster
10:45 ejnersan joined #gluster
10:47 LebedevRI joined #gluster
10:59 ejnersan joined #gluster
11:09 autoditac joined #gluster
11:12 atinm joined #gluster
11:14 ira joined #gluster
11:16 dusmant joined #gluster
11:25 ejnersan hi
11:25 glusterbot ejnersan: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
11:27 ejnersan I have a gluster server with 3 ip's. one is for general lan, one is a bond and last is a 10gbe
11:28 ejnersan I want the clients to connect via the bond ip, but the data is transmitted through the lan interface
11:29 ejnersan can I set the server to only transmit through the bond and 10gbe interfaces?
11:31 jwaibel yes you can set to use a specific interface
11:35 itisravi joined #gluster
11:36 ejnersan jwaibel: is it through bind-address?
11:36 jwaibel that binds gluster to a certain address. hold on i just look for the option
11:37 ejnersan ok thx
11:41 topshare joined #gluster
11:42 meghanam joined #gluster
11:44 deepakcs joined #gluster
11:45 nsoffer joined #gluster
11:46 firemanxbr joined #gluster
11:56 glusterbot News from newglusterbugs: [Bug 1243644] Metadata self-heal is not handling failures while heal properly <https://bugzilla.redhat.com/show_bug.cgi?id=1243644>
11:56 glusterbot News from newglusterbugs: [Bug 1243647] Disperse volume : data corruption with appending writes in 8+4 config <https://bugzilla.redhat.com/show_bug.cgi?id=1243647>
11:56 glusterbot News from newglusterbugs: [Bug 1243648] Disperse volume: NFS crashed <https://bugzilla.redhat.com/show_bug.cgi?id=1243648>
11:56 soumya_ joined #gluster
11:57 jiffin joined #gluster
11:57 RameshN joined #gluster
11:57 itisravi REMINDER: Gluster Community Bug Triage meeting starting in a few minutes in #gluster-meeting
12:01 rafi1 joined #gluster
12:02 dusmant joined #gluster
12:03 atinm joined #gluster
12:06 hagarth joined #gluster
12:06 sakshi joined #gluster
12:08 leucos joined #gluster
12:08 devilspgd joined #gluster
12:10 jtux joined #gluster
12:11 jrm16020 joined #gluster
12:15 nangthang joined #gluster
12:16 overclk joined #gluster
12:21 jwaibel ejnersan: i got that wrong.
12:22 ejnersan ok?
12:22 jwaibel been thinking in the context of another tool
12:22 ejnersan oh
12:23 ejnersan so no way to control which interfaces to use :-7
12:24 jwaibel option transport.socket.bind-address 192.168.1.10
12:24 jwaibel that would bind to a certain interface at least
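
For reference, that option lives in glusterd's own volfile; a sketch of /etc/glusterfs/glusterd.vol with the bind address set (the address is a placeholder, and glusterd needs a restart afterwards). Note this only binds the management daemon; it does not by itself steer brick or client traffic, which is why the hostname approach further down ends up being the fix:

    volume management
        type mgmt/glusterd
        option working-directory /var/lib/glusterd
        option transport-type socket,rdma
        option transport.socket.bind-address 192.168.1.10   # placeholder address
        option transport.socket.keepalive-time 10
        option transport.socket.keepalive-interval 2
    end-volume
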
12:25 ejnersan hmm.. need to use both bond and 10gbe
12:25 ejnersan what about allow.host ?
12:25 ejnersan *auth.allow
12:25 jwaibel are all nodes on same vlan/network?
12:26 glusterbot News from newglusterbugs: [Bug 1243384] EC volume: Replace bricks is not healing version of root directory <https://bugzilla.redhat.com/show_bug.cgi?id=1243384>
12:26 glusterbot News from newglusterbugs: [Bug 1243108] bash tab completion fails with "grep: Invalid range end" <https://bugzilla.redhat.com/show_bug.cgi?id=1243108>
12:26 ejnersan yes
12:26 jwaibel when you add peers you could specify a certain ip for the peer address
12:27 jwaibel so maybe you just add addresses on your 10gbe interface for the peers
12:27 ejnersan I'm thinking more about the workstation clients, than gluster nodes.
12:28 jwaibel use different hostnames on both interfaces so you can either use the hostname for the 10gbe interface or the other
12:28 ejnersan yeah I did that, but data is still transmitted through another nic
12:29 jwaibel i am too new to gluster to know more yet.
12:29 ejnersan well thanks anyway jwaibel
12:32 jwaibel np. i need to find a solution for that myself now you asked :P
12:34 ejnersan I thought it would be straightforward too, until I noticed that no data was sent through the bond interface.
12:38 jwaibel https://gluster.readthedocs.org/en/latest/Feature%20Planning/GlusterFS%204.0/Split%20Network/
12:38 jwaibel i just found this in the docu
12:39 jwaibel so its at least a planned feature
12:39 ejnersan ok that's something
12:41 ejnersan thanks for the link
12:42 rafi joined #gluster
12:42 kxseven joined #gluster
12:47 paraenggu left #gluster
12:55 plarsen joined #gluster
12:55 nbalacha joined #gluster
12:57 chirino joined #gluster
12:59 [Enrico] joined #gluster
13:00 shaunm__ joined #gluster
13:07 R0ok__ joined #gluster
13:09 B21956 joined #gluster
13:14 pppp joined #gluster
13:19 pdrakeweb joined #gluster
13:19 julim joined #gluster
13:19 soumya joined #gluster
13:21 dusmant joined #gluster
13:22 shaunm_ joined #gluster
13:23 Gill joined #gluster
13:23 dgandhi joined #gluster
13:24 ejnersan left #gluster
13:25 mpietersen joined #gluster
13:28 aaronott joined #gluster
13:30 georgeh-LT2 joined #gluster
13:33 Twistedgrim joined #gluster
13:43 soumya_ joined #gluster
13:43 kampnerj joined #gluster
13:44 hamiller joined #gluster
13:49 jon__ joined #gluster
13:49 kovsheni_ joined #gluster
13:49 B21956 joined #gluster
13:52 dijuremo joined #gluster
13:52 nbalacha joined #gluster
13:54 ejnersan joined #gluster
13:57 ejnersan If I mount a gluster-volume locally on the server, and then re-export that via nfs4, running 'mount' on the client shows it mounted as nfs3. How come?
14:00 Gill joined #gluster
14:00 ndevos ejnersan: uh, its not very common to mount and re-export it over nfs, I think there are some issues with that
14:01 ndevos ejnersan: I think you might still have gluster/nfs running (only NFSv3), so you would need to disable that before exporting over nfsv4
14:01 spcmastertim joined #gluster
14:01 ndevos "gluster volume set VOLUME nfs.disable true"
14:02 ndevos and you'll need to do that for all your volumes
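
A small sketch for doing that across every volume at once (assumes the gluster CLI is available on the node and that disabling gluster/nfs everywhere is really what you want):

    # disable the built-in gluster/nfs server on all volumes
    for vol in $(gluster volume list); do
        gluster volume set "$vol" nfs.disable true
    done
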
14:02 ejnersan ndevos: I see. I'm trying to get around the issue that glusterfs can't send data through a specific nic when the server has multiple nics
14:03 ndevos ejnersan: you should be able to get that done by setting the appropriate routing rules
14:04 wushudoin joined #gluster
14:04 ndevos ejnersan: or, something like split-horizon dns, or changes to /etc/hosts and have the hostnames match the IP that is on the NIC you want
14:05 ejnersan I'm not super confident in routing rules.. :-/
14:05 ejnersan I have set the hostnames to match the IP, and mount using that hostname, but data is still sent through another nic
14:06 ndevos is your volume created with hostnames or ip-addresses?
14:06 ndevos ~pasteinfo | ejnersan
14:06 glusterbot ejnersan: Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
14:07 ejnersan ndevos: http://fpaste.org/246469/37487651/
14:08 ndevos ejnersan: ok, so in that case the hostname "sol" (without domain) needs to resolve to the right IP
14:09 ndevos ejnersan: the logic for accessing the volumes is done on the (glusterfs) client-side, the client receives the volume layout when it mounts
14:09 ejnersan ok. but gluster will always only send data through one IP, right? I need to send via 2
14:10 ndevos well, yes, one IP as source, and one as destination
14:11 jwaibel but if both nics would use a different ip network that should work
14:12 ejnersan thing is, my server has 3 IPs, 'sol' is the general lan, but I want to send data through a bonded interface and a 10gbe one
14:12 jwaibel all in same ip net?
14:12 jwaibel that i forgot to ask earlier
14:12 bene joined #gluster
14:12 ejnersan jwaibel: all nics are on different ip nets
14:13 ndevos how would the kernel decide what IP to use to connect from/to? I dont think it can do some kind of load balancing like that, surely it can not do that for one single connection
14:13 jwaibel it would happen on the ip level
14:13 jwaibel routing of packets depends on the network info set
14:14 ejnersan I didn't think it would be a problem. I can specify ip-net with nfs
14:14 ejnersan thought same applied to gluster
14:15 jwaibel if for example the bonding interface used 1.1.1.1/24 and the 10gbe used 2.2.2.2/24 there would be no flow between the nets without a router somewhere
14:15 ndevos well, gluster receives the volume layout (hostname:/bricks) when mounting, after that the gluster client resolved the hostname ("sol") and connects to that IP
14:16 jwaibel but when the sol hostname would resolve to the correct (wanted) nic ip the problem would be solved
14:16 ndevos so, if you want your glusterfs clients to connect to the bond interface, you would need to setup dns or /etc/hosts to resolve "sol" to that NIC
14:16 dusmant joined #gluster
14:16 topshare joined #gluster
14:17 ejnersan ok. jwaibel helped me earlier, pointing to this link: https://gluster.readthedocs.org/en/latest/Feature%20Planning/GlusterFS%204.0/Split%20Network/
14:17 jwaibel that is a planned feature. not even sure of state
14:17 jwaibel i would not wait for that yet myself.
14:17 ejnersan so that's why I wanted to try with nfs4 as a temp solution
14:17 jwaibel lets try to solve it on the ip level
14:18 ndevos Gluster 4.0 is still many months away
14:18 jwaibel make sure your client sees the server ip as the correct interface
14:18 ejnersan clients get host ips via dnsmasq
14:19 wushudoin joined #gluster
14:19 ndevos ejnersan: the key is that the client needs to resolve "sol" to the IP from the nic that you want to use
14:19 jwaibel you could override that for a test by adding a host entry on the client and add: order hosts,bind to your /etc/resolv.conf on the client
14:20 ejnersan jwaibel: I'll try
14:21 ejnersan jwaibel: ok, so far so good
14:22 bennyturns joined #gluster
14:22 ejnersan (ping test)
14:22 jwaibel brb need to watch a failing ansible role
14:26 jwaibel btw split dns/host info is not that easy to manage for such services. i bet a lot of ppl will fail on that
14:27 jmarley joined #gluster
14:30 ejnersan joined #gluster
14:31 ejnersan jwaibel, ndevos: it works! adding the correct ip as  'sol
14:32 ejnersan 'sol' in hosts, and changing resolv.conf
14:32 ejnersan thank you!
14:33 kanagaraj joined #gluster
14:33 jwaibel great
14:33 Gill joined #gluster
14:34 jwaibel you can solve that with dnsmasq too, but that's another story. :P
14:34 ejnersan jwaibel: cool, I'll look into that :)
14:35 bennyturns joined #gluster
14:37 kanagaraj joined #gluster
14:42 ejnersan ok looks like I didn't have to change resolv.conf - just adding the ip/hostname to /etc/hosts works (probably nsswitch.conf in action..)
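
For anyone reading along, the working setup boils down to pinning the server name to the wanted interface on each client; a sketch with placeholder addresses:

    # /etc/hosts on each client
    192.168.10.5    sol    # address on the bonded interface the clients should use
    # /etc/nsswitch.conf normally already prefers local files over DNS:
    #   hosts: files dns
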
14:42 kanagaraj joined #gluster
14:43 bene joined #gluster
14:43 jwaibel i would not depend on that to work. make sure the names stay permanent
14:44 lpabon joined #gluster
14:44 jwaibel well anyway. i need a short break here before i start the next part of my ansible role
14:44 lkoranda joined #gluster
14:45 ejnersan thanks for helping me jwaibel :)
14:45 jwaibel np. i try to learn gluster myself. started to look at it yesterday.
14:46 ejnersan I'm pretty new to it myself, but think it is very promising (and generally easy :P)
14:47 jmarley joined #gluster
14:49 Gill_ joined #gluster
14:54 Gill joined #gluster
14:58 Gill_ joined #gluster
14:59 patrick__ Is there another way to avoid the 32-group limit while using ACLs besides setting ACLs on the bricks themselves? Setting the ACLs on the bricks doesn't seem to work on version 3.6.3.
15:00 Twistedgrim1 joined #gluster
15:04 social_ joined #gluster
15:09 dad264 joined #gluster
15:09 sankarsh` joined #gluster
15:15 maveric_amitc_ joined #gluster
15:15 vimal joined #gluster
15:16 ndevos patrick__: ah! you sent an email about that earlier right? I wanted to try it out, but got busy elsewhere...
15:16 ndevos patrick__: I think you will not see the ACL from the client, but it should be enforced on the bricks
15:16 B21956 joined #gluster
15:17 ghenry joined #gluster
15:17 patrick__ it's not when you make a new file on the client
15:17 ndevos oh, hmpf
15:17 patrick__ but it is when you make a new file on the brick
15:19 ndevos uh... thats interesting, the ACL should get inherited on the bricks, sounds like the posix-acl xlator is broken :-/
15:20 ndevos I wanted to rip out the posix-acl completely, it gives a lot of trouble for little to no benefit...
15:20 patrick__ thanks for not
15:21 ndevos well, ACLs should be done on a system dependent way, and the posix-acl xlator does not allow that either, we can solve it better
15:22 ndevos but for your immediate needs, I think we could address the 32-group limit first, and then rip out everything to get a more portable solution
15:29 jdossey joined #gluster
15:36 victori joined #gluster
15:39 patrick__ I'd really appreciate that. Any solution in the meantime?
15:40 patrick__ try to use less than 32 groups and mount with ACLs?
15:40 sadbox joined #gluster
15:40 jiffin joined #gluster
15:41 ndevos patrick__: yeah, there is not really an option for that now I think, maybe NFSv3 ACLs could help
15:42 patrick__ ok. Thank you ndevos. I really appreciate you taking the time to look at it.
15:42 ndevos patrick__: what would you call a mount option for fuse that resolves all the groups?
15:42 patrick__ ?
15:42 cholcombe joined #gluster
15:42 ndevos patrick__: I'm thinking to add an other mount option, like "acl", and if that option is set, call getgroups() to get all groups
15:43 ndevos patrick__: at the moment, fuse reads /proc/$PID/stats for the groups, and that lists only the first 32
15:43 _Bryan_ joined #gluster
15:43 ndevos uh, /proc/$PID/status
15:43 patrick__ gotcha. will that work? just pulling down the groups afterwards?
15:44 _dist joined #gluster
15:46 ndevos I think it will be needed to pull groups for the uid in that file...
15:47 ndevos if the process is running with a different set of groups than the uid has, it'll be difficult to detect
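
An illustrative sketch of the gap being described, run from a shell (no GlusterFS internals involved): the /proc status line is what fuse currently reads, which as noted above may list only the first 32 groups, while id resolves the full list for the user from the system databases:

    # groups of the current shell process, as exposed via /proc (capped list):
    grep '^Groups:' /proc/$$/status
    # all group IDs the user is a member of, resolved via nsswitch:
    id -G "$USER"

As ndevos points out, the two can legitimately differ when a process runs with a group set other than what the databases report for its uid.
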
15:47 l0uis after adding a new brick, do I need to specify fix-layout on the rebalance so gluster will migrate files to it?
15:50 l0uis nevermind, it seems not.
15:54 ndevos maybe newer versions do the fix-layout automatically?
15:58 patrick__ I haven't been using gluster very long and am unfamiliar with the development style. I assume critical fixes go to older versions and new features only to newer versions? (read: No chance of backporting that to 3.6, is there?)
16:00 patrick__ These are production storage systems. We watch the release logs for the right time to move forward, but are very cautious.
16:03 LebedevRI joined #gluster
16:04 l0uis ndevos: Yeah I dunno. Can't find an answer in the docs. w/o fix-layout I see data showing up on the brick though sooooo...
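
For the record, a sketch of the usual sequence with placeholder volume and brick names; fix-layout only recalculates the directory hash ranges, while a plain rebalance start also migrates existing files onto the new brick. Directories created after the add-brick get a layout that already includes it, which may be why files were appearing there without a rebalance:

    gluster volume add-brick myvol server3:/export/brick1
    # recalculate the directory layout only, no data migration:
    gluster volume rebalance myvol fix-layout start
    # or recalculate the layout and also move existing files:
    gluster volume rebalance myvol start
    gluster volume rebalance myvol status
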
16:08 soumya_ joined #gluster
16:08 theusualsuspect joined #gluster
16:12 Gill_ joined #gluster
16:13 victori joined #gluster
16:15 bennyturns joined #gluster
16:21 calavera joined #gluster
16:42 maveric_amitc_ joined #gluster
16:51 Gill joined #gluster
16:55 rafi joined #gluster
16:56 Gill joined #gluster
16:57 jiffin joined #gluster
17:02 dup joined #gluster
17:03 julim_ joined #gluster
17:16 Gill joined #gluster
17:17 lpabon joined #gluster
17:23 Gill_ joined #gluster
17:24 victori joined #gluster
17:25 mpietersen joined #gluster
17:30 _maserati joined #gluster
17:40 Rapture joined #gluster
17:47 Gill joined #gluster
18:07 unclemarc joined #gluster
18:12 virusuy__ joined #gluster
18:13 RedW joined #gluster
18:15 jermudgeon joined #gluster
18:17 devilspgd joined #gluster
18:18 anoopcs joined #gluster
18:18 ildefonso joined #gluster
18:22 rwheeler joined #gluster
18:37 calavera joined #gluster
18:44 Gill_ joined #gluster
18:45 ahab joined #gluster
18:47 TheCthulhu1 joined #gluster
18:54 wushudoin| joined #gluster
18:55 aaronott joined #gluster
18:59 wushudoin| joined #gluster
19:01 shaunm_ joined #gluster
19:08 mpietersen joined #gluster
19:13 aaronott joined #gluster
19:22 tborg joined #gluster
19:25 tborg hi all. does anyone have experience with softlayer nodes? I'm trying to find out which of their storage options might perform consistently reasonably
19:27 aaronott1 joined #gluster
19:27 glusterbot News from newglusterbugs: [Bug 1245331] volume start  command is failing  when  glusterfs compiled  with debug enabled <https://bugzilla.redhat.com/show_bug.cgi?id=1245331>
19:30 rotbeard joined #gluster
19:31 jwd joined #gluster
19:56 rafi joined #gluster
19:56 _maserati joined #gluster
19:58 _dist joined #gluster
20:07 calavera joined #gluster
20:09 shaunm_ joined #gluster
20:10 DV joined #gluster
20:22 deniszh joined #gluster
20:29 jrm16020 joined #gluster
20:42 _maserati_ joined #gluster
20:46 an joined #gluster
20:47 _maserati joined #gluster
20:52 badone joined #gluster
21:12 dgandhi joined #gluster
21:14 nsoffer joined #gluster
21:34 cleong joined #gluster
21:57 calavera joined #gluster
22:00 spcmastertim joined #gluster
22:04 aaronott joined #gluster
22:09 mpietersen joined #gluster
22:10 jbrooks joined #gluster
22:14 afics joined #gluster
22:34 mkzero joined #gluster
23:18 dgandhi1 joined #gluster
23:26 B21956 joined #gluster
23:31 B21956 joined #gluster
23:39 plarsen joined #gluster
23:43 maveric_amitc_ joined #gluster
23:49 indigoblu joined #gluster
23:53 topshare joined #gluster
23:54 shaunm_ joined #gluster
23:56 gildub joined #gluster
23:58 glusterbot News from newglusterbugs: [Bug 1245380] [RFE] Render all mounts of a volume defunct upon access revocation <https://bugzilla.redhat.com/show_bug.cgi?id=1245380>
