
IRC log for #gluster, 2017-01-18


All times shown according to UTC.

Time Nick Message
00:11 plarsen joined #gluster
00:13 shyam joined #gluster
00:28 social_ joined #gluster
00:45 caitnop joined #gluster
00:49 shdeng joined #gluster
01:17 ankit__ joined #gluster
01:29 lalatenduM joined #gluster
01:29 phileas joined #gluster
01:29 uebera|| joined #gluster
01:54 farhorizon joined #gluster
02:00 Gambit15 joined #gluster
02:04 susant joined #gluster
02:05 B21956 joined #gluster
02:06 nathwill joined #gluster
02:11 Caveat4U joined #gluster
02:23 Wizek_ joined #gluster
02:32 flomko joined #gluster
02:37 nathwill joined #gluster
02:41 riyas joined #gluster
02:41 derjohn_mobi joined #gluster
02:49 ilbot3 joined #gluster
02:49 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
03:00 jocke- joined #gluster
03:03 farhorizon joined #gluster
03:09 sanoj joined #gluster
03:25 geniusoflime joined #gluster
03:27 geniusoflime We have nfs-ganesha sitting on top of a gluster cluster. Our max write speed for NFSv3 is 100MB/s; read speed is 500-700MB/s. How can we improve write speed over NFS?
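
(Aside: the question went unanswered in-channel. A minimal sketch of write-path tunables commonly tried on gluster 3.x behind an NFS frontend — the option names exist in that series, but "myvol" is a placeholder and the values are illustrative, not prescriptive:)

    # write-behind batches small writes into larger ones before they hit the bricks
    gluster volume set myvol performance.write-behind-window-size 4MB
    # more io/event threads help parallel writers; sensible values depend on core count
    gluster volume set myvol performance.io-thread-count 32
    gluster volume set myvol server.event-threads 4
    gluster volume set myvol client.event-threads 4
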
03:31 poornima joined #gluster
03:36 magrawal joined #gluster
03:37 B21956 joined #gluster
03:47 riyas_ joined #gluster
03:47 RameshN joined #gluster
04:02 kramdoss_ joined #gluster
04:04 Shu6h3ndu joined #gluster
04:09 buvanesh_kumar joined #gluster
04:10 itisravi joined #gluster
04:30 msvbhat joined #gluster
04:34 rjoseph|afk joined #gluster
04:45 msvbhat joined #gluster
04:51 nbalacha joined #gluster
04:51 Debloper joined #gluster
04:53 msvbhat joined #gluster
04:53 Saravanakmr joined #gluster
05:00 skoduri joined #gluster
05:02 nbalacha joined #gluster
05:03 ppai joined #gluster
05:09 Prasad joined #gluster
05:13 sanoj joined #gluster
05:14 karthik_us joined #gluster
05:16 ndarshan joined #gluster
05:17 Prasad joined #gluster
05:21 nbalacha joined #gluster
05:23 Prasad_ joined #gluster
05:24 nishanth joined #gluster
05:26 Prasad__ joined #gluster
05:28 Lee1092 joined #gluster
05:29 Prasad joined #gluster
05:30 k4n0 joined #gluster
05:32 Prasad_ joined #gluster
05:40 msvbhat joined #gluster
05:40 aravindavk joined #gluster
05:40 nbalacha joined #gluster
05:46 riyas joined #gluster
05:51 k4n0 joined #gluster
05:53 apandey joined #gluster
05:54 susant joined #gluster
05:55 jiffin joined #gluster
06:02 Karan joined #gluster
06:10 kramdoss_ joined #gluster
06:11 hgowtham joined #gluster
06:12 ankit_ joined #gluster
06:16 nbalacha joined #gluster
06:17 ashiq joined #gluster
06:20 rafi joined #gluster
06:20 prasanth joined #gluster
06:22 mhulsman joined #gluster
06:23 mhulsman joined #gluster
06:25 susant joined #gluster
06:25 k4n0 joined #gluster
06:25 gyadav joined #gluster
06:25 jkroon joined #gluster
06:35 mb_ joined #gluster
06:38 nbalacha joined #gluster
06:39 rastar joined #gluster
06:46 msvbhat joined #gluster
06:49 kotreshhr joined #gluster
06:56 msvbhat joined #gluster
06:59 nbalacha joined #gluster
07:08 k4n0 joined #gluster
07:18 msvbhat joined #gluster
07:22 nbalacha joined #gluster
07:24 bluenemo joined #gluster
07:29 jtux joined #gluster
07:34 ashiq joined #gluster
07:41 saintpablo joined #gluster
07:41 nbalacha joined #gluster
07:44 Bardack joined #gluster
07:51 sbulage joined #gluster
07:52 jri joined #gluster
07:57 Prasad joined #gluster
08:00 ivan_rossi joined #gluster
08:01 nbalacha joined #gluster
08:02 rastar joined #gluster
08:02 ashiq joined #gluster
08:03 [diablo] joined #gluster
08:18 nbalacha joined #gluster
08:18 k4n0 joined #gluster
08:20 fsimonce joined #gluster
08:29 musa22 joined #gluster
08:30 musa22_ joined #gluster
08:32 Caveat4U joined #gluster
08:33 mhulsman joined #gluster
08:42 mhulsman1 joined #gluster
08:43 nbalacha joined #gluster
08:44 rafi1 joined #gluster
08:44 jtux joined #gluster
08:45 saintpablos joined #gluster
08:47 susant left #gluster
08:49 flying joined #gluster
08:54 alezzandro joined #gluster
08:55 musa22 joined #gluster
08:57 susant joined #gluster
09:00 newdave joined #gluster
09:00 sbulage joined #gluster
09:04 jiffin1 joined #gluster
09:05 kotreshhr joined #gluster
09:06 itisravi joined #gluster
09:08 nbalacha joined #gluster
09:16 ashiq joined #gluster
09:18 k4n0 joined #gluster
09:19 karthik_us joined #gluster
09:23 mbukatov joined #gluster
09:24 Slashman joined #gluster
09:25 rastar joined #gluster
09:25 m0zes joined #gluster
09:27 [diablo] joined #gluster
09:29 nbalacha joined #gluster
09:33 rafi1 joined #gluster
09:36 jiffin1 joined #gluster
09:40 musa22 joined #gluster
09:44 k4n0 joined #gluster
09:47 nbalacha joined #gluster
09:49 karthik_us joined #gluster
09:50 kotreshhr joined #gluster
09:50 derjohn_mobi joined #gluster
09:53 fsimonce joined #gluster
09:55 musa22 joined #gluster
10:02 skoduri joined #gluster
10:06 jkroon joined #gluster
10:08 nbalacha joined #gluster
10:16 pulli joined #gluster
10:19 k4n0 joined #gluster
10:27 nbalacha joined #gluster
10:27 hybrid512 joined #gluster
10:33 Gambit15 joined #gluster
10:37 k4n0 joined #gluster
10:44 unlaudable joined #gluster
10:45 nbalacha joined #gluster
10:51 panina joined #gluster
10:58 musa22 joined #gluster
11:03 mhulsman joined #gluster
11:04 nbalacha joined #gluster
11:10 musa22 joined #gluster
11:11 mhulsman1 joined #gluster
11:16 suliba joined #gluster
11:18 rastar joined #gluster
11:19 ashiq joined #gluster
11:24 k4n0 joined #gluster
11:33 mhulsman joined #gluster
11:35 shyam joined #gluster
11:36 Humble joined #gluster
11:37 mhulsman1 joined #gluster
11:58 msvbhat joined #gluster
11:58 musa22 joined #gluster
12:01 jdarcy joined #gluster
12:01 om2 joined #gluster
12:06 kotreshhr left #gluster
12:14 k4n0 joined #gluster
12:29 flying joined #gluster
12:33 susant left #gluster
12:41 k4n0 joined #gluster
12:45 ashiq joined #gluster
12:47 kettlewell joined #gluster
12:49 Jacob843 joined #gluster
12:52 sankarshan joined #gluster
12:53 sankarshan joined #gluster
12:56 rafi1 joined #gluster
12:57 musa22 joined #gluster
13:03 social joined #gluster
13:04 mhulsman joined #gluster
13:05 plarsen joined #gluster
13:08 B21956 joined #gluster
13:16 karthik_us joined #gluster
13:18 apandey joined #gluster
13:44 nbalacha joined #gluster
13:46 k4n0 joined #gluster
13:59 mhulsman joined #gluster
14:00 mhulsman1 joined #gluster
14:06 Humble joined #gluster
14:07 ashiq joined #gluster
14:17 nbalacha joined #gluster
14:19 DV__ joined #gluster
14:27 saintpabloss joined #gluster
14:32 skylar joined #gluster
14:32 ira joined #gluster
14:32 kshlm #info Fortnightly Community Meeting starts in 30 minutes in #gluster-meeting. Add your updates and topics to https://bit.ly/gluster-community-meetings
14:32 glusterbot Title: Gluster Community Meeting - HackMD (at bit.ly)
14:36 saintpablos joined #gluster
14:44 nbalacha joined #gluster
14:44 musa22 Hi All, I'm currently using glusterfs 3.7.11 and 2 replica volume. I'm trying to prevent split-brain by adding arbiter volume. Is it possible to add this to an existing 2 replica setup with volume add-brick replica 3 using 3.7.x?
14:46 musa22 I did some quick googling and all the search results suggest this is not possible on the 3.7.x branch, can someone please confirm.
14:49 farhorizon joined #gluster
14:59 squizzi joined #gluster
15:01 kshlm Community meeting has started in #gluster-meeting
15:03 jiffin1 joined #gluster
15:03 jdarcy joined #gluster
15:08 Caveat4U joined #gluster
15:09 rafi1 joined #gluster
15:17 d4n13L- joined #gluster
15:18 john51 joined #gluster
15:18 l2___ joined #gluster
15:18 alezzandro joined #gluster
15:18 alezzandro joined #gluster
15:18 cvstealt1 joined #gluster
15:18 wiza joined #gluster
15:18 Gambit15_ joined #gluster
15:18 pasik_ joined #gluster
15:18 ashka joined #gluster
15:18 ashka joined #gluster
15:19 sysanthrope joined #gluster
15:19 pocketprotector joined #gluster
15:19 ibotty joined #gluster
15:19 tg2 joined #gluster
15:20 fus joined #gluster
15:20 cholcombe joined #gluster
15:20 kotreshhr joined #gluster
15:20 rideh joined #gluster
15:21 kenansulayman joined #gluster
15:21 coredumb joined #gluster
15:22 devyani7 joined #gluster
15:22 eryc joined #gluster
15:22 glusterbot joined #gluster
15:22 DV__ joined #gluster
15:22 morse_ joined #gluster
15:22 Utoxin joined #gluster
15:22 sac joined #gluster
15:22 bhakti joined #gluster
15:22 ItsMe` joined #gluster
15:22 rossdm joined #gluster
15:22 al joined #gluster
15:22 yosafbridge joined #gluster
15:23 siel joined #gluster
15:23 slick20 anyone know where i could find : samba-vfs-glusterfs-4.4.4-12.el7.x86_64.rpm ?
15:23 yawkat joined #gluster
15:25 slick20 i need to match my samba version.. (4.4.4-12)
15:25 Caveat4U joined #gluster
15:26 riyas joined #gluster
15:27 susant joined #gluster
15:27 Caveat4U https://www.rpmfind.net/linux/RPM/centos/7.3.1611/x86_64/Packages/samba-vfs-glusterfs-4.4.4-9.el7.x86_64.html is the closest I can find, which matches the version but not the release number, not totally sure if that difference would set you
15:27 glusterbot Title: samba-vfs-glusterfs-4.4.4-9.el7.x86_64 RPM (at www.rpmfind.net)
15:28 Caveat4U The other option would be to downgrade samba to -9
15:28 Gambit15 joined #gluster
15:28 kotreshhr joined #gluster
15:28 RustyB joined #gluster
15:28 Caveat4U Or...get freaky and compile yourself @slick20
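
(Aside: a sketch of the downgrade route Caveat4U mentions, assuming the CentOS 7.3 base repo still carries the -9 release; package names follow the rpmfind link above:)

    # pin samba back to the release the vfs module was built against
    yum downgrade samba-4.4.4-9.el7 samba-common-4.4.4-9.el7 samba-libs-4.4.4-9.el7
    yum install samba-vfs-glusterfs-4.4.4-9.el7
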
15:29 billputer joined #gluster
15:29 fyxim joined #gluster
15:33 ankit_ joined #gluster
15:41 Caveat4U joined #gluster
15:41 newdave joined #gluster
15:46 musa22 Hi All, I'm currently using glusterfs 3.7.11 and 2 replica volume. I'm trying to prevent split-brain by adding arbiter volume. Is it possible to add this to an existing 2 replica setup with volume add-brick replica 3 using 3.7.x?
15:46 musa22 I did some quick googling and all the search results suggest this is not possible on the 3.7.x branch, can someone please confirm.
16:03 prasanth joined #gluster
16:06 wushudoin joined #gluster
16:07 arpu joined #gluster
16:07 wushudoin joined #gluster
16:16 flyingX joined #gluster
16:16 bob joined #gluster
16:17 Guest67874 Hello everyone
16:17 grepme joined #gluster
16:18 Guest67874 I am searching for how to enable tcp keepalive on gluster NFS server
16:18 Guest67874 is it possible ?
16:19 grepme Anyone available to help with an odd issue I've been troubleshooting?
16:20 Guest67874 I have found transport.socket.keepalive-time and transport.socket.keepalive-interval but I can't figure how to set it for NFS connections
16:21 Slashman joined #gluster
16:22 JoeJulian Guest67874: I don't think there's any configuration setting for that. Perhaps if you turn it on system-wide? http://tldp.org/HOWTO/TCP-Keepalive-HOWTO/usingkeepalive.html
16:22 glusterbot Title: Using TCP keepalive under Linux (at tldp.org)
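
(Aside: for reference, the system-wide knobs from the linked HOWTO; as Guest67874 notes below, they only apply to sockets that already set SO_KEEPALIVE:)

    # shrink the defaults (2h idle before the first probe) to something useful
    sysctl -w net.ipv4.tcp_keepalive_time=600
    sysctl -w net.ipv4.tcp_keepalive_intvl=60
    sysctl -w net.ipv4.tcp_keepalive_probes=5
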
16:22 JoeJulian grepme: Standard IRC protocol is, "Don't ask to ask". :)
16:23 Guest67874 TCP keepalive under Linux is not specific to gluster so won't help me
16:24 Guest67874 I need the option "so_keepalive" to be enabled by gluster on nfs server for connections to port 2049
16:24 nathwill joined #gluster
16:24 bbooth joined #gluster
16:25 grepme Ha! true, but I'm trying to make sure I have people's attention
16:25 grepme tldr: ovirt HCI configured gluster volume (dist-replicate) won't come up after reboots.
16:26 grepme three nodes. all the same.
16:26 grepme bond0 -> up, no ip.
16:26 grepme bond0.14 vlan for gluster
16:26 grepme rhel7
16:26 grepme gluster 3.8.8
16:27 grepme volume is tcp, 4 bricks per node,
16:27 grepme Number of Bricks: 4 x 3 = 12
16:27 grepme two gluster volumes, one named hostedEngine, one named vmdata.
16:28 grepme when attempting to start, eventually times out.
16:28 grepme have logs in D on one host, T on the other
16:28 grepme connectivity verified with tcpping, (why did they take away nc -z?)
16:29 grepme 24007 is open
16:29 grepme gluster peer status shows connected
16:29 grepme on all three nodes
16:29 grepme turned off selinux, turned off firewall (for troubleshooting.. need it back on later)
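
(Aside: re the nc -z lament a few lines up — a bash-only equivalent for checking the glusterd port, using one of grepme's node IPs from the log:)

    # /dev/tcp is a bash builtin path; exit status 0 means the port accepted the connection
    timeout 1 bash -c 'cat < /dev/null > /dev/tcp/10.49.1.137/24007' && echo open
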
16:29 JoeJulian volume status shows the bricks up?
16:30 grepme gluster volume status returns "Error : Request timed out"
16:30 Caveat4U joined #gluster
16:30 grepme gluster volume info shows Status: Started
16:31 JoeJulian Hmm, timing out seems like unresponsive glusterd.
16:31 grepme seeing many 0-management: bailing out frame type(Peer mgmt) type " E " messages.. suggests network layer issue?
16:32 grepme yeah, restarts, kill, reboot,
16:32 grepme same results for me.
16:32 grepme turned off fw, selinux.
16:32 JoeJulian E is error, so yeah it seems likely. Paste the whole error line.
16:32 sage joined #gluster
16:32 grepme from node 1 (10.49.1.129) [2017-01-18 16:07:06.523373] E [rpc-clnt.c:200:call_bail] 0-management: bailing out frame type(Peer mgmt) op(--(2)) xid = 0x4 sent = 2017-01-18 15:56:56.249900. timeout = 600 for 10.49.1.137:24007 [2017-01-18 16:07:06.523418] E [rpc-clnt.c:200:call_bail] 0-management: bailing out frame type(Peer mgmt) op(--(2)) xid = 0x4 sent = 2017-01-18 15:56:56.258315. timeout = 600 for 10.49.1.145:24007
16:32 glusterbot grepme: op('s karma is now -1
16:32 JoeJulian "bailing out frame" isn't a network frame, but a fop frame.
16:32 glusterbot grepme: op('s karma is now -2
16:33 grepme karma killer.
16:33 grepme lowered my karma for pasting a log line . :-(
16:34 JoeJulian This is the rpc call to 10.49.1.137 that hasn't been answered for 5 minutes.
16:34 grepme yep, that's the node "2"
16:34 JoeJulian No, you lowered op('s karma.
16:34 JoeJulian Oh, two lines. They're both timing out.
16:35 grepme yes
16:35 grepme so they say
16:35 JoeJulian That does seem odd.
16:35 grepme especially since gluster peer status shows connected
16:35 JoeJulian Do the other two have similar errors?
16:35 grepme yes.
16:36 grepme and tcppings to 24007 show open to all hosts from all hosts.
16:37 JoeJulian grepme: fpaste a clean glusterd log, from starting the service to those error lines.
16:38 musa22 JoeJulian: I'm currently using 2 node/ 2 replica volume, I would like to add 3rd node arbiter volume to prevent split-brain. Is it possible to add this to an existing 2 replica setup with volume add-brick replica 3 using 3.7.x?
16:39 grepme ok.. Lemme see how to do that. egress from the dc is a bit complicated
16:39 grepme gimmie a minute
16:40 JoeJulian musa22: I'd have to spin up a VM and test it to be sure. If it works, the command should be "gluster volume add-brick $volname replica 3 arbiter 1 $brick_path"
16:40 JoeJulian musa22: if it doesn't work, it will just error.
16:41 musa22 all the Google results suggest this is not possible on the 3.7.x release.
16:41 musa22 I was hoping someone could confirm this.
16:41 JoeJulian Personally, I would rather try it than spend a half hour scraping the internet. ;)
16:42 JoeJulian Give me 10 minutes and I'll try it for you if you'd rather. I've got to do a couple things here first.
16:42 JoeJulian Plus, I need to grab an older gluster package...
16:44 jdossey joined #gluster
16:44 bbooth joined #gluster
16:45 musa22 JoeJulian: If you can, this will be great. Or if you could guide me how to use gluster vol add-brick. This is my existing vol configuration: https://paste.fedoraproject.org/529896/57850148/
16:45 glusterbot Title: #529896 • Fedora Project Pastebin (at paste.fedoraproject.org)
16:48 kpease joined #gluster
16:49 kpease_ joined #gluster
16:49 msvbhat joined #gluster
16:50 JoeJulian musa22: you would have to add 3 bricks. The syntax I stated remains the same but "$brick_path" would be "$brick_path1 $brick_path2 $brick_path3"
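
(Aside: spelled out for what is apparently a 3x2 distribute-replicate layout — one arbiter brick per replica set — JoeJulian's syntax would look like this; the host and brick paths are placeholders:)

    gluster volume add-brick myvol replica 3 arbiter 1 \
        arbiterhost:/bricks/arb1 arbiterhost:/bricks/arb2 arbiterhost:/bricks/arb3
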
16:53 bowhunter joined #gluster
16:53 grepme thanks for your help and patience with me on the fpaste, my log in debug is very large and gets rejected. trying to cut it down.
16:55 Karan joined #gluster
16:55 musa22 JoeJulian: Thanks, will try it on my VM.
16:55 grepme https://paste.fedoraproject.org/529905/47584441/
16:55 glusterbot Title: #529905 • Fedora Project Pastebin (at paste.fedoraproject.org)
16:55 grepme There ya go, JOe
16:55 JoeJulian Alrighty then...
16:56 grepme tia.. :-)
16:59 JoeJulian Well... All that work but there's no error. Sorry.
17:00 grepme whuuu
17:00 grepme i guess I chopped way too much trying to get it small enough to fpaste
17:01 * grepme shakes fist at screen
17:01 JoeJulian Just grep -v ' D ' for now. We'll see if we need debug later.
17:01 grepme ok
17:02 grepme good plan
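
(Aside: JoeJulian's filter in full, against the default glusterd log path on RHEL 7 — the path assumes stock packaging:)

    # drop debug-level (' D ') lines so the paste fits
    grep -v ' D ' /var/log/glusterfs/etc-glusterfs-glusterd.vol.log > glusterd-nodebug.log
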
17:04 grepme https://paste.fedoraproject.org/529909/47589871/
17:04 glusterbot Title: #529909 • Fedora Project Pastebin (at paste.fedoraproject.org)
17:15 JoeJulian That's interesting. It looks like it's some sort of peer resolution storm. cc: kkeithley
17:16 grepme huh.
17:16 grepme I did double check the entries in /etc/hosts
17:16 grepme and dns (ipa)
17:16 grepme they match
17:16 grepme I only have fqdn in /etc/hosts
17:17 JoeJulian Please file a bug report. Include the logs, and a state dump from each of the servers.
17:17 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
17:17 grepme this is one of the first things we did during our trouble shootings and investigation
17:17 grepme statedump times out
17:17 Caveat4U joined #gluster
17:18 alvinstarr joined #gluster
17:18 JoeJulian pkill -USR1 glusterd
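
(Aside: SIGUSR1 tells a gluster process to write a statedump rather than killing it; by default the dumps land under /var/run/gluster/:)

    pkill -USR1 glusterd
    ls /var/run/gluster/glusterdump.*
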
17:21 JoeJulian There's an rpc race condition fix between 3.8.7 and 3.8.8. It might be worth trying 3.8.7.
17:21 JoeJulian In case that fix caused a regression.
17:27 musa22 JoeJulian: I've tried adding an arbiter volume to the existing 2 replica volume, I'm getting an error. https://paste.fedoraproject.org/529923/84760346/
17:27 glusterbot Title: #529923 • Fedora Project Pastebin (at paste.fedoraproject.org)
17:27 musa22 JoeJulian: Can i assume this is not supported in 3.7.x?
17:27 JoeJulian musa22: There's your answer. Nope.
17:27 JoeJulian But....
17:28 JoeJulian You can delete and recreate the volume with your arbiter bricks.
17:28 grepme what would be a good desciption of this bug?
17:28 JoeJulian grepme: rpc frame timeouts
17:28 jiffin joined #gluster
17:30 musa22 JoeJulian: I can't do this in production. I think my best option is creating a new 3 replica volume with arbiter, then migrating data from the existing volume?
17:31 JoeJulian musa22: If that's an option, sure. Either way you're going to have some downtime. If it's cheaper to kick people off and do a last minute sync, or to kick people off and recreate the volume is probably use-case specific.
17:36 musa22 JoeJulian: Thanks for your assistance. Any option that prevents split-brain is worth it. I keep dealing with split-brain issues every other month.
17:37 JoeJulian Yuck, that would be frustrating.
17:38 Caveat4U joined #gluster
17:38 musa22 lesson learned the hard way :)
17:39 Caveat4U joined #gluster
17:47 Shu6h3ndu joined #gluster
17:47 ivan_rossi left #gluster
17:48 Guest67874 JoeJulian: enabling it system wide does not work
17:48 JoeJulian Bummer
17:48 Guest67874 the application has to enable it anyway
17:48 alvinstarr joined #gluster
17:48 grepme bug submitted.
17:48 kotreshhr left #gluster
17:48 grepme ok
17:50 JoeJulian Guest67874: In the source, there's an undocumented setting, "transport.keepalive".
17:51 JoeJulian Doesn't appear to affect nfs though.
17:51 Guest67874 yes that's what I think
17:51 Guest67874 looks like it's only for the gluster daemon itself
17:53 JoeJulian Guest67874: what about nfs-ganesha? Does it support keepalive?
17:54 timotheus1 joined #gluster
17:54 Guest67874 JoeJulian: is it part of glusterfs ?
17:54 Saravanakmr joined #gluster
17:55 JoeJulian No, but they have a close relationship. https://gluster.readthedocs.io/en/latest/Administrator%20Guide/NFS-Ganesha%20GlusterFS%20Integration/
17:55 glusterbot Title: Configuring NFS-Ganesha server - Gluster Docs (at gluster.readthedocs.io)
17:55 saali joined #gluster
17:56 Guest67874 ok, I'll have a look if I can make it work on debian 8
17:56 riyas joined #gluster
17:56 grepme @joejulian thanks for the time. bug submitted. Now to discuss with tribe about next steps in project, based on output from bug. run an old version? tear down and build anew?
17:57 grepme Thanks again. much love for you and gluster tribe.
17:57 JoeJulian Guest67874: There's a PPA
17:57 Guest67874 ok thanks !
17:57 JoeJulian Guest67874: Also #ganesha
17:58 Guest67874 I'll have a look
18:09 armyriad joined #gluster
18:13 bbooth joined #gluster
18:19 prasanth joined #gluster
18:33 kkeithley nfs-ganesha doesn't do anything with SO_KEEPALIVE, so offhand I'd say no, it doesn't support keep-alive
18:48 MidlandTroy joined #gluster
19:00 gkeane joined #gluster
19:13 jdossey joined #gluster
19:13 gkeane joined #gluster
19:21 cliluw joined #gluster
19:37 saali joined #gluster
19:40 mhulsman joined #gluster
19:51 Caveat4U_ joined #gluster
19:55 snehring where does gluster store snapshot information? I've got a machine that thinks it has a snapshot that no longer exists.
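
(Aside: snapshot metadata lives under glusterd's working directory; a stale entry there is usually why a node "remembers" a snapshot that's gone:)

    ls /var/lib/glusterd/snaps/
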
20:04 bowhunter joined #gluster
20:09 Caveat4U joined #gluster
20:10 saali joined #gluster
20:11 bbooth joined #gluster
20:14 Caveat4U_ joined #gluster
20:27 musa22 joined #gluster
20:27 pulli joined #gluster
20:33 grepme Didn't mention it earlier, but the bug id => 1414519
20:38 bbooth joined #gluster
20:52 Acinonyx joined #gluster
20:54 esumerfield joined #gluster
20:55 alezzandro joined #gluster
20:56 esumerfield left #gluster
20:58 JoeJulian aka bug 1414519
20:58 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1414519 high, unspecified, ---, bugs, NEW , Glusterd fails to start: rpc frame timeouts
20:58 squizzi joined #gluster
21:02 Acinonyx_ joined #gluster
21:03 msvbhat joined #gluster
21:05 bbooth joined #gluster
21:06 mhulsman joined #gluster
21:09 Caveat4U joined #gluster
21:12 Acinonyx joined #gluster
21:17 musa22 joined #gluster
21:20 Acinonyx joined #gluster
21:21 Caveat4U joined #gluster
21:27 snehring nvm: reboot of the machine fixed it
21:28 bbooth joined #gluster
21:29 derjohn_mobi joined #gluster
21:35 DK2 joined #gluster
21:35 DK2 any ideas to debug a slow gluster?
21:35 DK2 I'm hosting a webpage on it and it's awfully slow
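
(Aside: unanswered in-channel; gluster's built-in profiler is the usual first look at where the latency goes — the commands exist in 3.x, "myvol" is a placeholder:)

    gluster volume profile myvol start
    # reproduce the slow page loads, then:
    gluster volume profile myvol info
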
21:43 jbrooks joined #gluster
21:47 Caveat4U joined #gluster
21:49 bbooth joined #gluster
22:01 DK2 is there any way to install glusterfs 3.8 or 3.9 on centos 6?
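
(Aside: also unanswered; at the time the CentOS Storage SIG shipped 3.8 packages for el6 — 3.9 on el6 is less certain, so treat this as a sketch for 3.8 only:)

    yum install centos-release-gluster38
    yum install glusterfs-server
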
22:09 jbrooks joined #gluster
22:17 bbooth joined #gluster
22:22 shyam joined #gluster
22:44 bbooth joined #gluster
22:44 farhorizon joined #gluster
23:08 musa22 joined #gluster
23:24 Acinonyx joined #gluster
23:24 timotheus1 joined #gluster
23:24 RustyB joined #gluster
23:24 siel joined #gluster
23:24 d4n13L joined #gluster
23:24 saintpablos joined #gluster
23:24 lalatenduM joined #gluster
23:24 phileas joined #gluster
23:24 uebera|| joined #gluster
23:27 colm joined #gluster
23:47 malevolent joined #gluster
23:57 wushudoin joined #gluster
23:58 wushudoin joined #gluster
