
IRC log for #gluster, 2014-03-28


All times shown according to UTC.

Time Nick Message
00:03 glustercjb left #gluster
00:08 lkoranda joined #gluster
00:14 coredump joined #gluster
00:15 elico xfs is not supported by ubuntu or debian as far as I remember but i'm not sure about debian.
00:16 gdubreui joined #gluster
00:18 seapasulli joined #gluster
00:19 coredump joined #gluster
00:19 kam270 joined #gluster
00:21 chirino joined #gluster
00:27 kam270 joined #gluster
00:28 sprachgenerator joined #gluster
00:28 vpshastry1 joined #gluster
00:35 kam270 joined #gluster
00:37 harish_ joined #gluster
00:44 qdk joined #gluster
00:47 theron joined #gluster
00:52 rahulcs joined #gluster
00:54 badone joined #gluster
00:58 Isaacabo when you say supported, what do you mean? because i have 4 debian vms with a 2x2 replicated-distributed lab and it works fine
00:58 Isaacabo on xfs
01:08 nightwalk joined #gluster
01:13 harish_ joined #gluster
01:18 raghug joined #gluster
01:21 bala joined #gluster
01:40 vpshastry1 joined #gluster
01:42 tokik joined #gluster
01:44 _dist joined #gluster
01:45 tdasilva left #gluster
01:47 jag3773 joined #gluster
02:30 delhage joined #gluster
02:45 dusmant joined #gluster
02:46 bharata-rao joined #gluster
02:57 mattapperson joined #gluster
03:08 tokik joined #gluster
03:15 tokik joined #gluster
03:21 bala joined #gluster
03:24 raghug joined #gluster
03:27 toki joined #gluster
03:29 shubhendu joined #gluster
03:43 RameshN joined #gluster
03:45 aravindavk joined #gluster
03:46 itisravi joined #gluster
03:52 kanagaraj joined #gluster
03:52 itisravi joined #gluster
03:57 vpshastry1 joined #gluster
04:07 spandit joined #gluster
04:09 gdubreui joined #gluster
04:13 hagarth joined #gluster
04:16 ndarshan joined #gluster
04:29 mattapperson joined #gluster
04:44 raghug joined #gluster
04:47 ppai joined #gluster
04:52 mohankumar joined #gluster
04:52 atinm joined #gluster
04:53 nightwalk joined #gluster
04:55 chirino joined #gluster
04:55 kdhananjay joined #gluster
04:57 sputnik1_ joined #gluster
04:59 nishanth joined #gluster
05:01 vpshastry1 joined #gluster
05:14 sputnik1_ joined #gluster
05:16 prasanth_ joined #gluster
05:17 sahina joined #gluster
05:19 deepakcs joined #gluster
05:21 gdubreui joined #gluster
05:22 vpshastry1 mohankumar: ping
05:30 ricky-ti1 joined #gluster
05:35 mohankumar vpshastry1: pong
05:38 hagarth joined #gluster
05:40 ravindran1 joined #gluster
05:50 aravindavk joined #gluster
05:51 prasanth_ joined #gluster
05:52 prasanth_ joined #gluster
05:52 edward1 joined #gluster
05:52 prasanth_ joined #gluster
05:52 lalatenduM joined #gluster
05:53 prasanth_ joined #gluster
05:53 prasanth_ joined #gluster
06:05 ravindran1 joined #gluster
06:09 benjamin_____ joined #gluster
06:12 hchiramm_ joined #gluster
06:14 lalatenduM purpleidea, ping
06:15 sputnik1_ joined #gluster
06:29 bala joined #gluster
06:29 vimal joined #gluster
06:30 psharma joined #gluster
06:32 mattapperson joined #gluster
06:44 hchiramm_ joined #gluster
06:51 glusterbot New news from newglusterbugs: [Bug 1081870] Updating glusterfs packages reads 'rpmsave' files created by previous updates, and saves the files as .rpmsave.rpmsave. <https://bugzilla.redhat.com/show_bug.cgi?id=1081870>
06:52 hagarth joined #gluster
06:59 purpleidea lalatenduM: hey
07:00 purpleidea pong
07:08 ngoswami joined #gluster
07:15 sputnik1_ joined #gluster
07:15 Philambdo joined #gluster
07:18 ekuric joined #gluster
07:21 glusterbot New news from newglusterbugs: [Bug 921215] Cannot create volumes with a . in the name <https://bugzilla.redhat.com/show_bug.cgi?id=921215>
07:22 vpshastry2 joined #gluster
07:25 jtux joined #gluster
07:26 rgustafs joined #gluster
07:29 lalatenduM purpleidea, u still thr ? sorry I was away
07:31 glusterbot New news from resolvedglusterbugs: [Bug 847624] [FEAT] Support for duplicate reply cache <https://bugzilla.redhat.com/show_bug.cgi?id=847624>
07:32 ricky-ticky1 joined #gluster
07:38 purpleidea lalatenduM: hey
07:38 purpleidea it's okay, i am here now
07:38 purpleidea but i might go to bed shortly
07:38 lalatenduM purpleidea, I got your msg
07:39 purpleidea cool
07:39 lalatenduM purpleidea, we will include puppet rpms definitely for storage sig
07:39 lalatenduM purpleidea, but we haven't figured out the process in the CentOS build systems and community yet
07:40 purpleidea sounds good to me! have a quick look at them, since it was my first time making rpm's for real.
07:40 purpleidea comments appreciated
07:40 lalatenduM purpleidea, once we figure out that, will add you and we can take it ahead
07:41 lalatenduM purpleidea, yup, I actually installed CentOS 6.5 and am trying all the RPMs related to the gluster ecosystem, so will try it
07:43 purpleidea lalatenduM: great! sorry that they're all in separate folders as opposed to being in one big repo... i figured i'd wait so that they go into someone else's
07:43 purpleidea repo perhaps.
07:43 purpleidea (also) it should be easy to build your own with 'make rpm' from my upstream source tree
07:47 lalatenduM purpleidea, yeah, thats fine, should be fine
07:48 lalatenduM purpleidea, have read ur blog about the rpms, will get back to you if I find any issue
07:49 purpleidea lalatenduM: sounds good :)
07:49 lalatenduM purpleidea, good night, its lunch time for me :)
07:49 purpleidea hehe it's almost 4am here :) enjoy your lunch, i'll be sleeping!
07:54 ctria joined #gluster
07:59 eseyman joined #gluster
08:12 FarbrorLeon joined #gluster
08:14 davinder2 joined #gluster
08:26 haomaiwang joined #gluster
08:28 gdubreui joined #gluster
08:38 Pavid7 joined #gluster
08:40 andreask joined #gluster
08:56 chirino joined #gluster
08:58 psharma joined #gluster
09:02 lalatenduM joined #gluster
09:14 sputnik1_ joined #gluster
09:33 mohankumar joined #gluster
09:40 jbustos joined #gluster
09:48 Slash joined #gluster
09:52 harish_ joined #gluster
09:58 chirino joined #gluster
10:07 dusmant joined #gluster
10:07 nightwalk joined #gluster
10:09 jmarley joined #gluster
10:12 qdk joined #gluster
10:13 Frankl joined #gluster
10:15 Frankl hi
10:15 glusterbot Frankl: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
10:17 Frankl I have a question: when I use a gluster 3.3 client to mount a gluster 3.4.2 volume, the mount fails with the error "0-glusterd: Client x.x.x.x:709 (1 -> 1) doesn't support required op-version (2). Rejecting volfile request."
10:18 Frankl Is it not permitted to use a gluster 3.3 client to mount a 3.4 volume?
10:19 Norky I thought it was, I'm certain you can do it the other way around
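
A rough way to confirm the mismatch Frankl is hitting, assuming stock package locations (paths can differ per distribution): the failing client reports its own version, and each 3.4.x server records the cluster op-version it insists on in glusterd's working directory.

    # on the client that fails to mount
    glusterfs --version
    # on each 3.4.2 server (assumption: glusterd keeps its op-version in glusterd.info)
    grep operating-version /var/lib/glusterd/glusterd.info

The error above reads as the servers requiring op-version 2 while the 3.3 client only offers op-version 1, so upgrading the client, or mounting over plain NFS, would be the likely ways around it.
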
10:20 Bardack an
10:35 Frankl Hi, all, I also saw such warnings: [2014-03-28 09:58:43.669988] W [xlator.c:137:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/3.4.2/xlator/configuration.so: cannot open shared object file: No such file or directory
10:36 crazifyngers joined #gluster
10:41 liquidat joined #gluster
10:47 harish_ joined #gluster
10:48 SteveCooling joined #gluster
11:07 lpabon joined #gluster
11:09 rahulcs joined #gluster
11:09 dusmant joined #gluster
11:13 Alpinist joined #gluster
11:13 atinm Guys, has anyone come across a chksum mismatch while re-adding a peer?
11:14 tryggvil joined #gluster
11:15 atinm i detached a peer and when I tried to re-add it, the probe succeeds but the status shows Peer Rejected and the log file says there is a chksum mismatch in the volumes
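
For the Peer Rejected / chksum mismatch atinm describes, the recovery usually suggested is to wipe the rejected peer's stale volume definitions and let a healthy peer resync them. A minimal sketch, assuming the good copy of the configuration lives on another peer and that /var/lib/glusterd is backed up first (hostnames are placeholders):

    # on the rejected peer
    service glusterd stop
    cp -a /var/lib/glusterd /root/glusterd.bak
    cd /var/lib/glusterd && ls | grep -v glusterd.info | xargs rm -rf   # keep the node's UUID file
    service glusterd start
    # from any healthy peer, re-probe it
    gluster peer probe <rejected-peer>
    # back on the rejected peer, pull the volume definitions from a healthy peer
    gluster volume sync <healthy-peer> all

gluster volume sync copies the volume definitions from the named peer, which should clear the checksum disagreement; a second glusterd restart on the rejected peer is sometimes needed before the status goes back to Peer in Cluster.
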
11:18 Copez joined #gluster
11:19 vimal joined #gluster
11:20 Copez Strange question about performance
11:20 lalatenduM joined #gluster
11:22 Copez I've replaced my HDD's with SSD
11:22 Copez 3x Node / 8x SSD per Node
11:22 Copez Speed doesn't increase :s
11:23 andreask joined #gluster
11:29 hagarth joined #gluster
11:42 qdk joined #gluster
11:44 diegows joined #gluster
11:46 SteveCooling joined #gluster
12:12 ctria joined #gluster
12:13 Norky suggesting speed is not limited by the disks
12:13 Norky what speed are you seeing?
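
With a replicated volume the client pushes every write to all of its replicas over the network, so once the disks are SSDs the NIC or the fuse client is often what caps throughput. A rough sketch for separating the layers (brick path, volume name and peer address are placeholders):

    # raw brick speed, bypassing gluster
    dd if=/dev/zero of=/bricks/ssd0/ddtest bs=1M count=4096 oflag=direct
    # network between peers
    iperf -s              # on one node
    iperf -c <peer-ip>    # on another
    # end to end through the gluster mount
    dd if=/dev/zero of=/mnt/<volname>/ddtest bs=1M count=4096 conv=fdatasync

If the mount-level number sits near the iperf figure divided by the replica count, the network rather than the SSDs is the ceiling.
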
12:14 edong23 joined #gluster
12:20 T0aD joined #gluster
12:22 dusmant joined #gluster
12:29 Pavid7 joined #gluster
12:31 chirino joined #gluster
12:32 itisravi joined #gluster
12:33 flrichar joined #gluster
12:41 tdasilva joined #gluster
12:41 liquidat joined #gluster
12:46 haomaiwang joined #gluster
12:47 bala joined #gluster
12:50 hagarth joined #gluster
12:53 bms I deleted 30TB of files from a brick in a distributed volume in gluster 3.3. df still shows space used.  du shows file data to be in .glusterfs. If I delete .glusterfs, will gluster rebuild it ok?
12:54 kkeithley you deleted files from the underlying brick, not the mounted volume?
12:55 bms correct. Thought I would "save some time" 3 days ago
12:55 kkeithley never (never, never, never) touch the underlying brick.
12:55 bms gotcha
13:00 kkeithley I suppose you could do a find on .../.glusterfs/xx/xx/* and see which files only have 1 (one) link and delete those. Be very careful with that.
13:01 kkeithley I don't recommend just deleting .glusterfs
13:02 kkeithley When JoeJulian gets on in a little while he or semiosis or others might have advice too.
13:05 sroy_ joined #gluster
13:07 bms thanks
13:12 _dist joined #gluster
13:16 hagarth joined #gluster
13:21 primechuck joined #gluster
13:26 benjamin_____ joined #gluster
13:29 seapasulli joined #gluster
13:31 dbruhn joined #gluster
13:34 B21956 joined #gluster
13:48 Frankl joined #gluster
13:54 bennyturns joined #gluster
13:58 jbustos_ joined #gluster
14:02 rwheeler joined #gluster
14:09 mattappe_ joined #gluster
14:11 itisravi joined #gluster
14:14 cfeller joined #gluster
14:15 failshell joined #gluster
14:17 wushudoin joined #gluster
14:22 cfeller joined #gluster
14:30 cfeller joined #gluster
14:31 atrius joined #gluster
14:32 rpowell joined #gluster
14:32 rpowell left #gluster
14:32 bms So to find files below .glusterfs whose counterparts had been deleted directly from the brick, I ran  find . -exec stat -c "%3h %11s %N" {} \; | egrep -v '^  2 ' | grep -v ' -> '
14:32 bms this returned rows like this:  1   937209856 `./d5/88/d5889e92-dde8-4f4c-a2ef-6708e3b57957'
14:33 bms I believe that if I remove these my missing storage will be returned. Has anyone done this?
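
A simpler expression of the same search kkeithley suggested, assuming a pure distribute volume like bms's (no replica to heal from) and with the brick path adjusted to the real one: regular files under .glusterfs normally carry a second hard link from the brick's data tree, so the orphans left behind by deleting data directly from the brick are exactly the regular files with a link count of 1.

    # dry run first: list candidate orphans, don't delete anything yet
    find /path/to/brick/.glusterfs -type f -links 1 -print
    # only after reviewing the list (and with backups), the same match can be removed:
    # find /path/to/brick/.glusterfs -type f -links 1 -delete

-type f skips the symlinks that .glusterfs uses for directories, which is what the grep -v ' -> ' in the pipeline above was doing by hand.
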
14:37 tryggvil joined #gluster
14:51 gmcwhistler joined #gluster
15:07 lmickh joined #gluster
15:08 benjamin_____ joined #gluster
15:09 cfeller joined #gluster
15:10 jag3773 joined #gluster
15:17 nightwalk joined #gluster
15:17 LoudNoises joined #gluster
15:21 andreask joined #gluster
15:23 harish_ joined #gluster
15:24 sputnik1_ joined #gluster
15:26 bala joined #gluster
15:27 Isaacabo joined #gluster
15:27 bazzles joined #gluster
15:31 sputnik1_ joined #gluster
15:36 pdrakewe_ joined #gluster
15:40 sputnik1_ joined #gluster
15:45 ryand joined #gluster
15:46 ryand anyone around that has experience with very large deployments? I'm trying to find some information on best practices
15:46 sputnik1_ joined #gluster
15:46 ryand can't seem to google much
15:46 ryand would love to be pointed in the direction of some docs
15:50 daMaestro joined #gluster
15:57 ryand well it sure is dead around here ;)
15:59 dbruhn ryand, what do you need to know
15:59 dbruhn very large is a subjective phrase
16:00 ryand starting at 500tb
16:01 dbruhn raw or 500TB of data?
16:01 jobewan joined #gluster
16:01 ryand data
16:01 ryand replica 2
16:02 dbruhn There are a couple guys in the 90-hundreds of TB that hang out in here.
16:02 dbruhn Where do you want to start?
16:02 gmcwhistler joined #gluster
16:03 ryand just trying to get a feel for brick layout and best practices
16:03 ryand would prefer not to run any raid
16:03 dbruhn You planning on a distributed and replicated system?
16:03 ryand replicated
16:03 ryand need to maintain 2 copies of each file
16:03 dbruhn most guys prefer raid over refilling bricks, myself included. It's a preference though
16:03 dbruhn performance requirements?
16:04 sputnik1_ joined #gluster
16:04 ryand mostly archival
16:04 ryand so not heavy read requirements
16:05 dbruhn yeah, but you still need to write to it, and when you do need to read from it, how fast do you expect to get data off of it?
16:05 ryand well assuming I used nodes with 36 x sata per node, and had 2 replicas
16:05 ryand theoretically my read speeds would be limited to that single spindle
16:05 ryand if I was 1 brick per physical disk
16:05 ryand correct?
16:06 dbruhn yep
16:06 dbruhn most guys have a hard time wrapping their head around that
16:06 ryand ;)
16:06 dbruhn Honestly, I prefer raid, under it, rebuilding bricks can be super slow
16:06 ryand it's a video archival system
16:07 ryand so if there is a read, it's one big stream
16:07 ryand so looking at it from the other angle
16:07 dbruhn Are your files going to be able to fit on a single spindle easily?
16:08 ryand yeah, with a single brick being a 4tb sata disk
16:08 dbruhn file sizes?
16:08 ryand that would limit file size to 4tb
16:08 ryand < 100gb
16:08 dbruhn max file size?
16:08 ryand < 100gb
16:08 dbruhn sorry read the ><> backwards
16:09 dbruhn lol
16:09 ryand for performance reasons, overall it's best practice to run some level of raid then
16:09 ryand as say a r-6 rebuild is quicker than a brick rebuild?
16:09 dbruhn Well convenience really
16:09 ryand well 10 servers, each with 36 bricks
16:10 ryand gets a bit complex configuration-wise
16:10 ryand ;)
16:10 dbruhn an r-6 rebuild can happen in the background; a brick, if full, will be 4TB of data that needs to be copied into the brick
16:10 ryand yeah
16:10 dbruhn and to be honest a self heal and rebalance ops are not fast
16:10 ryand in terms of read performance the r6 would crush single bricks
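
Rough numbers behind that: a lone 7,200 rpm SATA drive streams somewhere around 150 MB/s sequentially, while a 9-disk RAID-6 group (7 data spindles) can plausibly sustain several times that, so with one brick per disk any single large file is pinned to the single-drive figure, whereas a RAID-backed brick spreads the same stream across the whole group.
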
16:10 LoudNoises plus a nice raid card will give you patrol reads for free
16:11 ryand yeah
16:11 dbruhn Not that they have to be huge raid groups either, smaller pools have less of an impact.
16:12 LoudNoises yea size really depends on your performance requirements
16:12 ryand well the resulting filesystem would be one big one
16:12 ryand so unless there is some pressing reason to create smaller disk groups and present as individual bricks
16:12 LoudNoises yea you get that from the distributed part of the replicated + distributed setup
16:12 dbruhn yeah, but if you ever need to work on the bricks under the system, smaller pools under gluster become more manageable
16:13 Mo_ joined #gluster
16:13 ryand so say I had one large 136TB brick on a single node
16:13 ryand and that node needs some sort of physical maint and is taken offline
16:14 ryand that is going to be more impactful than that node having say 4 x 28TB bricks
16:14 ryand each being a 9 disk r6 group
16:14 LoudNoises yea, for instance we have 60 drive jbods attached to each server and split those up into 4-6 bricks per
16:14 dbruhn Yep
16:15 LoudNoises just make sure when you do that you setup replication between servers and both replicated bricks don't end up on the same server :)
16:15 ryand haha yeah
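
A sketch of what LoudNoises is warning about, with placeholder volume, server and brick names: in a distributed-replicated volume the bricks are grouped into replica sets in the order they appear on the command line, so each consecutive pair should straddle two servers.

    # consecutive bricks form a replica pair; pairing server1 with server2 each time
    # keeps both copies of any file off the same machine
    gluster volume create archive replica 2 \
        server1:/bricks/b1 server2:/bricks/b1 \
        server1:/bricks/b2 server2:/bricks/b2

Listing server1:/bricks/b1 next to server1:/bricks/b2 instead would put both replicas of some files on server1, which is exactly the layout to avoid.
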
16:15 ryand so on those say 10 disks in a brick
16:15 ryand how are you combining those into a brick, raid level? lvm?
16:15 LoudNoises we use raid 6
16:16 LoudNoises with a couple of hotspares
16:16 dbruhn Real world example I just went through, I have 12 drives per server, in 2 raid groups. I ended up with a corrupt file system under one. I only had to self heal 2.2TB of data back after formatting the file system, instead of 4.4. I averaged about 250GB of data a day on the self heal performance.
16:16 dbruhn with 40GB infiniband, and 15k sas drives
16:16 ryand ....
16:16 ryand 250gb/day
16:17 ryand ouch
16:17 ryand so low
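
For scale: at 250 GB/day, dbruhn's 2.2 TB heal works out to roughly nine days, and healing the full 4.4 TB pair would have been on the order of eighteen, which is the kind of window that makes smaller bricks (or RAID under them) attractive.
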
16:17 failshell joined #gluster
16:17 stupidnic joined #gluster
16:17 dbruhn you got it, it's a low priority process and my system is still getting hit super hard
16:17 dbruhn the data is getting double the reads off the surviving replica: the normal I/O plus the reads for the heal, and on top of that everything has to be written to the replication target
16:17 ryand but still that seems like a horrible recovery time
16:18 dbruhn so I ended up tripling the work load
16:18 dbruhn exactly, hence limiting the impact
16:18 ryand well for us we would be looking at 7-10 host nodes to start with
16:18 dbruhn one thing to keep in mind, is my system is I/O intensive, so everything slows down when stuff like this happens
16:18 dbruhn ryand, use server and client, makes it easier for everyone around here.
16:19 ryand sure
16:19 dbruhn If you hang out here, 'node' gets confusing after a while
16:19 ryand servers :)
16:19 dbruhn some guys run their servers as clients, and some guys have them broken out.
16:19 ryand would be purely servers
16:19 dbruhn what kind of sustained write speeds do you need?
16:19 stupidnic 20gb bonded
16:20 * stupidnic waves at ryand
16:20 * ryand slaps stupidnic around a bit with a large trout
16:20 harish_ joined #gluster
16:20 ryand and there we go
16:20 stupidnic speeds is up in the air right now
16:20 ryand the obligatory trout
16:20 dbruhn well if you are generating 100GB video files, how many do you need to write a day?
16:21 stupidnic My understanding is that they will be mainly archival, but obviously we need to get a bit more information from the client
16:21 ryand clients providing insufficient details?
16:21 ryand never!
16:21 dbruhn lol
16:21 stupidnic I know, who would have guessed
16:21 dbruhn They are my favorite!
16:21 ryand it's time to overspec boys
16:21 ryand lol
16:22 dbruhn 56GB IPoverIB, and 960GB SSD drives ;)
16:22 stupidnic I joined a little late dbruhn, what were you saying about IO and self healing?
16:22 vpshastry1 joined #gluster
16:22 dbruhn I was
16:23 ryand that it's slow and io intensive
16:23 dbruhn I was saying I tend to like to break my bricks down to a manageable size, bit of a balancing act. All while utilizing HW raid
16:24 stupidnic Am I correct in my understanding that with the distributed access pattern, the max file size would be limited to the brick size?
16:24 ryand yeah you are
16:24 dbruhn well in my case it's slow, but my storage is purpose built for storage, so when I have to self heal an entire brick it's replication partner takes on three times the workload
16:24 ryand we covered that
16:24 stupidnic k
16:24 dbruhn using SATA I would over-provision your drives by 20%, not that you probably need to be told that
16:25 dbruhn Also, performance, a single stream on large file reads/writes is always going to be bound by the IO performance of the disk subsystem of the bricks the data is stored on
16:27 dbruhn Example, I have built my systems based on an IOPS requirement. I need each brick pair to be able to provide 1750 IOPS on my SAS system. Because I am dealing with smaller files I get to have two bricks divide that load up, so each brick in a replication pair is only providing 875 IOPS.
16:28 dbruhn In my case, with a down brick, the surviving brick now needs to do 2625 IOPS to keep up, hence why my self heals are slow.
16:28 ryand gotcha
16:28 ryand sounds like you need more ssd!
16:28 ryand haha
16:28 dbruhn dude... I have a 96 drive SSD system coming in next week...
16:28 dbruhn lol
16:29 ryand yeah, for another project of ours
16:29 dbruhn and another 16 drive SSD system too
16:29 ryand we have a POC going on with solidfire
16:29 * ryand drooool
16:29 dbruhn I have everything from 3TB SATA in to SSD in Gluster systems anymore.
16:30 dbruhn All of my systems are running over IB anymore though. I retired my 1GB stuff
16:30 dbruhn I have the luxury of having the only clients accessing the system being gluster servers themselves though
16:30 LoudNoises dbruhn: are you seeing over 20GB performance on those IBs?
16:31 dbruhn Honestly, my throughput needs aren't the driver for it. I needed lower latency.
16:31 LoudNoises ahh
16:31 dbruhn So I am running QDR infiniband for everything
16:31 dbruhn the latency differences between 1GB/10GB and IB are huge
16:31 LoudNoises yea the performance whitepaper that gluster had put out a while ago showed relatively little increase for IB over 10Gb
16:31 LoudNoises for throughput
16:32 dbruhn most of the time you are bound by the disk
16:32 LoudNoises we're entirely stream-based here
16:33 dbruhn my system has about 3.6 million files for every 1TB of data stored on it, and I have to go through and do hash consistency checks on the data from an application for data reliability purposes
16:33 dbruhn so lots of back and forth
16:36 LoudNoises yea ours is 100-300GB files that we do sequential reads and writes on
16:41 zerick joined #gluster
16:47 nage joined #gluster
16:47 nage joined #gluster
16:58 Matthaeus joined #gluster
17:02 hagarth joined #gluster
17:07 vpshastry1 joined #gluster
17:08 vpshastry1 left #gluster
17:12 Pavid7 joined #gluster
17:16 sputnik1_ joined #gluster
17:20 zaitcev joined #gluster
17:35 sputnik1_ joined #gluster
17:43 chirino joined #gluster
17:47 gmcwhist_ joined #gluster
17:59 Isaacabo joined #gluster
18:01 ricky-ticky joined #gluster
18:03 ricky-ticky2 joined #gluster
18:04 diegows joined #gluster
18:08 ndk joined #gluster
18:14 jmarley joined #gluster
18:14 jmarley joined #gluster
18:34 sputnik1_ joined #gluster
18:35 sputnik1_ joined #gluster
18:36 sputnik1_ joined #gluster
18:38 sputnik1_ joined #gluster
18:38 sputnik1_ joined #gluster
18:39 chirino joined #gluster
18:43 jclift__ joined #gluster
18:43 jclift__ left #gluster
18:54 nightwalk joined #gluster
18:54 sputnik1_ joined #gluster
18:54 Pavid7 joined #gluster
18:55 sprachgenerator joined #gluster
18:58 seapasulli joined #gluster
19:05 nightwalk joined #gluster
19:06 rpowell1 joined #gluster
19:07 rpowell2 joined #gluster
19:09 guillaum2 joined #gluster
19:10 seapasulli joined #gluster
19:11 rpowell joined #gluster
19:14 rpowell1 joined #gluster
19:16 lalatenduM joined #gluster
19:19 rpowell joined #gluster
19:22 vpshastry joined #gluster
19:25 vpshastry left #gluster
19:26 dbruhn gahh
19:26 dbruhn JoeJulian, you around?
19:27 dbruhn or maybe anyone else that has fought with a stalled out self heal repopulating a brick
19:31 JoeJulian dbruhn: just now... If I'm not satisfied with waiting for the self-heal daemon, I usually will just create a new mount of the volume on my desktop and do a find on that mount.
19:32 dbruhn JoeJulian, I have a "find . |xargs stat" running on the mount point
19:32 dbruhn still am not seeing any change in the bricks stats
19:33 dbruhn granted this volume is 28TB of data.... so it will take forever, but I thought I would get a hit eventually
19:33 JoeJulian Errors in the client mount? Background self-heals started? Failed? That kind of stuff.
19:34 dbruhn some complaints about self heal fails on a few directories from the clients logs, been working on those as I see them.
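
Besides the full crawl from a client mount, releases from 3.3 onward also expose the self-heal daemon directly; a short sketch, with <volname> as a placeholder:

    gluster volume heal <volname> info      # what each brick still thinks needs healing
    gluster volume heal <volname> full      # ask the daemon to crawl and heal everything
    # JoeJulian's approach from a fresh client mount, forcing a lookup on every path:
    find /mnt/<volname> -print0 | xargs -0 stat > /dev/null

The heal info output is the quickest way to confirm whether the stalled brick is actually receiving work or whether the crawl simply hasn't reached the changed subtrees yet.
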
19:41 rpowell1 joined #gluster
19:45 rpowell joined #gluster
20:03 sputnik1_ joined #gluster
20:07 zikos joined #gluster
20:11 sputnik1_ joined #gluster
20:17 sputnik1_ joined #gluster
20:28 seapasulli_ joined #gluster
20:31 jmarley joined #gluster
20:44 vpshastry joined #gluster
20:51 sputnik1_ joined #gluster
20:52 Pavid7 joined #gluster
20:53 gmcwhistler joined #gluster
20:55 andreask joined #gluster
20:56 rpowell1 joined #gluster
20:56 gmcwhist_ joined #gluster
20:56 rpowell2 joined #gluster
20:58 sputnik1_ joined #gluster
20:59 doekia joined #gluster
21:00 rpowell2 left #gluster
21:01 rpowell joined #gluster
21:03 rpowell1 joined #gluster
21:03 sputnik1_ joined #gluster
21:06 seapasulli joined #gluster
21:08 sputnik1_ joined #gluster
21:11 seapasulli joined #gluster
21:13 Isaacabo joined #gluster
21:24 jag3773 joined #gluster
21:38 zikos live streaming now : http://www.leglu.com/
21:40 chirino joined #gluster
21:41 rpowell joined #gluster
21:45 wushudoin joined #gluster
21:46 gmcwhistler joined #gluster
21:48 sputnik1_ joined #gluster
21:49 andreask joined #gluster
22:05 sputnik1_ joined #gluster
22:07 tryggvil joined #gluster
22:13 dbruhn aghh
22:13 dbruhn So I am having some weird issues with Infiniband on one of my servers, anyone have any experience.
22:13 dbruhn That might be around
22:13 sputnik1_ joined #gluster
22:14 dbruhn it feels like a gluster software issue but I can't confirm as of yet... logs aren't providing a lot of detail
22:43 chirino joined #gluster
22:50 sroy joined #gluster
22:51 sputnik1_ joined #gluster
22:53 Frankl joined #gluster
23:11 sputnik1_ joined #gluster
23:23 kam270 joined #gluster
23:31 kam270 joined #gluster
23:33 doekia joined #gluster
23:36 andreask joined #gluster
23:37 andreask joined #gluster
23:40 sputnik1_ joined #gluster
23:41 sputnik1_ joined #gluster
23:41 kam270 joined #gluster
23:45 tryggvil joined #gluster
23:46 primechuck joined #gluster
23:49 badone joined #gluster
23:50 kam270 joined #gluster
23:52 sputnik1_ joined #gluster
23:59 sputnik1_ joined #gluster
