
IRC log for #gluster, 2013-10-30


All times shown according to UTC.

Time Nick Message
00:05 xpinex joined #gluster
00:38 ira joined #gluster
01:05 asias joined #gluster
01:11 yinyin joined #gluster
01:20 davidbierce joined #gluster
01:25 Shdwdrgn joined #gluster
01:43 vpshastry joined #gluster
02:12 yinyin joined #gluster
02:17 krishna_ joined #gluster
02:26 krishna__ joined #gluster
02:36 nueces joined #gluster
02:53 mkarg_ joined #gluster
02:53 iRobdog joined #gluster
02:56 sgowda joined #gluster
03:00 rc10 joined #gluster
03:18 tg2 joined #gluster
03:19 bharata-rao joined #gluster
03:19 glusterbot New news from resolvedglusterbugs: [Bug 826032] glusterfsd crashed while performing "kill -HUP" on glusterfsd process in a loop <http://goo.gl/x4QXK> || [Bug 825084] glusterfsd crashed <http://goo.gl/ZP4VM> || [Bug 797735] glusterfsd crash <http://goo.gl/VtO4O>
03:22 shubhendu joined #gluster
03:25 tg2 joined #gluster
03:27 kshlm joined #gluster
03:28 kanagaraj joined #gluster
03:39 glusterbot New news from newglusterbugs: [Bug 1024600] glusterfsd does not release a file. <http://goo.gl/gRikFq>
03:39 itisravi joined #gluster
03:50 shylesh joined #gluster
03:51 psharma joined #gluster
03:54 yinyin joined #gluster
03:56 ajha joined #gluster
04:00 dusmant joined #gluster
04:02 mohankumar joined #gluster
04:05 ajha joined #gluster
04:19 shruti joined #gluster
04:20 rc10 joined #gluster
04:26 RameshN joined #gluster
04:28 raghu joined #gluster
04:29 ababu joined #gluster
04:30 rjoseph joined #gluster
04:36 vpshastry joined #gluster
04:40 ndarshan joined #gluster
04:42 andreask joined #gluster
04:47 meghanam joined #gluster
04:47 meghanam_ joined #gluster
04:56 shireesh joined #gluster
05:00 nshaikh joined #gluster
05:09 Cenbe joined #gluster
05:17 rastar joined #gluster
05:18 rc10 joined #gluster
05:19 glusterbot New news from resolvedglusterbugs: [Bug 1024600] glusterfsd does not release a file. <http://goo.gl/gRikFq>
05:21 psharma joined #gluster
05:26 hateya joined #gluster
05:28 CheRi joined #gluster
05:29 gunthaa__ joined #gluster
05:32 bulde joined #gluster
05:33 ppai joined #gluster
05:39 hagarth joined #gluster
05:39 glusterbot New news from newglusterbugs: [Bug 1021998] nfs mount via symbolic link does not work <http://goo.gl/H3C8W2>
05:43 lalatenduM joined #gluster
05:43 satheesh1 joined #gluster
06:02 kshlm joined #gluster
06:16 ndarshan joined #gluster
06:35 itisravi_ joined #gluster
06:38 itisravi joined #gluster
06:46 vimal joined #gluster
06:51 mohankumar joined #gluster
06:52 kshlm joined #gluster
06:52 psharma joined #gluster
06:54 harish_ joined #gluster
06:57 raghu joined #gluster
07:06 pkoro joined #gluster
07:07 ricky-ticky joined #gluster
07:10 sgowda joined #gluster
07:24 rastar joined #gluster
07:30 jtux joined #gluster
07:30 sgowda joined #gluster
07:37 ngoswami joined #gluster
07:43 psharma joined #gluster
07:56 edain joined #gluster
07:56 ctria joined #gluster
08:02 eseyman joined #gluster
08:02 nueces joined #gluster
08:03 rastar joined #gluster
08:08 gluster-favorite joined #gluster
08:09 dneary joined #gluster
08:13 Skaag joined #gluster
08:14 mgebbe joined #gluster
08:15 gluster-favorite Hi, guys! I have a problem (feature or bug?). When my client connects to a Gluster share (3.3.2) over NFS, sometimes, when the client is under more or less production load, the share on the client freezes when I run df -h. Does anybody know what to do about this? Can I tune the NFS connection to the Gluster share on the server side?
08:25 harish_ joined #gluster
08:26 keytab joined #gluster
08:31 Dga joined #gluster
08:34 aravindavk joined #gluster
08:34 abyss^ my glusterfs process takes 68% of memory, but if I'm reading the top and profile commands correctly, gluster is doing nothing. Any ideas? How can I check what's happening?
08:34 abyss^ (gluster 3.3.1)
08:37 ricky-ticky joined #gluster
08:41 rastar joined #gluster
08:54 dusmant joined #gluster
08:54 ababu joined #gluster
08:56 abyss^ strace shows only epoll_wait, so it's ok, it's doing nothing and waiting for i/o
08:57 kanagaraj joined #gluster
09:07 gluster-favorite Hi, guys! I have a problem (feature or bug?). When my client connects to a Gluster share (3.3.2) over NFS, sometimes, when the client is under more or less production load, the share on the client freezes when I run df -h. Does anybody know what to do about this? Can I tune the NFS connection to the Gluster share on the server side
09:08 bala joined #gluster
09:27 hagarth gluster-favorite: it might be good to open a thread on gluster-users for this
09:27 RedShift joined #gluster
09:28 sgowda joined #gluster
09:28 RameshN joined #gluster
09:35 gluster-favorite <hagarth> How can I do this? I can't find anything like that in the tuning options.
09:35 rjoseph joined #gluster
09:37 sgowda joined #gluster
09:40 danci1973 Hi... How can I tell if my volume mount is actually using RDMA ? I used the '-o transport=rdma' and it mounted, but IIRC it was doing the same when my volume wasn't created with 'tcp,rdma' - probably silently ignored lack of RDMA and used TCP?
09:40 glusterbot New news from newglusterbugs: [Bug 1024695] [RFE] Provide RPC throttling stats <http://goo.gl/JbnAsY>
09:44 kanagaraj joined #gluster
09:44 aravindavk joined #gluster
09:50 ababu joined #gluster
09:51 danci1973 I guess if 'iftop' sees TCP traffic on 'ib0' then it's not using RDMA.... :(
09:58 bulde joined #gluster
10:00 ninkotech joined #gluster
10:02 dusmant joined #gluster
10:07 harish_ joined #gluster
10:10 DV__ joined #gluster
10:13 an joined #gluster
10:14 gluster-favorite Hi, guys! I have a problem (feature or bug?). When my client connects to a Gluster share (3.3.2) over NFS, sometimes, when the client is under more or less production load, the share on the client freezes when I run df -h. Does anybody know what to do about this? Can I tune the NFS connection to the Gluster share on the server side
10:14 gluster-favorite ?
10:15 vpshastry1 joined #gluster
10:21 ninkotech joined #gluster
10:21 ninkotech__ joined #gluster
10:22 abyss^ ok, I restarted gluster and now it's at 0% memory use ;) But another question: I created a replicated volume. Now when I turn off the first glusterfs on node1, my mount point gets unmounted... Why? I thought that when I turn off the first glusterfs the mount point would stay mounted, because fuse would take info about the files from the second glusterfs...
10:23 abyss^ but it doesn't
10:24 abyss^ Can someone shed some light on this? BTW: sorry for my English, I do what I can to make my sentences readable ;)
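An aside on abyss^'s question, as a minimal sketch assuming a plain replica-2 setup: an established FUSE mount normally keeps working against the surviving brick once network.ping-timeout expires; the server named at mount time only matters for fetching the volfile. Hostnames below are placeholders (on 3.3.x the fallback option is spelled backupvolfile-server):

    # mount against node1, but fall back to node2 for the initial volfile fetch
    mount -t glusterfs -o backupvolfile-server=node2 node1:/myvol /mnt/myvol

If the mount still drops when node1 is stopped, the client log under /var/log/glusterfs/ usually says why.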
10:28 sgowda joined #gluster
10:31 bharata-rao joined #gluster
10:37 badone_gone joined #gluster
10:38 bala joined #gluster
10:53 rastar joined #gluster
10:56 bala joined #gluster
10:57 gluster-favorite Hi, guys! I have a problem (feature or bug?). When my client connects to a Gluster share (3.3.2) over NFS, sometimes, when the client is under more or less production load, the share on the client freezes when I run df -h. Does anybody know what to do about this? Can I tune the NFS connection to the Gluster share on the server side?
11:04 sgowda joined #gluster
11:08 ninkotech__ joined #gluster
11:08 ninkotech joined #gluster
11:09 harish_ joined #gluster
11:11 glusterbot New news from newglusterbugs: [Bug 1023636] Inconsistent UUID's not causing an error that would stop the system <http://goo.gl/MAHmr8> || [Bug 1023667] The Python libgfapi API needs more fops <http://goo.gl/vxi0Zq>
11:13 rjoseph joined #gluster
11:14 mbukatov joined #gluster
11:15 kkeithley1 joined #gluster
11:22 CheRi joined #gluster
11:25 ppai joined #gluster
11:26 pkoro joined #gluster
11:28 rastar joined #gluster
11:31 manik joined #gluster
11:35 jordi1 joined #gluster
11:42 diegows_ joined #gluster
12:04 CheRi joined #gluster
12:08 rcheleguini joined #gluster
12:10 hagarth joined #gluster
12:10 bulde joined #gluster
12:24 dusmant joined #gluster
12:26 ira joined #gluster
12:39 bennyturns joined #gluster
12:44 edward2 joined #gluster
12:48 geewiz_ joined #gluster
12:51 B21956 joined #gluster
12:54 vpshastry1 joined #gluster
13:00 popovda joined #gluster
13:02 harish_ joined #gluster
13:16 bulde joined #gluster
13:21 dblack joined #gluster
13:21 dewey joined #gluster
13:33 giannello joined #gluster
13:35 aixsyd joined #gluster
13:36 onny joined #gluster
13:39 aixsyd heya gents - i'm planning out a gluster storage map, and I was wondering if anyone could give me some pointers, or help, or point out anything I'm missing or neglecting
13:40 aixsyd http://i.imgur.com/IxzUCAn.png
13:40 RedShift no redundant storage switch?
13:40 aixsyd due to cost, probably not.
13:41 RedShift I would consider the inter storage nodes connections to be the most important
13:41 aixsyd the infiniband connections?
13:41 RedShift if those fail you get lengthy recovery times because all data needs to be verified
13:42 aixsyd Volumes #1 and #2&3 are going to be two different Clusters.
13:42 RedShift + I've had kernel panics when infiniband links go down
13:42 RedShift (on CentOS 6.4)
13:43 samppah aixsyd: what software you used to draw that?
13:43 RedShift it was easily reproduced stopping the infiniband software, I don't recall the actual service, openib I think?
13:43 ndk joined #gluster
13:43 aixsyd Gliffy :)
13:43 aixsyd RedShift: would it panic on both nodes?
13:44 RedShift no, but what if your good node panics? :-)
13:45 aixsyd Trying to think of when/how that'd happen.. so say node #2 goes down, poweroff - the link is down at that point. why would it panic?
13:45 aixsyd on node #1
13:45 RedShift why? because there's a bug in the ipoib kernel module...
13:46 aixsyd for Cent
13:46 RedShift I didn't take the time to investigate it (but it was easily reproduced) because we went with other storage means
13:46 aixsyd Since my bare metal will be Proxmox (Debian) - think it's still an issue?
13:46 RedShift I can't tell, you'll have to try
13:47 chirino joined #gluster
13:48 aixsyd hmm.
13:48 aixsyd Think I'm dumb for making the actual gluster nodes virtual inside proxmox?
13:48 aixsyd My goal with that is both backup of the OS disk via Snapshot - restore it quickly if needed, as well as template the OS before installing Gluster to add a new node for new storage quickly
13:49 RedShift no
13:49 aixsyd aka, new bricks
13:49 RedShift if I were to build a new gluster setup, I would virtualize them too
13:49 aixsyd okay, good. awesome.
13:49 aixsyd I really wanna attempt to use oVirt for cluster management
13:50 aixsyd i'm not the only person that may need to admin this, and my boss is CLI deaf.
13:50 RedShift hmm I don't know if snapshots are a good idea. I think they are if you trigger a self-heal for all the files as soon as possible
13:50 RedShift maybe ask that question on the mailing list
13:50 aixsyd snapshot would only be fore the base OS - not the data inside the brick
13:51 aixsyd the VM would have an OS virtual disk and a data(brick) virtual disk
13:51 aixsyd snapshot the OS, no backup of the brick.
13:52 RedShift should be ok as long as no configuration changes occur on the other nodes
13:52 aixsyd the cluster is the brick backup - plus my backup system would be doing the actual backup of files ON the volume: http://i.imgur.com/2baGDbm.jpg
13:52 harish_ joined #gluster
13:52 aixsyd that graphic was when I was still going to use DRBD, but its essentially the same
13:53 RedShift if you're spending all that money on storage bricks, why not get an extra switch?
13:53 DV__ joined #gluster
13:55 ccha hello, gluster-server 3.4.1 doesn't support glusterfs-client 3.3.2 ?
13:55 aixsyd Well, the budget has two parts - one is the base hardware and the other is the media/HDD's
13:56 aixsyd The only way to sell this to management is to have a low hardware cost - I can do this hardware wise for about $1000, sans hard drives. They dont really care how much the drives cost. They're retarded
13:57 aixsyd I wanted an infiniband switch. but $2500 used... no way
13:57 RedShift switches are cheap...
13:58 RedShift a cisco 2960S-24TS-S is... I don't know 600 euro or something?
13:58 dbruhn Did I see that you were doing all 1GB?
13:59 aixsyd with LACP trunking, yes
13:59 dbruhn and are you planning on DHT volumes, or DHT+AFR?
14:00 dbruhn I just picked up a used cisco 4948 for $400 on ebay not to long ago
14:00 dbruhn and my 20GB IB switch used was $350 for 24 ports of DDR IB
14:00 ccha from a server with glusterfs-client 3.3.2 I can't mount a glusterfs 3.4.1 volume
14:00 dbruhn cards were cheap enough to, I found a stack of DDR ones used for $25 each
14:01 dbruhn ccha, are you using 3.4.1 specific functions?
14:01 ccha dbruhn: I'don't think so
14:01 RedShift I have some 10 GBit dual port IB cards lying around as well
14:01 ccha I just created a replica 2 volume
14:01 aixsyd dbruhn: DHT?
14:02 dbruhn aixsyd, distributed hash table
14:02 aixsyd Never came across that lingo in the training docs..
14:02 aixsyd so i'm not sure?
14:03 dbruhn when you created your volume did you create a replication only volume?
14:04 aixsyd dbruhn: havent created anything yet, thankfully :P
14:04 aixsyd what would you recommend?
14:04 dbruhn sorry a little late to the conversation here, what are you trying to accomplish
14:05 dbruhn I see two storage clusters
14:05 dbruhn one storing qemu disk images?
14:05 aixsyd Essentially, i'm looking for HA and failover. I currently have two QNAP servers, ones a basic CIFS file server, ones an iSCSI/NFS VM storage server
14:05 aixsyd we essentially want a network RAID1 of each server
14:06 dbruhn ahh
14:06 aixsyd in the event one server goes completely down (for whatever reason) we dont lose downtime
14:07 aixsyd problem is, however, we have a very limited budget.
14:07 dbruhn So you are mounting these storage device to the gluster servers and re-serving them
14:08 dbruhn How much storage do you need?
14:08 RedShift well not to kick in any windows here, but have you thought about other solutions like an HP P2000 or something? those have redundant controllers
14:08 aixsyd 4TB useable on each "server"
14:08 bugs_ joined #gluster
14:08 dbruhn Redshift has a point
14:08 RedShift there's a whole range of entry level redundant SAN boxes like the EMC VNXe
14:09 aixsyd doesnt that have a single, common point of failure, however?
14:09 dbruhn the only other thing I can think of is getting you in touch with a grey market/used server sales person and finding you some servers. I bought 10 dual proc 4x core servers with 600gb sas used for $230 each
14:09 RedShift yes, the backplane is not redundant, but you have to consider other variables, like complexity and RTO
14:09 dbruhn you could probably get into something similar with bigger SATA drives
14:10 aixsyd dbruhn: we can get the physical boxes for pretty cheap. Dual Quad Core Xeon 2.5ghz w/ 16GB ram for about $150 each
14:10 dbruhn if you are dead set on Gluster as a storage solution
14:10 dbruhn Ditch the qnap's then
14:10 RedShift plus the backplanes are simple and I've never seen them fail before, but granted, it is a SPOF
14:11 dbruhn and build a 1GB Gluster Cluster, or a 20GB DDR IB cluster
14:11 dbruhn and just have a second IB switch on hand
14:11 aixsyd IB switches are too costly for the budget. one alone would be the entire project budget
14:11 dbruhn just make a distributed volume with replication, and you'll be able to lose a node whenever
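For reference, a distributed-replicated volume of the kind dbruhn describes is created by listing bricks in replica-set order; hostnames and brick paths below are placeholders:

    # replica 2 across four bricks: (node1,node2) and (node3,node4) form the two replica pairs
    gluster volume create myvol replica 2 transport tcp \
        node1:/bricks/b1 node2:/bricks/b1 \
        node3:/bricks/b1 node4:/bricks/b1
    gluster volume start myvol

With replica 2, either node of a pair can go down without taking the volume offline.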
14:12 dbruhn a used 20GB IB switch without built in subnetmanager is like $200/$250
14:12 dbruhn one with a subnet manager is like $300/400
14:12 dbruhn you said your budget's 1K
14:12 aixsyd really? link/model?
14:12 aixsyd i didnt see any for sub $2500
14:13 dbruhn http://www.ebay.com/itm/Voltaire-ISR-9024-24-Port-Infiniband-Switch-/121186178707?pt=US_Network_Switches&hash=item1c37425693
14:13 glusterbot <http://goo.gl/bbAx2z> (at www.ebay.com)
14:13 dbruhn http://www.ebay.com/itm/Voltaire-Mellanox-ISR9024D-M-Internally-Managed-Infiniband-Switch-ISR-9024D-/161100156077?pt=LH_DefaultDomain_0&hash=item2582514cad
14:13 glusterbot <http://goo.gl/QsG1mU> (at www.ebay.com)
14:13 aixsyd oh shit..
14:13 dbruhn http://www.ebay.com/itm/Voltaire-Grid-Switch-ISR-9024-10Gbps-InfiniBand-ISR-9024S-Novia-Networks-/161134888513?pt=LH_DefaultDomain_0&hash=item2584634641
14:13 glusterbot <http://goo.gl/rwEXVz> (at www.ebay.com)
14:14 dbruhn but, something to keep in mind, you are going to want to run IPoverIB if you do Gluster 3.4
14:14 dbruhn for the QEMU support
14:14 aixsyd right right
14:14 aixsyd transport would be RDMA, ya?
14:14 dbruhn or you are going to want to run 3.3.2 for the IB support and then you lose the QEMU support
14:14 danci1973 About a year ago we bought two 24-port 4x DDR switches for GBP250 each (Flextronics F-X430046)
14:14 dbruhn RDMA is not functional in the 3.4 tree yet
14:14 aixsyd isnt it? D:
14:15 aixsyd nooooooooooooooooooo
14:15 aixsyd Proxmox only supports 3.4
14:15 dbruhn hahaha
14:15 aixsyd nooooooooooooooooooooooooo​ooooooooooooooooooooooooo
14:15 dbruhn well you can be like the rest of us and try and figure it out, but there are probably a good 5 of us in the room regularly waiting on it as well
14:16 * danci1973 is currently trying to get RDMA to work...
14:16 aixsyd what if it just served NFS instead of native glusterfs?
14:16 dbruhn danci1973, need any help? I have some test systems that I can run it on
14:17 danci1973 dbruhn: Yeah, I need help.
14:17 dbruhn aixsyd, I have never used the NFS portion of gluster. I need the read speed advantage of the fuse client
14:17 dbruhn What can I do to help?
14:17 aixsyd dbruhn: so would we :'(
14:18 dbruhn I'll be honest your drive counts lead me to believe you don't really need speed.
14:18 danci1973 dbruhn: Well... Using 'rping' and other tools from OFED I checked RDMA and it basically works... But how can I see if my Gluster is actually using RDMA ?
14:18 wushudoin joined #gluster
14:18 aixsyd what sort of speed differential is there between FUSE and NFS?
14:19 danci1973 dbruhn: I did 'mount.glusterfs -o transport=rdma ....' and it mounts, but then when I use the mount, I can see a lot of TCP/IP traffic on my 'ib0'. If it was using RDMA, I wouldn't see that, right?
14:19 dbruhn danci1973, you can explicitly tell the client to mount using RDMA like this
14:19 dbruhn mount -t glusterfs ENTSNV04001EP:/ENTV04EP.rdma /mnt/ENTV04EP
14:19 dbruhn notice the .rdma
14:20 danci1973 dbruhn: Yes, that's what the '-o transport=rdma' does too.
14:20 danci1973 dbruhn: This is from 'mount': gluster1:vm_store.rdma on /mnt type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
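A rough way to check which transport is actually in play, using the volume and mount point from danci1973's example: the volume has to have been created with rdma (or tcp,rdma) in the first place, which gluster volume info shows, and the client log records what it negotiated.

    # Transport-type should list rdma or tcp,rdma for RDMA to be possible at all
    gluster volume info vm_store | grep -i transport

    # the client log is named after the mount point, e.g. /var/log/glusterfs/mnt.log for /mnt
    grep -i rdma /var/log/glusterfs/mnt.log

Seeing bulk TCP traffic on ib0 in iftop, as above, is indeed a reasonable sign the data path is IPoIB rather than RDMA.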
14:20 dbruhn danci1973, RDMA is not working in 3.4.1, not sure if it's falling back
14:21 aixsyd dbruhn: I can still run tcp over IB w/ 3.4.1, yes?
14:21 dbruhn yep
14:21 danci1973 dbruhn: Did it work on older versions?
14:21 danci1973 dbruhn: I mean - did it break in 3.4.1 or it's not working - yet ? :)
14:22 dbruhn jclift told me it's broke
14:22 dbruhn I haven't had time to test it
14:22 danci1973 dbruhn: 3.4.0 works?
14:23 dbruhn I don't think 3.4.x works at all, but like I said I haven't been able to test at all.
14:24 danci1973 dbruhn: I see... Anyway - are there significant performance improvements vs. TCP?
14:24 danci1973 dbruhn: Or 'should there be'?
14:24 dbruhn RDMA has less overhead and better latency
14:25 dbruhn I have not tested IP over IB to compare to RDMA
14:25 aixsyd but its still worth running as opposed to gbe, yes?
14:25 dbruhn but 10GBe vs IB with RDMA is a huge performance improvement
14:25 lalatenduM joined #gluster
14:26 dbruhn well fasters always better... right?
14:26 aixsyd :P
14:26 danci1973 Cause right now I have two servers that have several internal drives with a HW RAID controller and locally Bonnie++ reports 260MB/s write and 600MB/s read...  Running bonnie on a Gluster client through ipoib, I get 16MB/s write and 36MB/s read...
14:26 aixsyd =o
14:27 gluster-favorite Hi, guys! I have a problem (feature or bug?). When my client connects to a Gluster share (3.3.2) over NFS, sometimes, when the client is under more or less production load, the share on the client freezes when I run df -h. Does anybody know what to do about this? Can I tune the NFS connection to the Gluster share on the server side?
14:29 aixsyd dbruhn: heres a good question for you - whats the upgrade path look like for gluster? Say i get all up and running with 3.4.1 - and say v4 comes out (lol) - what sort of downtime is there for upgrading? does anything have to be reconfigured on the server side?
14:30 X3NQ joined #gluster
14:30 lalatenduM_ joined #gluster
14:30 RedShift you can just use IPoIB... no need for IB support in gluster
14:30 dbruhn in 3.4 they have built-in backward compatibility for upgrades, so the older client is supposed to be able to connect to the newer server, etc. Not sure if it's working, but in theory you should be able to do in-place upgrades. I personally will still schedule downtime until that's more proven and reliable.
14:31 aixsyd good call
14:31 aixsyd RedShift: whew.
14:32 danci1973 left #gluster
14:32 danci1973 joined #gluster
14:32 abyss^ I have created a replicated volume. Now when I turn off the first glusterfs on node1, my mount point gets unmounted... Why? I thought that when I turn off the first glusterfs the mount point would stay mounted, because fuse would take info about the files from the second glusterfs...
14:32 RedShift it works pretty good, I managed 15 gbps with iperf
14:32 aixsyd what type of card?
14:32 aixsyd DDR/QDR?
14:32 RedShift 4x DDR with CX4 connectors
14:32 aixsyd nice
14:33 RedShift that's 16 gbps on paper so 15 gbps with IP & TCP overhead I'd say... pretty good!
14:33 aixsyd not bad..
14:35 RedShift let me look up that part
14:35 RedShift http://www.benl.ebay.be/itm/360657396651?ssPageName=STRK:MEWNX:IT&_trksid=p3984.m1439.l2649
14:35 glusterbot <http://goo.gl/692kfc> (at www.benl.ebay.be)
14:36 lpabon joined #gluster
14:38 aixsyd RedShift: my god, that is cheap
14:38 aixsyd i want to know why IB is not more commonplace
14:38 RedShift where are you from?
14:38 aixsyd US
14:38 RedShift well it is second hand... and IB is pretty common in clusters
14:39 RedShift I have two of them but I'm not going to ship to the US >_<
14:39 aixsyd :P nah, thats fine
14:39 aixsyd theres equally priced ones in the US
14:40 aixsyd just looking at the price of 10GBE & 10GB fiber VS the price of IB equipment... even outside of clusters - why not run IB?
14:43 aixsyd anyone here use ovirt to manage gluster clusters?
14:46 * danci1973 wonders why he's getting only 6 Gbps through IPoIB... :(
14:46 RedShift danci1973 you'll need to tune the kernel TCP buffers and I set the MTU to 32768 on the link
14:46 dbruhn To get an IB switch that can bridge to ethernet is expensive, so you need to put some sort of gateway or bridge in place
14:47 dbruhn on the IB vs ethernet question
14:48 danci1973 RedShift: I have set /sys/class/net/ib0/mode to 'connected' and MTU is set to 65520 .
14:48 danci1973 RedShift: Not sure about kernel TCP buffers, though...
14:48 RedShift ok now tune the buffers
14:49 RedShift http://landley.net/kdocs/ols/2009/ols2009-pages-169-184.pdf
14:49 glusterbot <http://goo.gl/Scno1Q> (at landley.net)
14:50 Guest54292 joined #gluster
14:50 RedShift try these numbers: http://dak1n1.com/blog/7-performance-tuning-intel-10gbe
14:50 glusterbot <http://goo.gl/0zAAOh> (at dak1n1.com)
14:50 RedShift (method 2 sysctl)
14:51 RedShift if you want more tuning parameters, just google linux tune 10 gb or something
14:51 RedShift most instructions apply to IB cards as well, plus tuning the TCP stack is independent of the underlying transport being used
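As an illustrative sketch of the tuning being discussed (the numbers are starting points to experiment with, not a recommendation):

    # larger socket buffers for a high-bandwidth link (IPoIB or 10GbE alike)
    sysctl -w net.core.rmem_max=16777216
    sysctl -w net.core.wmem_max=16777216
    sysctl -w net.ipv4.tcp_rmem='4096 87380 16777216'
    sysctl -w net.ipv4.tcp_wmem='4096 65536 16777216'

    # connected mode and a large MTU on the IPoIB interface
    echo connected > /sys/class/net/ib0/mode
    ip link set ib0 mtu 65520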
14:53 manik joined #gluster
14:53 danci1973 RedShift: No change with those numbers...
14:54 RedShift hmm, what hardware are you running on?
14:54 danci1973 RedShift: You mean IB or general?
14:54 RedShift both
14:54 RedShift my processor was being maxed out by then, at those speeds, raw processing power matters
14:56 ccha I created a replicated volume on glusterfs 3.4.1
14:56 danci1973 RedShift: Both servers are equal: Intel Xeon E5520 (quad core @2,27GHz), Intel S5520UR server board, Voltaire HCA 4X0 (Mellanox MT25208) IB host adapter, connected directly
14:57 ccha but when I mount this volume from glusterfs client 3.3.2, I got this: (1 -> 1) doesn't support required op-version (2). Rejecting volfile request.
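That error means the 3.3.2 client only speaks op-version 1 while the 3.4.1 server requires op-version 2 for this volume, so the volfile request is refused; the usual fix is to bring the client up to a matching 3.4.x release. Confirming what each side runs is straightforward:

    # on the client and on the server respectively
    glusterfs --version
    glusterd --version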
14:57 mohankumar joined #gluster
14:57 RedShift danci1973 what OS?
14:57 RedShift my processor was a X5570 (2,8 GHz) on both sides
14:58 ira joined #gluster
14:58 danci1973 RedShift: OpenSuSE 12.3 for now. Compiled stuff from OFED 3.5 myself
14:58 RedShift hmm I use CentOS which uses a different IB stack
14:58 danci1973 RedShift: Apart from what's already provided with the kernel.
14:58 RedShift maybe that's the difference
14:58 danci1973 RedShift: Different IB stack??
14:59 RedShift I used opensm
14:59 danci1973 So do I.
14:59 RedShift then... hmm
14:59 danci1973 I'm in such an early phase I can try out CentOS as well... :)
14:59 RedShift heh why not
14:59 RedShift takes two minutes to install...
15:01 dbruhn I am on red hat 6.4 and my stuff with IB seems fine
15:01 dbruhn the IB itself is actually fairly stable
15:03 danci1973 Hmm... I just ran 'ibv_devinfo -v' and noticed this: active_speed:           2.5 Gbps (1), active_width:      4x (2)
15:06 jclift joined #gluster
15:06 danci1973 Does that mean this card is SDR and not DDR ??
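For what it's worth, those numbers work out to an SDR link: 2.5 Gb/s per lane x 4 lanes = 10 Gb/s signalling, roughly 8 Gb/s of data after 8b/10b encoding, which also lines up with seeing only ~6 Gb/s over IPoIB. A DDR link would report an active_speed of 5.0 Gbps per lane (20 Gb/s signalling, 16 Gb/s data at 4x). Whether the card itself is SDR or the link merely negotiated down is something ibv_devinfo alone doesn't settle.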
15:11 harish_ joined #gluster
15:12 rjoseph joined #gluster
15:18 verdurin_ joined #gluster
15:18 P0w3r3d joined #gluster
15:19 DV__ joined #gluster
15:19 lpabon_ joined #gluster
15:21 aixsyd besides ovirt - are there any other GlusterFS Gui managers?
15:21 zerick joined #gluster
15:21 aixsyd and can anyone tell me why the heck the Gluster Storage Platform became discontinued? It looked so good in the Youtube videos...
15:28 jclift left #gluster
15:34 dbruhn Why do you need a guy manager? the stuff isn't hard to work with
15:34 dbruhn gui
15:42 glusterbot New news from newglusterbugs: [Bug 921215] Cannot create volumes with a . in the name <http://goo.gl/adxIy>
15:45 Remco dbruhn: For people that need to manage gluster, but are not comfortable with the CLI
15:49 social any gluster dev around?
15:49 dbruhn aixsyd, the storage platform UI i believe was broken when the cluster command line interface was introduced, I also believe that was right around when red hat purchased gluster.
15:49 aixsyd :'(
15:49 social I need some pointers, for example *ptr
15:49 dbruhn What functionality do you want to give your boss that you need the UI for?
15:49 aixsyd dbruhn: its not that its hard to work with - my manager may need to do stuff with it - and hes CLI-deaf
15:50 aixsyd anything.
15:50 aixsyd he can barely map a windows network drive
15:50 aixsyd yet i gotta give him keys to the Ferrari
15:50 aixsyd >.<
15:50 dbruhn Do you expect him to trouble shoot the system at any point in time?
15:51 aixsyd i hope to god not, but possibly
15:51 dbruhn It sounds like you have a guy who shouldn't be managing your storage that you are worried can't manage your storage
15:51 dbruhn managing your storage
15:51 social I have oomkill killing glusterd so there is some memleak, I correlated memleak with some log messages from 2 functions being called in loop and I'd like to find out from where they could be called in loop :/
15:51 aixsyd dbruhn: yep.
15:52 dbruhn I usually just make simple how to guides that are step by step for people like that, and then offer to do it for them if they are not comfortable.
16:03 ababu joined #gluster
16:30 bennyturns joined #gluster
16:31 Mo__ joined #gluster
16:31 bharata-rao joined #gluster
16:33 manik joined #gluster
16:52 rotbeard joined #gluster
16:57 vpshastry joined #gluster
17:03 B21956 joined #gluster
17:03 B21956 left #gluster
17:11 hflai joined #gluster
17:14 pdrakeweb joined #gluster
17:25 kkeithley_ aixsyd: if by Gluster Storage Platform you mean the old GUI, it wasn't so much that it was killed; a strategic decision was made to build gluster management into a unified GUI, and the RHEVM/oVirt console was much more mature. Back when I was at EMC the biggest complaint we had was that every product had a different management interface. We're trying not to make that same mistake.
17:25 kkeithley_ social: what's your oomkiller loop?
17:25 aixsyd kkeithley_: understood.
17:26 aixsyd though, it would be nice to have a GUI add-on if wanted thats officially supported
17:27 aixsyd now that the CLI has been unified, and wont change dramatically, i'd think a new gui could be made, or the old one ported, no?
17:27 kkeithley_ Well, here in the community, people are free to write something. If you do it on forge.gluster.org and it's popular enough, who knows, it might turn into something that's supported.
17:28 aixsyd i'll have something enterprise ready in about an hour ^_^
17:28 kkeithley_ ;-)
17:28 aixsyd :P
17:30 bulde joined #gluster
17:30 social kkeithley_: I have glusterd being killed by oom-killer and last thing I can see in logs is ton of E [glusterd-op-sm.c:5261:glusterd_op_sm] 0-management: handler returned: -1 and E [glusterd-utils.c:362:glusterd_unlock] 0-management: Cluster lock not held!
17:31 social kkeithley_: I'd like to find out when these are called (probably gluster volume status/gluster volume heal?)
17:36 failshell joined #gluster
17:43 dustin2 joined #gluster
17:45 nhm joined #gluster
17:48 aixsyd if rdma doesnt work in 3.4.1, why is there a glusterfs-rdma-3.4.1 rpm package? o.O
17:48 ira joined #gluster
17:48 kkeithley_ social: let me look.
17:48 DV__ joined #gluster
17:50 social kkeithley_: note gluster 3.4.0
17:51 aixsyd dbruhn: http://pastebin.com/T7wqkvcd  Any idea why on FC19, glusterd fails to start? fresh install
17:51 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
17:52 hagarth joined #gluster
17:52 aixsyd http://fpaste.org/50536/13831555/
17:52 glusterbot Title: #50536 Fedora Project Pastebin (at fpaste.org)
17:57 diegows_ joined #gluster
17:58 Kodiak1 joined #gluster
18:00 aixsyd this is insane, its happening on two different VM's.. the glusterd service wont start. keeps failing WAT
18:01 Kodiak1 Question: I've pored over the roadmaps for 3.5, I don't see strong authentication as even a "nice to have" possible feature.  For 3.4 it at least made the list of "nice to have" in the form of Kerberos.  Is anyone on the project listening on this channel that might be able to let me know if there is a status, or a prayer of this happening in Gluster 3.5 or 3.6 or any time in the reasonable future?  I'd super like to replace OpenAFS w/ Gluster!
18:01 vpshastry left #gluster
18:04 kkeithley_ social: I don't see any explicit looping that would result in anything ultimately making that log entry. Are we talking about hundreds of log entries?
18:06 dbruhn aixsyd, did you check your logs? /var/log/glusterfs
18:06 dbruhn there are a couple different logs in there to look at
18:07 aixsyd sec
18:11 kkeithley_ johnmark: ^^^ see Kodiak1's comment. I'd certainly think our "community governance" would dictate that features that missed in 3.x would get pushed into 3.x+1.
18:14 paulyd joined #gluster
18:15 kkeithley_ Not that that would guarantee they won't slip again.
18:17 Kodiak1 Hey thanks Kaleb.  I'll be watching for GSSAPI support closely since we *really* want to stop managing OpenAFS at some point and Gluster at face seems closest to a reasonable successor.
18:25 paulyd hooks in /var/lib/glusterd/hooks/1 seem to be picked up during gluster volume lifecycles. are you supposed to be able to create a /var/lib/glusterd/hooks/2 ?
18:26 kkeithley_ social: if you can bump up the log-level we should see more log entries from the callers and get better visibility into what's looping.
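A sketch of how that log-level bump usually looks (volume name is a placeholder; for social's case the management daemon itself is the interesting one):

    # per-volume brick and client log levels
    gluster volume set myvol diagnostics.brick-log-level DEBUG
    gluster volume set myvol diagnostics.client-log-level DEBUG

    # glusterd itself: restart it with a higher log level, or --debug to keep it in the foreground
    glusterd --log-level=DEBUG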
18:34 aixsyd dbruhn: http://fpaste.org/50552/13831580/
18:34 Skaag joined #gluster
18:34 glusterbot Title: #50552 Fedora Project Pastebin (at fpaste.org)
18:34 aixsyd WAT.
18:42 kkeithley_ aixsyd: offhand I'd guess that selinux is not allowing glusterd to create its listener socket. As an experiment you could try editing the selinux config (in /etc/selinux/config) and changing it to permissive or disabled. You did say this was on an f19 box, right?
18:42 dbruhn aixsyd, do you have iptables and selinux enabled?
18:43 kkeithley_ FYI, f19 uses firewalld, not iptables
18:44 dbruhn oh neat, more things to learn
18:44 kkeithley_ tell me about it
18:44 kkeithley_ :-(
18:44 dbruhn kkeithley, have you ever seen an issue where files will show up in the filesystem twice, but have the same inode?
18:45 kkeithley_ nope, haven't seen that
18:46 dbruhn http://fpaste.org/50555/58785138/
18:46 glusterbot Title: #50555 Fedora Project Pastebin (at fpaste.org)
18:46 dbruhn this is the second time I've seen this happen
18:46 dbruhn creates hell on my application stack on top of it all
18:53 daMaestro joined #gluster
18:54 semiosis dbruhn: usually that's called a hard link
18:54 dbruhn if I unmount and remount the storage it will usually go away
18:55 dbruhn it's in the gluster mount, not on the file system underneath
18:56 semiosis now it sounds less like a hard link
18:56 kkeithley_ dbruhn: this on a RHEL (or CentOS) box? What kernel? What's the brick file system?
18:57 dbruhn Linux ENTSNV04001EP 2.6.32-358.23.2.el6.x86_64 #1 SMP Sat Sep 14 05:32:37 EDT 2013 x86_64 x86_64 x86_64 GNU/Linux
18:57 dbruhn rhel 6.4
18:57 dbruhn xfs
18:57 kkeithley_ gluster 3.4.1?
18:58 dbruhn 3.3.2
18:58 dbruhn RDMA
18:59 dbruhn and it doesn't show up that way consistently on all of the mounts
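One way to narrow down what dbruhn is seeing, with placeholder paths: compare what the client reports against the bricks, since two directory entries sharing an inode number only on the client side could just as well be a stale readdir/caching artifact in the FUSE mount (remounting clearing it points that way) as real duplicates on disk.

    # on the client: inode numbers as the mount reports them
    ls -li /mnt/volume/some/dir/

    # on each brick: is there really more than one entry, and are any of them hard links?
    stat /bricks/b1/some/dir/filename
    find /bricks/b1/some/dir -samefile /bricks/b1/some/dir/filename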
19:06 aixsyd kkeithley_: dbruhn: default install - so i'd assume both are turned on
19:06 dbruhn aixsyd disable selinux, and turn off the firewall and see if the issue persists
19:08 aixsyd roger that. firewall was off, but not selinux. i assume a reboot is in order
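For the record, the SELinux half of that experiment doesn't strictly need a reboot; the usual sequence on Fedora 19 looks roughly like this:

    # switch to permissive immediately (lasts until reboot) and confirm
    setenforce 0
    getenforce

    # make it persistent across reboots
    sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config

    # F19 ships firewalld rather than plain iptables
    systemctl stop firewalld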
19:09 dbruhn that will do what you need it to
19:09 Kodiak1 Question:  For someone constrained to running Gluster on top of ISCSI mounts, can you grow the brick that occupies the ISCSI LUN after you've grown the LUN or would I have to instead add LUNs, then define them as additional bricks to grow?
19:12 aixsyd that was it!
19:14 dbruhn congrats
19:15 JoeJulian Kodiak1: either one will work.
19:16 Kodiak1 Thanks JoeJulian, it's good to know that I can grow the bricks as growing a single LUN is the customary method offered by my storage admins.
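A sketch of what growing a brick in place typically looks like, assuming an XFS brick filesystem sitting directly on the iSCSI LUN (device name and paths are placeholders):

    # after the array-side LUN grow, have the kernel rescan the device size
    echo 1 > /sys/block/sdb/device/rescan

    # grow the filesystem online; gluster sees the extra space automatically
    xfs_growfs /bricks/b1

The add-LUN route instead turns the new space into additional bricks (added in multiples of the replica count) followed by a rebalance:

    gluster volume add-brick myvol node1:/bricks/b2 node2:/bricks/b2
    gluster volume rebalance myvol start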
19:21 lpabon joined #gluster
20:08 lpabon_ joined #gluster
20:34 zerick joined #gluster
21:03 aixsyd i'm sorry, but ovirt 3.3 is SUPER buggy.
21:03 aixsyd especially with FC19
21:06 cjh973 left #gluster
21:16 harish_ joined #gluster
21:21 Skaag joined #gluster
21:24 social kkeithley_: the issue appears randomly and only on production \o/ we are talking about several thousands in ~1 minute
21:26 JoeJulian social: I just scrolled back. 3.4.0 had a few places with race conditions (fixed in 3.4.1) that I think might be able to cause that.
21:37 hngkr_ joined #gluster
21:39 social JoeJulian: well we'll switch to 3.4.1 soon so I hope that will make the thing go away :)
21:56 dbruhn What would prompt an I/O error (split brain), and then when you go back to check it, not throw the error anymore?
21:58 JoeJulian The only thing I can think of is a source came available that allowed it to self-heal.
21:59 JoeJulian I'm doing the BOFH thing today. An underling came and asked me to confirm their solution to a problem where the print down the left side of the paper is very light. I told them to ask the user to take the paper out of the tray and turn it over and see if the problem moves to the other side. <evil grin>
22:00 dbruhn Printer is a sitting at an angle isn't it....
22:00 JoeJulian No, probably a drum or a bad toner.
22:01 JoeJulian but I love how gullible these people are sometimes.
22:01 dbruhn lol, I had a sales employee once who demanded his own printer. His desk was always a mess; one day I went and put a couple notebooks under the one side when I knew the toner was low.
22:02 dbruhn he finally started using the printer everyone else did after that
22:04 JoeJulian Last April fools day, I posted on the intranet that for some reason the bit bucket was low on capital 'S'es and that I was running a script to refill them, but to allow it to catch up, please only use lower-case ones for the rest of the day.
22:13 dneary joined #gluster
22:13 dbruhn nice
22:16 y4m4 joined #gluster
22:23 glusterbot New news from resolvedglusterbugs: [Bug 951551] license: xlators/protocol/server dual license GPLv2 and LGPLv3+ <http://goo.gl/a1y1LE>
22:26 JoeJulian Hey, I never approved my code to be GPLv2....
22:27 * JoeJulian is just kidding. :D
22:27 JoeJulian kkeithley: ^ :P
22:29 a2 those were files which were missed out (accidentally) while re-licensing
22:31 JoeJulian Though I'm joking around, with contributions outside of Red Hat, RH Legal should probably be consulted to determine if outside contributors need to sign off on the license change.
22:32 JoeJulian if there were any
22:32 a2 due process was followed
22:32 JoeJulian cool
22:32 a2 the patch had just missed out some files back then, which was discovered later
22:34 JoeJulian Sure, but if I, for example, agreed with LGPLv3 but not GPLv2 and contributed only to the files that were mislabeled, I could potentially be a problem and my spelling corrections might have to be reverted. ;)
22:34 calum_ joined #gluster
22:34 JoeJulian That was my only worry.
22:39 a2 the files which were modified had no external changes since RH legal's review and approval
22:43 glusterbot New news from newglusterbugs: [Bug 1004519] SMB:smbd crashes while doing volume operations <http://goo.gl/DMsNHh>
22:55 mjrosenb this isn't good.
22:55 mjrosenb I put a gluster line in /etc/fstab, and now it hangs rather than mounting the filesystem
22:59 mjrosenb oh, goodie.  it isn't just when run from fstab.
22:59 mjrosenb this makes it much easier to debug.
22:59 dbruhn lol
23:01 mjrosenb ok, this is odd (I think)
23:01 dbruhn Do tell
23:02 mjrosenb /usr/sbin/glusterfs --volfile-id=magluster --volfile-server=memoryalpha /data is calling /bin/mount?
23:04 dbruhn not sure to be honest
23:08 JoeJulian should be the other way around
23:08 ira joined #gluster
23:08 JoeJulian you know about "ps -f", right?
23:09 twx joined #gluster
23:09 mjrosenb JoeJulian: yeah, it went back and forth a couple of times
23:10 mjrosenb let me get it again.
23:16 twx joined #gluster
23:22 twx joined #gluster
23:22 mjrosenb https://gist.github.com/7242012
23:22 glusterbot Title: xcut (at gist.github.com)
23:22 mjrosenb there we go.  it is cut off a bit because my terminal isn't wide enough.
23:23 JoeJulian Oh, ok. I guess that makes sense.
23:23 JoeJulian but it's hanging there?
23:23 mjrosenb https://gist.github.com/7242019 there we go, that gets all of the arguments.
23:23 glusterbot Title: xcut (at gist.github.com)
23:23 mjrosenb yup.
23:25 mjrosenb https://gist.github.com/7242035
23:25 glusterbot Title: xcut (at gist.github.com)
23:25 JoeJulian Standard stuff: volume's started and reachable, check client log, audit.log...
23:26 mjrosenb what port(o) does gluster normally use?
23:26 JoeJulian @ports
23:26 glusterbot JoeJulian: glusterd's management port is 24007/tcp and 24008/tcp if you use rdma. Bricks (glusterfsd) use 24009 & up for <3.4 and 49152 & up for 3.4. (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111 and 2049 since 3.4.
23:26 mjrosenb i'm going to nmap it.
23:27 mjrosenb https://gist.github.com/7242060
23:27 glusterbot Title: xcut (at gist.github.com)
23:28 JoeJulian hostname resolution?
23:29 mjrosenb well, I just nmapped it by hostname, so hopefully it is working.
23:29 JoeJulian selinux?
23:31 JoeJulian shouldn't be that...
23:32 JoeJulian What version are your servers?
23:34 mjrosenb 3.3
23:35 JoeJulian timeout... could mean that the TCP stack is responding but glusterd is not. Try restarting glusterd.
23:35 chirino joined #gluster
23:42 mjrosenb there are other clients that *can* access the volume right now.
23:42 mjrosenb is there any part of the protocol that I can type out by hand in telnet?
23:45 mjrosenb here's the strange thing...
23:45 JoeJulian mjrosenb: other clients will continue to be able to access the volume even if glusterd was hung.
23:45 mjrosenb rather, *a* strange thing
23:46 mjrosenb I ran mount about 20 minutes ago, and it still hasn't returned.
23:46 JoeJulian probably will in about 10.
23:46 mjrosenb there's nothing new in the logs.
23:47 JoeJulian frame_timeout is 30 minutes
23:47 JoeJulian nothing /should/ ever get there.
23:47 mjrosenb ahh, that's new.
23:47 mjrosenb https://gist.github.com/7242274
23:47 glusterbot Title: xcut (at gist.github.com)
23:49 mjrosenb as soon as I strace -p 15109, everything exits.
23:50 mjrosenb and it says 'failed to mount'
23:51 mjrosenb JoeJulian: you said that calling mount from within mount made sense.  what is it doing?
23:54 JoeJulian mount calls mount.glusterfs, which starts the glusterfs userspace client, which calls mount.fuse to create the mountpoint. You->FUSE->glusterfs->network
23:55 JoeJulian I've never seen it happen, though, since it happens so fast.
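When that chain hangs the way mjrosenb describes, running the client by hand in the foreground usually shows where it stops; the arguments below are copied from the ps output pasted earlier:

    glusterfs --volfile-id=magluster --volfile-server=memoryalpha --debug /data

--debug keeps the process attached to the terminal with DEBUG logging to the console, so a stall fetching the volfile is distinguishable from a stall in the FUSE mount step.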
