
IRC log for #gluster, 2014-03-18


All times shown according to UTC.

Time Nick Message
00:02 zerick joined #gluster
00:04 robo joined #gluster
00:05 kam270 joined #gluster
00:13 kam270 joined #gluster
00:14 chirino joined #gluster
00:14 velladecin Hey guys, a silly question. When I'm fuse mounting Gluster the only process that is then running on the client is something like: /usr/sbin/glusterfs --fuse-mountopts=nodev,noatime.....
00:14 velladecin there is no need to have glusterd running or similar?
00:21 kam270 joined #gluster
00:24 robo joined #gluster
00:53 Mattlantis velladecin, that's right, glusterd and similar are only needed on your gluster server, the fuse driver is all for the clients
00:55 velladecin thanks Mattlantis
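For reference, a minimal sketch of the client-side fuse mount described above; the hostname "server1", volume name "myvol" and the mount point are placeholders, not values from this log:
    # only the glusterfs fuse helper runs on the client; glusterd is a server-side daemon
    mount -t glusterfs server1:/myvol /mnt/gluster
    # equivalent /etc/fstab entry:
    # server1:/myvol  /mnt/gluster  glusterfs  defaults,_netdev  0 0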
00:55 johnmwilliams__ joined #gluster
00:56 fyxim_ joined #gluster
00:57 brandonmckenzie joined #gluster
01:00 YazzY joined #gluster
01:00 YazzY joined #gluster
01:02 trmsbrandon Hi, I'm a new gluster user and I'm having an issue.  I'm trying to set up gluster on Ubuntu 13.10, using ZFS as my back-end file system.  If I create a gluster volume using an ext4 filesystem, everything works fine.  If I create it using ZFS I get "E [graph.c:526:glusterfs_graph_activate] 0-graph: init failed" in the brick's log.  Anyone have any suggestions?
01:14 johnmwilliams__ joined #gluster
01:16 fyxim_ joined #gluster
01:22 bazzles joined #gluster
01:32 tokik joined #gluster
01:37 tokik_ joined #gluster
01:39 tokik joined #gluster
01:49 Ponyo joined #gluster
01:50 hagarth joined #gluster
01:51 Ponyo Does the gluster server have problems with more than one net going into a box?  I have two networks in my gluster server, a 10.x and a 192.x.  Anything on the 192. side can see the gluster server without a problem.  Anything in the 10.x network can't see gluster at the direct ip, and going via nat into the 192.x network I can't access the share either
01:52 Ponyo netstat says gluster is listening on all interfaces for that port
01:54 tokik joined #gluster
01:55 Ponyo https://www.youtube.com/watch?v=qVoXUGolOlo
01:55 glusterbot Title: The Skatalites - Live At Lokerse Feesten - YouTube (at www.youtube.com)
01:56 Ponyo oh oops wrong room sorry
01:56 tokik joined #gluster
02:08 Alex Ponyo: do you see the traffic back to the 10 address going via the 192 interface?
02:08 Alex (use tcpdump to verify - tcpdump -A -s0 -i any > dump, do some traffic, etc)
02:09 Ponyo Alex: one moment, i'll do some sleuthing, i did notice that when i told the client in the 10. network to go to the 10. address it tried to go to the 192. address
02:09 Ponyo 10->192 is a basic nat break, and i can communicate to the internet and other clients in the 192 network ( from the 10 network ) just fine
02:10 Ponyo i see the gluster client try to go to the 192. address in the gluster logs, that was my only tip
02:14 Elico left #gluster
02:16 cjanbanan joined #gluster
02:26 Ponyo Alex: Couldn't I limit tcpdump to port 24007 as well or does it cascade through ports?
02:26 Alex lazy habit, Ponyo :)
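For reference, a port-limited capture along the lines Ponyo suggests; 24007 is the glusterd management port, while the bricks listen on their own ports (49152 and up on 3.4, 24009 and up on older releases), so the ranges below are only examples:
    tcpdump -A -s0 -i any port 24007
    # to include brick traffic as well, check the actual brick ports with `gluster volume status`:
    tcpdump -A -s0 -i any portrange 24007-24008 or portrange 49152-49160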
02:28 haomaiwa_ joined #gluster
02:31 Ponyo Alex: http://hastebin.com/didaxitohi.avrasm .254 is the gluster server, the .16 is the client
02:31 glusterbot Title: hastebin (at hastebin.com)
02:32 m0zes joined #gluster
02:32 Ponyo hang on, i added some of your flags
02:32 Ponyo http://hastebin.com/xenukuyugu.hs
02:32 glusterbot Title: hastebin (at hastebin.com)
02:33 Ponyo Alex: I figured it out
02:34 Ponyo Dns on the client was resolving the dns to the other ip address, a quick hack of the hosts file and i'm in business
02:34 Ponyo thank you for your advice
02:38 lalatenduM joined #gluster
02:41 harish_ joined #gluster
02:56 jmarley joined #gluster
02:56 m0zes joined #gluster
03:10 bharata-rao joined #gluster
03:17 chirino joined #gluster
03:18 pdrakeweb joined #gluster
03:25 RameshN joined #gluster
03:28 DV joined #gluster
03:42 shubhendu joined #gluster
03:43 davinder2 joined #gluster
03:44 purpleidea joined #gluster
03:46 itisravi joined #gluster
03:48 glusterbot New news from newglusterbugs: [Bug 1077452] Unable to setup/use non-root Geo-replication <https://bugzilla.redhat.com/show_bug.cgi?id=1077452>
03:55 jag3773 joined #gluster
03:59 m0zes joined #gluster
04:00 mohankumar joined #gluster
04:25 ngoswami joined #gluster
04:29 pk1 joined #gluster
04:36 mohankumar joined #gluster
04:40 mohankumar joined #gluster
04:41 hagarth joined #gluster
04:44 bala joined #gluster
04:45 ndarshan joined #gluster
04:52 ppai joined #gluster
04:57 lalatenduM joined #gluster
04:57 nshaikh joined #gluster
04:57 ravindran joined #gluster
04:57 haomai___ joined #gluster
05:03 vpshastry joined #gluster
05:03 dusmant joined #gluster
05:07 dorko joined #gluster
05:07 Elico joined #gluster
05:08 davinder joined #gluster
05:08 kdhananjay joined #gluster
05:11 prasanthp joined #gluster
05:13 Ark joined #gluster
05:13 calum_ joined #gluster
05:19 shubhendu joined #gluster
05:19 RameshN joined #gluster
05:21 davinder2 joined #gluster
05:21 dorko joined #gluster
05:23 vkoppad joined #gluster
05:23 meghanam joined #gluster
05:23 meghanam_ joined #gluster
05:23 ndarshan joined #gluster
05:24 vpshastry joined #gluster
05:25 bala1 joined #gluster
05:27 shylesh joined #gluster
05:36 davinder joined #gluster
05:45 raghu joined #gluster
05:48 cyber_si joined #gluster
05:56 cjanbanan joined #gluster
05:59 aravindavk joined #gluster
06:04 davinder joined #gluster
06:04 dorko joined #gluster
06:06 rahulcs joined #gluster
06:16 dorko joined #gluster
06:18 gdubreui joined #gluster
06:19 Philambdo joined #gluster
06:21 satheesh1 joined #gluster
06:23 vimal joined #gluster
06:24 andreask joined #gluster
06:28 mohankumar joined #gluster
06:30 psharma joined #gluster
06:34 FarbrorLeon joined #gluster
06:42 Elico joined #gluster
06:43 davinder2 joined #gluster
06:46 benjamin_____ joined #gluster
06:49 kdhananjay joined #gluster
06:54 ricky-ticky joined #gluster
07:08 rahulcs joined #gluster
07:09 prasanthp joined #gluster
07:10 prasanthp joined #gluster
07:12 bala1 joined #gluster
07:16 rahulcs joined #gluster
07:18 glusterbot New news from newglusterbugs: [Bug 1077516] [RFE] :- Move the container for changelogs from /var/run to /var/lib/misc <https://bugzilla.redhat.com/show_bug.cgi?id=1077516>
07:22 ngoswami joined #gluster
07:25 ndarshan joined #gluster
07:27 hagarth joined #gluster
07:29 T0aD joined #gluster
07:29 RameshN joined #gluster
07:29 shubhendu joined #gluster
07:32 dusmant joined #gluster
07:37 cjanbanan joined #gluster
07:39 jtux joined #gluster
07:41 dusmant joined #gluster
07:41 slayer192 joined #gluster
07:43 haomaiwa_ joined #gluster
07:48 haomai___ joined #gluster
07:53 rahulcs joined #gluster
07:55 ctria joined #gluster
08:00 kdhananjay joined #gluster
08:07 eseyman joined #gluster
08:08 pea_brain joined #gluster
08:08 rahulcs joined #gluster
08:09 pea_brain left #gluster
08:11 cjanbanan joined #gluster
08:12 hagarth joined #gluster
08:13 kanagaraj joined #gluster
08:19 franc joined #gluster
08:19 franc joined #gluster
08:30 les_ joined #gluster
08:30 les_ left #gluster
08:38 Ark joined #gluster
08:48 hagarth joined #gluster
08:54 Norky joined #gluster
08:54 saurabh joined #gluster
08:56 fsimonce joined #gluster
08:59 jbustos joined #gluster
09:04 liquidat joined #gluster
09:05 doekia joined #gluster
09:05 doekia_ joined #gluster
09:12 Pavid7 joined #gluster
09:12 ricky-ticky1 joined #gluster
09:13 haomaiwang joined #gluster
09:19 gdubreui joined #gluster
09:28 harish_ joined #gluster
09:36 rahulcs joined #gluster
09:40 Oneiroi joined #gluster
09:52 rahulcs joined #gluster
09:55 meghanam_ joined #gluster
09:55 meghanam joined #gluster
09:56 haomaiwang joined #gluster
09:58 vpshastry2 joined #gluster
10:01 Pavid7 joined #gluster
10:04 tokik joined #gluster
10:06 hagarth @seen yuan
10:06 glusterbot hagarth: I have not seen yuan.
10:06 hagarth @seen yuan_
10:06 glusterbot hagarth: I have not seen yuan_.
10:11 ravindran left #gluster
10:14 AndChat|106409 joined #gluster
10:35 nshaikh joined #gluster
10:36 sks joined #gluster
10:42 meghanam_ joined #gluster
10:43 meghanam joined #gluster
10:48 ctria joined #gluster
10:49 hagarth joined #gluster
10:51 glusterbot New news from newglusterbugs: [Bug 1073023] glusterfs mount crash after remove brick, detach peer and termination <https://bugzilla.redhat.com/show_bug.cgi?id=1073023>
11:01 rfortier joined #gluster
11:03 vpshastry1 joined #gluster
11:07 tokik joined #gluster
11:12 tokik_ joined #gluster
11:13 satheesh1 joined #gluster
11:14 tokik joined #gluster
11:15 diegows joined #gluster
11:16 XATRIX joined #gluster
11:16 XATRIX Hi guys, i need your advice
11:17 tokik joined #gluster
11:17 XATRIX I've found an article on the net, about glusterfs performance charts. And i have a question
11:18 XATRIX http://gluster.org/docs/PDF/GlusterFS-IOZone-v3.pdf
11:18 XATRIX Those guys say they have 1x 300GB SATAII disk, with max unbuffered reads = ~58MB/s
11:18 XATRIX But the glusterfs shows performance ~700-800MB/s
11:18 XATRIX 58 * 15 bricks
11:19 XATRIX How the hell is it possible to achieve 750-800 MB/s with a single drive ?
11:19 ctria joined #gluster
11:20 shubhendu joined #gluster
11:20 pk1 joined #gluster
11:21 RameshN joined #gluster
11:21 glusterbot New news from newglusterbugs: [Bug 1074947] add option to bulld rpm without server <https://bugzilla.redhat.com/show_bug.cgi?id=1074947>
11:21 hagarth joined #gluster
11:23 hybrid512 joined #gluster
11:23 hybrid512 joined #gluster
11:27 ndarshan joined #gluster
11:28 tokik_ joined #gluster
11:30 hybrid512 joined #gluster
11:31 edward1 joined #gluster
11:31 hybrid512 joined #gluster
11:33 ppai joined #gluster
11:36 Ark joined #gluster
11:43 lalatenduM joined #gluster
11:44 sroy joined #gluster
11:45 qdk joined #gluster
11:46 aravindavk joined #gluster
11:49 kanagaraj joined #gluster
11:51 ndarshan joined #gluster
11:52 ira_ joined #gluster
11:54 JediMaster joined #gluster
11:54 JediMaster Hi everyone, I'm trying to get an idea of how Gluster works and if it's suitable for our needs.
11:55 Pavid7 joined #gluster
11:55 JediMaster What we have is two webservers in different datacentres, that have a 10G connection with <2ms latency, we want to make one webserver the primary machine and our DC will have a floating IP that will failover to the secondary DC should the primary go down
11:55 JediMaster To get this working we need to replicate the filesystem and database between the machines, but only ever one will be live
11:55 JediMaster DB is fine with MySQL multi-master replication
11:57 JediMaster for the filesystem, from what I gather, gluster sits on top of the filesystem, e.g. ext4, and syncs file changes between multiple servers, then the machines read/write independently to the local ext4 FS oblivious to the replication happening?
11:58 JediMaster From what I can see, we need two gluster-servers replicating to each other, and that's it? What does a gluster-client do?
11:58 Alex JediMaster: Kind of, yeah. Gluster has the concept of 'bricks', areas for data storage. In the deployments I tend to go with, writes/reads happen through a gluster FUSE mounted filesystem, which is actually effectively an aggregate of those N bricks presented through to the consumer
11:58 XATRIX multi-master is a headache...
11:58 Alex so in my case, I have two webservers, each run glusterd and store files on /data, but I mount the glusterfs fs on /shared, and point my webserver to that
11:58 JediMaster XATRIX, not so much if there's only one ever active
11:58 Alex (nb: I may be massively butchering the terminology :))
11:59 JediMaster ok, I see, so gluster does present its own FS to the machine
11:59 JediMaster and it stores the data in some sort of image files?
12:00 Alex Correct. It happens that in my situation, I can actually read files from the raw brick storage (/data) themselves too
12:00 Alex but in certain configurations (replicated-distributed(?)) you can't do that quite as easily, if you have multiple bricks
12:01 JediMaster Alex, I'm looking at the instructions here: https://www.digitalocean.com/community/articles/how-to-create-a-redundant-storage-pool-using-glusterfs-on-ubuntu-servers doe they look reasonable to setup what I need?
12:01 glusterbot Title: How To Create a Redundant Storage Pool Using GlusterFS on Ubuntu Servers | DigitalOcean (at www.digitalocean.com)
12:02 JediMaster from what I can see the bricks in this case are one on each of the two servers
12:03 Alex I believe so, yes. Although, I believe in your case, you would definitely need to handle writes through the gluster fuse fs
12:03 JediMaster is that what the gluster-client handles?
12:04 Alex Yes
12:04 Alex sudo mount -t glusterfs domain1.com:volume_name path_to_mount_point
12:04 Alex (that bit, specifically)
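For reference, a rough sketch of the two-server replicated setup the DigitalOcean article walks through; hostnames, brick paths and the volume name below are placeholders:
    gluster peer probe server2                   # run once, from server1
    gluster volume create webvol replica 2 server1:/data/brick server2:/data/brick
    gluster volume start webvol
    # on each web server, mount the volume (not the brick path) and point the web root at it:
    mount -t glusterfs localhost:/webvol /shared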
12:04 XATRIX JediMaster: I was going to use multi-master with double nginx-proxy. 1-2 2-1 But after a while, i found duplicate entries...
12:04 JediMaster XATRIX, load-balancing or just having a single machine live at once?
12:05 XATRIX No no.. .I have 2 nodes cluster, and wish to double computing performance by spreading the load between
12:05 XATRIX Round-robin
12:05 XATRIX 1 DB, 1 shared-storage
12:06 shapemaker joined #gluster
12:06 JediMaster Right, yeah it's possible, especially if you don't set up offset autoincrement ids and worse if there's high latency
12:06 rahulcs joined #gluster
12:06 JediMaster at least in my case there's effectively only one being written to at any one time
12:06 JediMaster Alex: I see, so I'd need to install glusterfs-client on both servers too?
12:07 JediMaster then mount on both
12:07 XATRIX After a few restarts of randomly selected nodes, i had troubles with DB replication. My gluster storage was able to keep its sanity through the reboots
12:07 Alex In your case, yeah.
12:07 Alex [and in my case :)]
12:07 JediMaster Yeah, ok, I think I'm starting to understand it a bit better
12:07 XATRIX I did autoincrement and i have directly-connected 1Gbit ethernet nodes
12:07 JediMaster Thanks for your help Alex
12:09 XATRIX Guys, what if i didn't setup gluster.vol(s)
12:09 XATRIX How can i configure io caches ?
12:09 XATRIX I have a running configuration already
12:09 XATRIX No idea where it stored.
12:10 pk1 joined #gluster
12:10 purpleidea joined #gluster
12:10 purpleidea joined #gluster
12:12 aravindavk joined #gluster
12:14 JediMaster Alex, how does Gluster deal with disconnections, e.g. if the primary machine went down, the secondary was written to and the primary was brought back up, would it sync the changes back across to the primary?
12:15 kam270_ joined #gluster
12:15 JediMaster also, is that process of reconnecting and re-syncing completely automated?
12:16 Alex That is where my knowledge ends, sadly, JediMaster. My understanding is that Gluster will be aware of the files which are incorrect - but have a look at JoeJulian's excellent blog - some recent posts include http://joejulian.name/blog/glusterfs-split-brain-recovery-made-easy/
12:16 glusterbot Title: GlusterFS Split-Brain Recovery Made Easy (at joejulian.name)
12:16 glusterbot New news from resolvedglusterbugs: [Bug 834729] gluster volume remove-brick defaults to force commit, and causes data loss <https://bugzilla.redhat.com/show_bug.cgi?id=834729>
12:16 itisravi joined #gluster
12:21 a2 joined #gluster
12:21 glusterbot New news from newglusterbugs: [Bug 1077682] [RFE] gluster volume remove-brick defaults to force commit, and causes data loss <https://bugzilla.redhat.com/show_bug.cgi?id=1077682>
12:22 ollivera joined #gluster
12:22 jtux joined #gluster
12:23 kam270_ joined #gluster
12:27 JediMaster Alex, interesting, does 3.4's self-heal on replicate do that automatically though?
12:28 Alex that is what I will tell you when I next get a failure. Do not ask why we didn't go through OAT with this ;-)
12:31 rahulcs joined #gluster
12:31 kam270_ joined #gluster
12:34 Ark joined #gluster
12:36 bala1 joined #gluster
12:40 japuzzo joined #gluster
12:41 pdrakeweb joined #gluster
12:41 RameshN joined #gluster
12:43 kanagaraj joined #gluster
12:44 aixsyd joined #gluster
12:44 aixsyd heya gents
12:44 rahulcs joined #gluster
12:45 aixsyd JoeJulian: Can you shed some light on the volume set primary/secondary storage.owner-uid commands? I can only find limited documentation on Red Hat Storage regarding these commands
12:45 aixsyd I see in those docs specifically the following: "If you are using QEMU/KVM as a hypervisor, ensure to set the brick permissions for QEMU user by setting storage.owner-uid and storage.owner-gid to 107. After you set the required permissions, you must restart the volume. "
12:45 aixsyd ((I am using QEMU/KVM))
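For reference, what the Red Hat Storage passage aixsyd quotes boils down to at the CLI; "myvol" is a placeholder, and 107 is the qemu user's uid/gid on RHEL-family hosts (verify with `id qemu` on yours):
    gluster volume set myvol storage.owner-uid 107
    gluster volume set myvol storage.owner-gid 107
    # the docs quoted above say a volume restart is required after changing these:
    gluster volume stop myvol
    gluster volume start myvol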
12:48 kam270_ joined #gluster
12:50 pdrakeweb joined #gluster
12:51 pk1 joined #gluster
12:52 andreask joined #gluster
12:53 pdrakeweb joined #gluster
12:55 bennyturns joined #gluster
12:56 JediMaster how do you specify the size of the volume you're creating with gluster?
12:56 aixsyd JediMaster: I believe that is determined by the formatted brick sizes, no?
12:57 pk1 left #gluster
12:58 JediMaster ahh, it's the smallest size of the two servers
12:58 JediMaster duh
12:58 aixsyd ^_^
12:59 gmcwhistler joined #gluster
13:03 ndarshan joined #gluster
13:05 rfortier1 joined #gluster
13:05 jag3773 joined #gluster
13:06 aravindavk joined #gluster
13:10 tdasilva joined #gluster
13:13 bala1 joined #gluster
13:14 rahulcs joined #gluster
13:19 coredump joined #gluster
13:21 theron joined #gluster
13:21 monotek joined #gluster
13:21 monotek left #gluster
13:22 monotek joined #gluster
13:27 vpshastry joined #gluster
13:28 robo joined #gluster
13:28 monotek @semiosis
13:28 glusterbot monotek: I do not know about 'semiosis', but I do know about these similar topics: 'semiosis tutorial'
13:28 monotek i just added your ubuntu-qemu-glusterfs ppa in trusty but it seems installing qemu prefers the universe version instead of yours?
13:28 monotek is that the wanted behaviour?
13:28 aixsyd lol @ glusterbot
13:30 vpshastry left #gluster
13:32 jtux joined #gluster
13:34 benjamin_____ joined #gluster
13:42 monotek @ semiosis
13:42 glusterbot monotek: I do not know about 'semiosis', but I do know about these similar topics: 'semiosis tutorial'
13:42 monotek seems your version is older :-(
13:42 monotek 1.7.0+dfsg-3ubuntu6 vs 1.7.0+dfsg-3ubuntu2semiosis1
13:45 rfortier joined #gluster
13:47 rfortier2 joined #gluster
13:47 Pavid7 joined #gluster
13:49 ccha joined #gluster
13:52 ProT-0-TypE joined #gluster
13:53 robo joined #gluster
13:58 ccha joined #gluster
14:00 lalatenduM_ joined #gluster
14:00 rpowell joined #gluster
14:05 japuzzo joined #gluster
14:05 kaptk2 joined #gluster
14:06 monotek @semiosis
14:06 monotek i pinned your ppa now.
14:06 monotek are there any plans to have automatic updates for this ppa?
14:08 primechuck joined #gluster
14:10 jobewan joined #gluster
14:10 clutchk joined #gluster
14:14 Pavid7 joined #gluster
14:17 davinder joined #gluster
14:21 XATRIX Guys, how can i setup iocache ?
14:22 XATRIX If i didn't make .vol config file for my setup
14:22 XATRIX only from CLI
14:22 B21956 joined #gluster
14:22 XATRIX i set my gluster up
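For reference: with a CLI-created volume there is no hand-written .vol file to edit; the generated ones live under /var/lib/glusterd/vols/<volname>/ and the io-cache translator is tuned through `volume set`. A sketch, with "myvol" and the values as placeholders:
    gluster volume set myvol performance.io-cache on
    gluster volume set myvol performance.cache-size 256MB
    gluster volume set myvol performance.cache-refresh-timeout 1
    gluster volume info myvol        # lists the options currently in effect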
14:25 shylesh joined #gluster
14:26 robo joined #gluster
14:28 sprachgenerator joined #gluster
14:28 seapasulli joined #gluster
14:31 burnalot joined #gluster
14:31 JediMaster is it normal to see very slow performance when copying lots of small files across to a gluster file system?
14:31 burnalot I am having an issue with Large files and quota
14:32 burnalot has anyone ever seen this?
14:32 criticalhammer JediMaster: ive seen it before
14:32 criticalhammer but small file performance is an issue with gluster as a whole
14:32 JediMaster it's taken > 1 hour to copy 37k files totalling around 45MB
14:32 burnalot just moving a 28 GB file from Linux.rar to Linux2.rar says quota exceeded
14:32 criticalhammer thats not normal
14:32 burnalot the file is 28 GB and I have a 50 GB quota
14:33 criticalhammer burnalot: what do your logs say?
14:33 burnalot same thing
14:33 criticalhammer whats your layout?
14:34 burnalot 2014-03-18 14:42:53.023022] W [fuse-bridge.c:1660:fuse_rename_cbk] 0-glusterfs-fuse: 423053229: /bstetler_8768_8678/Linux.rar -> /bstetler_8768_8678/Linux2.rar => -1 (Disk quota exceeded)
14:34 burnalot so basically
14:34 JediMaster criticalhammer, any idea what's causing it? This is purely for test purposes, with two servers in different datacentres, with a 100Mbps link on one side and 1Gbps on the other
14:34 burnalot we have a bluster cluster distributed replicated 4 servers
14:34 burnalot we mount bluster on two front servers
14:34 burnalot give clients access to their own home directory
14:34 JediMaster criticalhammer, 3.5ms latency between them
14:34 burnalot and we mount bluster to /home
14:34 criticalhammer JediMaster: off the top of my head, I can't think of anything
14:35 burnalot grr mac keeps change gluster to bluster!
14:35 burnalot what the hell is bluster lol
14:35 criticalhammer JediMaster: Whats the server load, memory use, disk IO, additional traffic?
14:36 criticalhammer burnalot: I dont know, i dont have much experience with quotas
14:36 burnalot nothing in logs of gluster servers
14:37 JediMaster 3GB memory available, load of 0.16 no other traffic of any sort and disk io looks like around 50K/sec write, shared between two glusterfsd processes
14:37 JediMaster will check on the other server...
14:37 criticalhammer also make sure to check the client end as well
14:38 m0zes joined #gluster
14:38 JediMaster load 0.15, 0.0-0.2% io wait, 1.5GB free, about 50K/sec IO shared between 4 glusterfsd processes
14:38 JediMaster criticalhammer, both servers are servers and clients
14:39 JediMaster both running Ubuntu 13.10 with the 3.4 gluster PPA
14:39 criticalhammer hmmm, my little 4 node desktop setup goes faster than that
14:40 criticalhammer is there anything in the logs?
14:40 monotek imho for smal files nfs is faster than fuse mount
14:40 JediMaster which log should I look in?
14:40 criticalhammer yeah it is monotek but ive never experienced fuse being that slow
14:40 criticalhammer /var/log/glusterfs
14:41 criticalhammer check the brick and volume logs
14:41 JediMaster yeah which file, cli.log, glustershd.log nfs.log storage-pool.log etc.
14:41 jre1234 joined #gluster
14:41 JediMaster nothing since it was setup 2 hours ago in the volume log
14:42 criticalhammer check the brick log
14:42 JediMaster likewise in the brick log
14:42 criticalhammer then the cli.log
14:42 criticalhammer in my test bed i dont have those problems
14:43 JediMaster nothing in the cli log for 2 hours too
14:43 JediMaster likewise on the other server
14:43 criticalhammer try out nfs
14:43 criticalhammer see if there is a difference
14:44 criticalhammer maybe in your environment fuse is a no go
14:44 JediMaster what do you mean by trying out nfs, I noticed an nfs log there, does it actually provide an nfs server interface?
14:45 sroy joined #gluster
14:45 zaitcev joined #gluster
14:45 JediMaster sorry, completely new to gluster as of 2 hours ago =)
14:45 criticalhammer im still in the "learning" boat myself
14:45 criticalhammer gluster has built in nfs server
14:45 JediMaster nice, didn't know that
14:46 criticalhammer read up on how to use it and try it out
14:46 JediMaster anything special required to mount it?
14:46 criticalhammer so far nfs is still better at small files than fuse
14:46 criticalhammer JediMaster: ive never used the built in nfs
14:46 criticalhammer id read up on it
14:48 JediMaster criticalhammer, ok that was dead easy, unmount and change -t glusterfs to -t nfs
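For reference, gluster's built-in NFS server speaks NFSv3 over TCP, so a mount along these lines is typical (hostname and volume name are placeholders):
    mount -t nfs -o vers=3,mountproto=tcp server1:/myvol /mnt/gluster
    # some setups also need the 'nolock' option if the NLM lock service isn't available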
14:49 rwheeler joined #gluster
14:51 jre1234 Anyone ever had problems with replace-brick ? I had a replace command lock up a node with 100% cpu usage and I now can't get it to abort the replace-brick even after restarting the affected node. Gluster v 3.3.2
14:52 jre1234 brick log is full of
14:52 tokik joined #gluster
14:52 jre1234 volnamehere-replace-brick: transport.address-family not specified. Could not guess default value from (remote-host:(null) or transport.unix.connect-path:(null)) options
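For reference, the documented way to inspect and abort a pending replace-brick on 3.3 (volume name and brick paths are placeholders; in jre1234's case even the abort is evidently not taking effect):
    gluster volume replace-brick myvol server1:/old/brick server2:/new/brick status
    gluster volume replace-brick myvol server1:/old/brick server2:/new/brick abort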
14:54 lmickh joined #gluster
14:58 criticalhammer JediMaster: good to know, post up your findings when your done
15:00 Pavid7 joined #gluster
15:01 kam270_ joined #gluster
15:02 benjamin_____ joined #gluster
15:05 daMaestro joined #gluster
15:09 kam270_ joined #gluster
15:11 rpowell1 joined #gluster
15:16 m0zes joined #gluster
15:17 robo joined #gluster
15:18 semiosis monotek: thanks for the feedback.  i'll upload a new release to the PPA tonight
15:19 JediMaster criticalhammer, I seem to be getting really poor performance even with NFS mount, going to try it on two similar spec machines on the same network instead
15:21 jre1234 joined #gluster
15:21 criticalhammer if your systems dont look stressed when this is happening, my guess is that its the network latency of 1gE
15:21 criticalhammer Gigabit ethernet has all kinds of latency caused by its store and forward architecture
15:22 criticalhammer my testing was done with a dedicated client hitting 4 bricks
15:22 JediMaster technically it's 1gE over the internet to 100Mbps shared VM, so I have a feeling it's the other side causing the problems, going to test local 1gE instead
15:23 criticalhammer how big are your files?
15:24 JediMaster criticalhammer, I was rsyncing an SVN repository into the gluster fs. So many many very small files
15:24 _dist joined #gluster
15:24 criticalhammer yeah it could be the limitation of ethernet
15:25 JediMaster considerably larger files (e.g. 500MB) copied over in < 1 minute
15:25 criticalhammer also if your sharing your network port with other traffic then thats also playing into it
15:25 criticalhammer so nfs + gluster on 1 network port = slower
15:25 _dist I've got a question about healing operations. I know they can take a long time, but how come sometimes they take a very long time to simply resolve to file names --> https://dpaste.de/Uy3k/raw
15:26 _dist I took a node down for 10 min to make a change, and brought it back up. It's been almost 24 hours, and that's still what the volume shows
15:26 JediMaster to make things worse the brick's storage was on the same device as the SVN repo that I was copying from
15:26 JediMaster so that probably added to it
15:26 criticalhammer xD yeah
15:26 criticalhammer poor local disks
15:27 criticalhammer getting a nice thrashing
15:28 daMaestro joined #gluster
15:29 criticalhammer _dist: ive had issues where gluster would do somthing similar
15:29 criticalhammer i never fully figured out why
15:29 chirino joined #gluster
15:30 _dist criticalhammer: I actually suspect that everything _is_ healed, as with running VMs it always shows healing, but that for some reason it just isn't showing filenames on one brick heal info. quantity of "entries" per second is the same on each brick
15:30 criticalhammer JediMaster: make sure to read up on tips and tricks http://www.gluster.org/community/documentation/index.php/HowTo
15:30 glusterbot Title: HowTo - GlusterDocumentation (at www.gluster.org)
15:30 _dist I haven't written my script yet to give "true" heal status for VM volumes
15:31 criticalhammer _dist: could be so. Ive noticed gluster sometimes lags in its reporting of information
15:32 JediMaster semiosis, thanks for remembering ubuntu users with your PPA =)
15:32 criticalhammer but that lag could be from the crappy hardware im currently using as my test bed
15:33 JediMaster criticalhammer, thanks, any idea why it says ext4 is considered harmful for brick storage?
15:33 _dist criticalhammer: If/when I do get to the bottom of it, I'll update here. I don't believe my issue is HW related, I have 20Gbe backend network and 8-disk raid behind each with SSD acceleration
15:34 JediMaster that's some kit!
15:34 _dist JediMaster: is that documented anywhere? We are using ext4 for a replicate file server
15:34 criticalhammer im building somthing similar _dist
15:34 semiosis JediMaster: yw
15:34 criticalhammer but using a 2x2 24TB setup on infiniband
15:35 _dist latency might be better than 10gbe, I get around 20-60ms latency on mine
15:35 criticalhammer _dist: yeah please do it would be helpful
15:35 criticalhammer _dist:  when you do a df -h or ls how long does it take to report back?
15:36 criticalhammer that is... if your using some kind of linux
15:36 JediMaster _dist, http://www.gluster.org/community/documentation/index.php/HowTo says to use XFS for gluster bricks
15:36 glusterbot Title: HowTo - GlusterDocumentation (at www.gluster.org)
15:36 _dist criticalhammer: like, no time at all? As fast the console can print I suppose
15:36 criticalhammer thats impressive
15:37 _dist even du is as fast as native
15:37 criticalhammer i currently have huge file structures, and im concerned ethernet latency will cause problems
15:37 criticalhammer i was thinking of going 10ge but for 500$ total i can get qdr infiniband
15:38 criticalhammer 500$ more total**
15:38 rahulcs joined #gluster
15:38 _dist honestly our bottleneck so far has always been sata. Yeap, I understand where you're coming from
15:39 _dist the heal status on VMs is my only concern so far, I've never had problems with regular files. I assume it's because of eager lock for libgfapi, but have not done any research yet
15:42 _dist and when I say problems with VMs I don't mean we've had any impactful event. Just that I feel without paying special attention you can _never_ be sure if a VMs image is healthy, or healing. Especially with this gfid business (which has never lasted more than 6-7 hours before). I think it might be because I didn't force a full heal like normal, but rather let the shd do the work. So I just forced one and we'll see.
15:45 kam270_ joined #gluster
15:51 jag3773 joined #gluster
15:54 kam270_ joined #gluster
15:57 zaitcev joined #gluster
16:08 kam270_ joined #gluster
16:16 nshaikh left #gluster
16:16 kam270_ joined #gluster
16:18 GabrieleV joined #gluster
16:22 FarbrorLeon joined #gluster
16:26 hagarth joined #gluster
16:30 Mo__ joined #gluster
16:31 kam270_ joined #gluster
16:34 Matthaeus joined #gluster
16:39 kam270_ joined #gluster
16:41 kanagaraj joined #gluster
16:46 Matthaeus joined #gluster
16:47 kam270_ joined #gluster
16:48 shubhendu joined #gluster
16:50 primechuck joined #gluster
16:50 robo joined #gluster
16:54 burnalot joined #gluster
16:59 seapasulli left #gluster
17:01 dewey joined #gluster
17:02 jre1234 Is there any way to kill a replace-brick without using the gluster replace-brick command? i.e can you alter the options in /var/lib/glusterd/  and restart gluster
17:03 dewey Here's an odd problem which I do not understand:  Gluster is behaving differently if I run a tool in bulk vs individually.
17:03 kam270_ joined #gluster
17:05 dewey Background:  I did something evil to Gluster.  I have a distributed volume with 2 bricks.  I deleted the volume, reset the file attrs and rm'd .glusterfs, then recreated the volume.  This left some zero-size, zero-perms files on the bricks for files which were distributed.  These are easy to recognize and remove.  No problems.
17:05 wrale joined #gluster
17:06 dewey From the *client*, when I do an e.g. "wc -c *" I get errors on all of these files.  When I then do a "wc -c $SPECIFIC_FILE" it works and the next time I run "wc -c *" I do not get an error on *that* file.
17:06 dewey It makes no difference if I do e.g. "find . -type f | xargs -n 1 wc -c" -- so it doesn't seem to be about running command one file at a time.
17:06 dewey I also get the same behavior on a little script I wrote.
17:07 dewey And so...I'm questioning my sanity.  If anyone can explain why this is not an indicator of my insanity (I have enough others in my collection) I would be most grateful.
17:07 ndevos dewey: sounds like you need to ,,(self heal)
17:07 glusterbot dewey: I do not know about 'self heal', but I do know about these similar topics: 'targeted self heal'
17:08 ndevos ~targeted self heal | dewey
17:08 glusterbot dewey: http://community.gluster.org/a/howto-targeted-self-heal-repairing-less-than-the-whole-volume/
17:08 semiosis ndevos: that's not what you want :)
17:08 ndevos semiosis: no?
17:08 semiosis dewey: 'gluster volume heal $vol full' or something like that
17:08 ndevos ah, well, yes, that'll do it too :)
17:08 dewey Gluster responds "Volume storage is not of type replicate"
17:08 semiosis ndevos: targeted self heal was my poor-man's glustershd before there was shd
17:09 semiosis dewey: ah ha!  try stopping & starting the volume (note this will block clients)
17:09 semiosis i suspect you made changes to bricks while the volume was online
17:09 semiosis may also need to unmount/remount clients
17:10 dewey OK, I can do that, thanks.  Yes, I'm definitely making changes to the bricks while it's running.  I can understand Gluster not liking that.
17:11 dewey What I don't understand is why Gluster likes it if I run a command manually but not if I run it with any kind of bulk operation (wildcard or xargs)
17:11 dewey (btw ndevos -- that link is 404)
17:11 semiosis dewey: i suspect it has to do with caching of metadata in the brick export daemons and/or clients
17:11 semiosis metadata such as "what files are in this directory"
17:12 dewey Yes, this smells like cache coherency.
17:14 ndevos @forget targeted self heal
17:14 glusterbot ndevos: The operation succeeded.
17:14 ndevos @learn targeted self heal as https://web.archive.org/web/20130314122636/http://community.gluster.org/a/howto-targeted-self-heal-repairing-less-than-the-whole-volume/
17:14 glusterbot ndevos: The operation succeeded.
17:15 semiosis nice!
17:15 ndevos dewey: ^ should work for you too now :)
17:15 semiosis glad that article didnt get lost forever, i put a lot of work into it
17:16 criticalhammer oh thats a nice piece of reading ndevos
17:16 dewey Metadata is being retrieved.  File contents are not.  Behavior is different.
17:17 dewey Good article, thanks.
17:20 dewey OK, this is interesting.  when I put in a 1 second delay between invocations in the automatic run case....it works (and heals the files)
17:23 robo joined #gluster
17:24 criticalhammer so ive been playing around with different brick sizes in a volume
17:24 criticalhammer and im getting out of disk errors even though i have room
17:25 kam270_ joined #gluster
17:25 criticalhammer how does the minimum free disk attribute work?
17:27 criticalhammer Does that apply to each brick?
17:29 calum_ joined #gluster
17:32 diegows joined #gluster
17:32 criticalhammer http://thr3ads.net/gluster-users/2011/08/630059-cluster.min-free-disk-separate-for-each-brick
17:32 glusterbot Title: thr3ads.net - Gluster users - cluster.min-free-disk separate for each, brick [Aug 2011] (at thr3ads.net)
17:32 criticalhammer So is this true in version 3.4?
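For reference, the option the thread above discusses; per that thread it is evaluated per brick, steering placement of new files away from bricks that fall below the threshold (it does not stop existing files from growing). Volume name and value are placeholders:
    gluster volume set myvol cluster.min-free-disk 10%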
17:35 sputnik13 joined #gluster
17:35 sputnik13 joined #gluster
17:36 FarbrorLeon joined #gluster
17:36 jre1234 left #gluster
17:36 jre1234 joined #gluster
17:42 seapasulli joined #gluster
17:45 _dist fyi for those who were here earlier, forcing a heal full fixed the gfid reporting issue in heal info
17:45 criticalhammer Good to know _dist
17:45 _dist I guess whenever I take a node down/up (until the always healing info problem is fixed, or xml output is available) I'll have to do heal full. Not too big a deal
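For reference, the commands _dist is referring to (volume name is a placeholder):
    gluster volume heal myvol full               # force a full crawl, e.g. after a node was down
    gluster volume heal myvol info               # entries still pending heal (may show gfids)
    gluster volume heal myvol info split-brain   # entries detected as split-brain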
17:46 criticalhammer how much did it impact your servers?
17:46 criticalhammer how much resources where used?
17:47 kam270_ joined #gluster
17:48 _dist criticalhammer: no impact that I could measure, I honestly think that it was already healed but just reporting gfid instead of filename. I'm running 24 VMs on a 2 node replicate with an average read/write load of 50MBytes all day which definitely isn't stressing gluster. Full heals don't seem to create a bottleneck while in progress
17:48 _dist history shows that cpu usage went up by about 5% during the crawl
17:49 criticalhammer not bad
17:49 criticalhammer my dinky heals suck up about 10 to 20% of resources
17:49 criticalhammer but im running a 4 node setup on 2 core 4 gig 2 disk desktops
17:50 _dist could be a spec difference. My cpus are 12 core e5 xeons *2/server
17:50 _dist so realistically, 5% is several ghz of usage
17:50 criticalhammer oh it most definitely is a spec difference
17:50 criticalhammer im also healing a whole slew of file sizes and file counts
17:50 _dist but, my data is active VMs 3TB at least
17:51 _dist so, so far I'm very happy
17:51 criticalhammer good to hear
17:53 cfeller joined #gluster
17:55 brandonmckenzie joined #gluster
17:58 lpabon joined #gluster
18:06 kam270_ joined #gluster
18:07 FarbrorLeon joined #gluster
18:07 rpowell joined #gluster
18:11 rahulcs joined #gluster
18:16 discretestates joined #gluster
18:17 rahulcs joined #gluster
18:18 discretestates hey all.  anyone I can talk with about GSOC?
18:21 * semiosis points at johnmark
18:24 discretestates Thanks, I don't see him in the room
18:24 discretestates (I'm new to IRC)
18:24 discretestates Ah, he is an admin
18:25 semiosis i couldn't point at him if he wasn't here :)
18:25 semiosis afk, lunch
18:25 semiosis @deop johnmark
18:25 glusterbot semiosis: Error: You don't have the #gluster,op capability. If you think that you should have this capability, be sure that you are identified before trying again. The 'whoami' command can tell you if you're identified.
18:26 semiosis @deop johnmark
18:26 semiosis hahahaha
18:26 discretestates I just sent him a private message,  Thanks, semiosis!
18:26 semiosis yw
18:27 robo joined #gluster
18:31 dewey joined #gluster
18:33 rshade98 joined #gluster
18:40 haomaiwa_ joined #gluster
18:51 ccha joined #gluster
18:51 discretestates joined #gluster
19:01 gdubreui joined #gluster
19:12 JonnyNomad joined #gluster
19:15 rshade98 anyone see a time when gluster will allow some clients to connect, then stop accepting connections, then a volume stop/start allows it to accept connections again
19:15 rshade98 version 3.4.2
19:16 rshade98 brick info: http://www.fpaste.org/86478/17014813/
19:16 glusterbot Title: #86478 Fedora Project Pastebin (at www.fpaste.org)
19:18 JoeJulian rshade98: 98% of the time I've seen any connection issues it's been a network issue, firewall, routing, etc.
19:20 rshade98 All the boxes are running the same template, iptables is off, and I can netcat the ports
19:22 JoeJulian Check the client log for errors when the problem arises.
19:22 JoeJulian The other possibility that comes to mind based on your description is oom killer killing bricks.
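For reference, a rough sketch of the checks JoeJulian suggests; the client log file name follows the mount point and is a placeholder here:
    less /var/log/glusterfs/mnt-gluster.log      # client log, named after the mount point
    dmesg | grep -iE 'out of memory|oom|killed process'
    gluster volume status myvol                  # shows whether each brick process is online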
19:25 rshade98 it's saying connection timed out. But the netcat to the same port returns
19:25 semiosis ip address conflict?
19:25 JoeJulian NAT?
19:25 rshade98 and I have a telnet session open
19:26 rshade98 Semiosis all m3.large amazon instances
19:26 semiosis hah, definitely the network then!
19:26 JoeJulian hehe
19:26 semiosis i've seen some freaky sh*t on the ec2 lan
19:26 semiosis i stop/start the instance (to get a new hardware allocation) when i see that kinda thing
19:27 semiosis it's very rare, but in the ~3 years i've been in prod on ec2, seen it a few times
19:27 semiosis slightly more common are instances that can't resolve DNS
19:28 rshade98 you mean the clients or servers?
19:28 semiosis i mean *instances*
19:28 semiosis sometimes you get an ec2 instance with flaky IP or DNS
19:29 rshade98 Yeah, you mean relaunch all of them?
19:29 semiosis also seen one of my elasic IPs get squashed as if it was on a router taking a DoS
19:29 semiosis rshade98: no, just stop & start the one with the network problem
19:30 DV joined #gluster
19:30 B21956 joined #gluster
19:31 rshade98 I have 4 clients
19:31 robo joined #gluster
19:31 rshade98 its very reproducable
19:31 rwheeler joined #gluster
19:32 rshade98 I am gonna try a different instance type. I had very stable m1.large setups. I am wondering if its the m3.arch
19:34 semiosis all my gluster servers are m1.large and have been rock solid
19:35 semiosis rshade98: might want to launch ebs-optimized, while you're at it
19:35 semiosis i haven't done any perf tests though
19:37 rshade98 We have in the past. The data is small. Almost enough to fit in cache and its almost worm
19:40 jmarley joined #gluster
19:40 jmarley joined #gluster
19:41 jag3773 joined #gluster
19:45 seapasulli left #gluster
19:45 FarbrorLeon joined #gluster
19:55 Elico joined #gluster
19:56 voronaam joined #gluster
19:59 voronaam Hi all. I am running into a very odd problem I have no idea how to tackle. I have a distributed (not replicated) volume on two bricks. And there are two folders on it with files that are disappearing. If I restart gluster on the server to which all NFS clients are mounted, all the files are there. I can view them, copy somewhere. But at some point they are gone. If I am trying to access them I get an "No such file or directory" error. I will appreciate
20:00 rwheeler joined #gluster
20:00 semiosis voronaam: your message ended with "I will appreciate"
20:00 semiosis was there more?
20:01 Matthaeus semiosis: he will increase in value.  The message was complete.
20:01 voronaam Just "I will appreciate any ideas" :)
20:01 semiosis Matthaeus: haha
20:03 Matthaeus voronaam: What you're experiencing might be consistent with mixing up the brick mount with the volume mount.
20:04 voronaam Great! That is how I started to test it - self mount from the server - I guess I should not be doing that
20:04 chirino joined #gluster
20:04 voronaam Let me scrape that and redo the test from the client this time
20:04 Matthaeus Self mount from the server is fine, just make sure you're using the volume mount when you test.
20:05 Matthaeus Short of healing a split brain, there's never a good reason to be messing around in the brick mount.
20:05 voronaam I have it mounted to make sure the files are physically intact (and they are). I am only reading from there
20:15 FarbrorLeon joined #gluster
20:19 JoeJulian voronaam: Check your client log for the loss of connection to (a) brick(s). Then look at the brick logs at that timestamp.
20:20 robo joined #gluster
20:21 failshell joined #gluster
20:21 voronaam I narrowed it down a little. It happens after I try this command on any client: "find -type f -print 2> ~/err.log | wc -l" That was my idea on how to touch every file on the volume.
20:22 kam270_ joined #gluster
20:23 rshade98 So Semiosis, m1.large app servers(client) all connect fine
20:23 rshade98 looks like amazon ticket time
20:23 semiosis try on new m3.large as well, i still suspect it was a problem with a particular instance, not a type
20:24 semiosis and good luck with the amazon ticket!  i gave up on their worthless tech support looong ago
20:24 rshade98 there were 4 of them that broke, even if bad instance, thats a high failure rate
20:24 semiosis hmm strange
20:24 semiosis in different AZs?
20:25 rshade98 yes, across a/b/c
20:25 semiosis well ok then
20:29 seapasulli joined #gluster
20:31 MacWinner joined #gluster
20:31 kam270_ joined #gluster
20:33 FarbrorLeon joined #gluster
20:40 rshade98 just fired up 3 more m3.large and they can't connect
20:49 semiosis wow
20:49 uebera|| joined #gluster
20:49 uebera|| joined #gluster
20:51 _dist can someone tell me if me adding a 3rd replicate to my current 2 replicate that holds about 1TB of data is a terrible idea? I've heard tales of volumes being unusable during such things but never witnessed it myself (adding replicates has always been ok for us)
20:52 gmcwhistler joined #gluster
20:54 criticalhammer When I've added a replicate to an existing setup, the volume still works.
20:54 gmcwhistler joined #gluster
20:54 criticalhammer its just slower while servers copy data over
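For reference, a sketch of growing a replica 2 volume to replica 3 by adding one brick (3.3+ syntax; names are placeholders); as discussed above, the volume stays online while self-heal populates the new brick:
    gluster volume add-brick myvol replica 3 server3:/data/brick
    gluster volume heal myvol full               # kick off copying data onto the new brick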
20:55 badone joined #gluster
20:55 _dist criticalhammer: that's been my experience too, so I assume stories I've seen here have been due to bad network config or underpowered disks etc
20:55 criticalhammer theres more gluster administrative functions happening
20:55 criticalhammer along with raw copying of data
20:55 dewey I read an account of excessive CPU load for rebuild and some magic with cgroups which allowed $WORK to continue while rebuild happened (albeit slower).
20:56 criticalhammer yeah ive heard the same thing dewey
20:56 _dist I do like to cast spells
20:56 dewey I'm about to add 34T of bricks tomorrow morning so we'll see how it goes, though I'm pretty satisfied with my CPU and network bandwidth so I suspect I'll be disk limited.
20:56 criticalhammer whats the directory structure like?
20:57 criticalhammer more file system stuff = more hashing and more hardware use
20:57 kam270_ joined #gluster
20:58 dewey For me 178k files -- most tiny, some hundreds of GBs.
20:58 criticalhammer gl
20:58 dewey :-)
20:58 criticalhammer the largest ive done was 500kish with files spanning between 5k to 100 megs
20:59 criticalhammer it chokes on 1ge
20:59 criticalhammer and desktop hardware
20:59 criticalhammer but works
21:01 dewey I've got dual 10G, 48 7.2k RPM drives in RAID50s.  12 cores/32GB on the existing node and 16 cores/64GB on the node that I'm adding.
21:02 rotbeard joined #gluster
21:02 criticalhammer with that kind of hardware, my guess will be the network latency will be the bottle neck
21:02 dewey I am currently doing an rsync to a 2nd system for redundancy until the 2nd node gets in and I see it spike to 8 cores and around 2500 IOps/480MB/s while that's happening.
21:03 criticalhammer yeah thats some beefy numbers
21:03 dewey During rsync I think I'm limited by latency of the receiving disks.  My network sees about 360MB/s.
21:03 _dist dewey my setup is similar, and my bottleneck is definitely sata io, I don't hit the dual 10gb
21:04 _dist same here, 300-400MB/s
21:04 criticalhammer its not so much network throughput, its the 2 to 4 ms latency found in store and forward technologies
21:04 criticalhammer ie ethernet
21:04 dewey Let me check my wio times...
21:05 _dist personally I find gluster _never_ acts as aggressively as an rsync push. I can't imagine hitting numbers like that during a replica add
21:05 dewey Definitely looks SATA limited.  I'm getting ~400% I/O wait (i.e. 4 cores waiting) during a sync.
21:05 criticalhammer wow dewey
21:05 dewey Well, that's not a bad thing as I'm expecting sync to take a few days and this still needs to be operational.
21:07 dewey (That CPU total and wio is from my receiving server, which has mounted gluster and is doing a "local" rsync.  My Gluster node stays around 125%).
21:14 rshade98 semiosis, I will let you know what response I get
21:14 semiosis rshade98: please do!
21:14 semiosis thanks
21:18 discretestates joined #gluster
21:18 kam270_ joined #gluster
21:19 y4m4 joined #gluster
21:20 FarbrorLeon joined #gluster
21:21 y4m4 joined #gluster
21:21 coredump joined #gluster
21:25 y4m4 joined #gluster
21:26 kam270_ joined #gluster
21:27 failshell gluster volume geo-replication status --xml <- should that work?
21:30 failshel_ joined #gluster
21:32 robo joined #gluster
21:34 layer3switch joined #gluster
21:34 kam270_ joined #gluster
21:34 semiosis if you have libxml2 installed, then --xml *should* work... however istr a couple commands that just don't support --xml, you may have found one
21:35 semiosis please check & file a bug if there's not one already
21:35 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
21:35 semiosis @later tell failshell if you have libxml2 installed, then --xml *should* work... however istr a couple commands that just don't support --xml, you may have found one. please check & file a bug if there's not one already
21:35 glusterbot semiosis: The operation succeeded.
21:38 Joe630 can you guys remind me what I need to do to change the ping times?
21:38 Joe630 for failover
21:38 Joe630 the 42 second one.
21:38 Joe630 I lost my notes. :/
21:39 Joe630 I found it
21:42 y4m4 joined #gluster
21:43 Joe630 network.ping-timeout
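For reference (volume name and value are placeholders; the default Joe630 mentions is 42 seconds):
    gluster volume set myvol network.ping-timeout 10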
22:02 zerick joined #gluster
22:02 rwheeler joined #gluster
22:15 davinder joined #gluster
22:26 m0zes_ joined #gluster
22:34 theron joined #gluster
22:36 m0zes_ joined #gluster
22:41 rshade98 semiosis. It looks like a jumbo frame issue
22:41 wrale another ovirt+gluster question for anyone who would know.. I'm running the engine-install script.. It wants to know what will be my "default storage type".  Selections are: NFS, FC, ISCSI, POSIXFS.  The default is NFS.  I want my default to be gluster.  What should I choose?
22:41 semiosis rshade98: amazing
22:42 rshade98 incase you run into it: ethtool --offload eth0 tso off
22:42 rshade98 ifconfig eth0 mtu 1500
22:43 semiosis you're effectively disabling jumbo frames?
22:44 m0zes joined #gluster
22:44 rshade98 I guess. I am a little rusty on the tso stuff
22:45 semiosis i have no experience with jumbo frames, dont know what tso is... was just guessing based on mtu
22:45 semiosis anyway, time to go.  glad you figured it out & thanks for the follow up
22:54 m0zes joined #gluster
22:57 robo joined #gluster
22:57 andreask joined #gluster
22:59 m0zes joined #gluster
23:01 badone joined #gluster
23:04 m0zes_ joined #gluster
23:08 robo joined #gluster
23:09 m0zes joined #gluster
23:27 robo joined #gluster
23:28 m0zes_ joined #gluster
23:32 jag3773 joined #gluster
23:35 robo joined #gluster
23:36 seapasulli left #gluster
23:39 rahulcs joined #gluster
23:39 Elico joined #gluster
23:43 robo joined #gluster
23:47 kam270_ joined #gluster
23:50 m0zes joined #gluster
23:55 kam270_ joined #gluster
