IRC log for #gluster, 2013-10-31

All times shown according to UTC.

Time Nick Message
00:02 mjrosenb but that call to mount shouldn't do anything over the network, right?
00:04 JoeJulian The fuse mount will connect to the glusterfs client which will attempt to get the configuration over the network.
00:04 mjrosenb ahh, so it'll connect to the client, which called it?
00:05 JoeJulian right
00:05 mjrosenb also, it seemed to be calling umount at the same time.
00:05 mjrosenb so, can I restart glusterd without bringing down the whole brick?
00:06 JoeJulian Yes.*
00:06 StarBeast joined #gluster
00:07 JoeJulian * possibly coming soon to an rpm near you, using service/initctl to stop or restart glusterd will restart everything. We'll have to start killing glusterd by hand if we want to restart just that.
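A minimal sketch of what "killing glusterd by hand" looks like, leaving the glusterfsd brick processes untouched; exact service handling varies by distribution, so treat this as an illustration rather than the packaged behaviour:

    pkill -x glusterd        # stop only the management daemon; -x avoids matching glusterfsd
    pgrep -l glusterfsd      # the brick processes should still be listed here
    glusterd                 # start the management daemon again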
00:10 StarBeast Hi. I'm experiencing trouble with 2 of 10 bricks in a distributed replica with factor 2. Load on the 2 servers holding these bricks is huge and I see this error message in glustershd.log: inode link failed on the inode (00000000-0000-0000-0000-000000000000)
00:10 mjrosenb in the past, i've had issues where clients get rather confused when a brick goes away.
00:10 StarBeast Any ideas what is wrong?
00:11 _Scotty joined #gluster
00:12 _Scotty Good evening, all
00:14 _Scotty Was an admin manual released for version 3.4?
00:15 _Scotty I'm trying to find updated documentation to determine how to deploy a 20 server distributed replicated cluster.  I'm also looking for extra tips on small file performance.
00:27 dbruhn there is the 3.3.0 manual, and the wiki has most of the new information
00:28 dbruhn how much data are you looking at for the small file performance?
00:28 dbruhn and what kind of data
00:31 bivak joined #gluster
00:32 _Scotty The wiki on gluster.org specifically?  Just checking.
00:32 dbruhn yep
00:34 _Scotty Thanks.  Well, the data question is interesting.  My users work on speech and text.  There's around 40TB of that kind of corpora, some of it larger files ~50MB each and some of it is very small <1k.  Total is about 500 million files.  I'm using two NFS servers with 80 drives each and they are just dog slow doing even simple directory listings.
00:35 _Scotty I've thought about doubling or tripling the number of spindles just to help with IOPS, but you can only scale a server so far before the I/O bus is the bottleneck.
00:35 _Scotty And besides, it's just staving off the inevitable.
00:36 dbruhn gah, worst of all worlds.
00:36 _Scotty Yeah, it really stinks.  Commercial vendors have solutions but they are 10x my yearly budget.
00:36 dbruhn the problem with gluster is that any operation that stats a file causes it to self heal
00:36 dbruhn which is a check against its replica, if you are using replication.
00:37 _Scotty so doing something like "ls -l" triggers a self heal?
00:37 dbruhn yep
00:37 _Scotty darn
00:37 dbruhn on every file in the directory
00:37 dbruhn I have been told that NFS dramatically improves this landscape
00:37 dbruhn but I am using the Gluster Fuse client myself
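A rough way to see the effect being discussed: time a metadata-heavy listing through the FUSE mount and look at what the self-heal machinery reports afterwards; the volume name and path below are placeholders:

    time ls -l /mnt/myvol/somedir > /dev/null
    gluster volume heal myvol info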
00:38 _Scotty Well, if you use NFS, you have to put it behind some kind of load balancer, yeah?  Otherwise one server would get all the requests
00:38 dbruhn I will say that network, and spindles all contribute
00:38 dbruhn yep
00:38 _Scotty Network's not an issue, it's all 10GbE
00:38 dbruhn well... I could say there are even better things to resolve the issues.
00:39 dbruhn how many clients are connecting?
00:39 _Scotty Around 250.
00:39 dbruhn all on 10GB?
00:39 _Scotty Yes
00:39 dbruhn nice
00:39 _Scotty A mini HPC cluster.
00:40 dbruhn someone else can probably speak to NFS performance with replication
00:40 _Scotty End users connect through gigabit to a few edge servers, so I'm not as worried about bandwidth there
00:40 dbruhn i've combated the issue by using 40GB InfiniBand
00:40 dbruhn agh, well here is the issue
00:40 dbruhn the client actually connects to all of the servers in the cluster
00:40 _Scotty They are primarily going to care if it takes longer than it does now to do directory listings, compile code, things like that.  The storage is used for both general purpose computing and HPC I/O
00:41 dbruhn so when the client requests a file, it actually hits all of the servers hosting bricks that store the data and pulls it back from the fastest responders between the replicant servers
00:41 _Scotty Ah.
00:41 _Scotty Well I can put the edge servers on 10Gb too if that would help
00:42 dbruhn is there any reason you have the HPC and user data on the same storage?
00:43 _Scotty Just how it has grown over time.  It's mostly user created code running grid jobs.  They're going to want to traverse the output and input files, manipulate them, re-run the jobs, etc.
00:43 dbruhn Also, something to take into consideration is the spindles, faster to respond means faster to deliver
00:43 dbruhn so 15k sas/ sad will help you a bunch
00:43 dbruhn s /sad/ssd
00:44 dbruhn replication does help read speeds though, so it gives you a much faster response when pulling the data and reading it
00:44 _Scotty Agreed.  The NFS servers have a lot of RAM, SSDs, and are using ZFS
00:45 _Scotty Write speed only matters when they compile code.  I don't care as much about grid jobs writing out, outside of overall job execution time
00:46 _Scotty So for instance if it takes 5 minutes to compile on local disk and an hour to compile on Gluster, they'd care about that a great deal.
00:47 dbruhn totally understood, throughput is not really the issue the system will scale as you add resources
00:48 dbruhn the nice thing about your instance is that Gluster performs better the more clients you throw at it
00:49 _Scotty Yeah, around a year ago I compared Gluster and other systems like OrangeFS and they were similar in performance.  I haven't tested 3.4 yet though.
00:50 mjrosenb orangefs sounds new.
00:50 dbruhn I am still on 3.3.2 because I am bound by RDMA not working in 3.4.1 yet. Sounds like there have been some nice performance increases in 3.4.1
00:51 _Scotty OrangeFS works pretty well up to the point of having over a few hundred million files, after that performance tends to tank, even with distributed metadata.
00:51 * mjrosenb is still on 3.3.0 because I am a lazy person and haven't ported my patches to 3.4
00:51 mjrosenb i'm also on an un-optimized build because once again, laziness.
00:52 dbruhn if it makes you feel any better I am about 50 million files in 20TB on one system, and 1 billion files on a 39TB system
00:52 _Scotty The main problem I've found with any distributed filesystem (minus Gluster, perhaps) is that the concept of "large filesystem" seems to mean "I have a few thousand huge files totaling hundreds of TB" - they never consider that someone may have hundreds of TB in millions of files
00:52 Skaag joined #gluster
00:52 _Scotty Well with those 50m files, what's your performance like?
00:53 _Scotty and the billion file system?
00:53 _Scotty and how many servers/bricks total?
00:53 dbruhn the 50m system is 10 servers running 15k sas on 40Gb InfiniBand over RDMA
00:53 _Scotty One of the vendors actually asked me why anyone would ever do that in HPC?  Well gee, I dunno, you generate data.
00:54 _Scotty Ah.
00:54 dbruhn I can open and read the header of every single file in the system in about 4 days
00:54 _Scotty 4 days?!
00:54 dbruhn Each server is a client as well, which my application stack accesses the data through
00:55 dbruhn my bigger system, is 7200 RPM SATA, I run 3TB drives, QDR Infiniband as well, I only allow the drives to fill to 50%, and it takes about 3 weeks for the same operations as mentioned before
00:55 _Scotty I see.  4 days seems a little, long, perhaps for 10 servers in a cluster.
00:56 dbruhn that check is reading the header and checking the consistency against a DB
00:56 _Scotty Ah.
00:56 dbruhn the servers are all 2u 12 drive servers with hardware raid
00:57 dbruhn i usually break the server into two logical drives that I treat as bricks
00:57 dbruhn running raid5 on each brick
00:57 _Scotty So something like a Dell R720 you mean?
00:58 dbruhn yep I have a stack of r720xd's
00:58 _Scotty Ah.
00:58 dbruhn and then another stack of DL180's or something like that
00:59 _Scotty What type of gluster volumes did you create?
00:59 dbruhn if you have priority processing that you can keep close, Infiniband's latency characteristics are way better than 10gbe
00:59 dbruhn dht - replica 2
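For reference, a distribute + replica 2 ("dht - replica 2") volume is created by listing the bricks in replica pairs; the hostnames and brick paths here are placeholders, not dbruhn's actual layout:

    gluster volume create myvol replica 2 \
        server1:/bricks/b1 server2:/bricks/b1 \
        server3:/bricks/b2 server4:/bricks/b2
    gluster volume start myvol
    gluster volume info myvol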
01:00 _Scotty Well, latency is reasonable for 10Gb (DAC, not base T) and at least for us was significantly cheaper than IB
01:00 dbruhn i found QDR IB was about the same price as 10GB
01:00 DV_ joined #gluster
01:02 dbruhn the reason I mention the latency is it dramatically reduces the time it takes for those stat based operations
01:02 _Scotty To digress quickly, what is your average latency between IB nodes?  Our implementation of 10Gb is around .05ms
01:03 dbruhn I am not running IP, so it would take me a bit to take a look, brb
01:04 dbruhn gotta log in from another computer to check
01:04 _Scotty thanks!
01:07 dbruhn I am at ~0.1 but I am also in my peak usage time of the day
01:08 _Scotty 0.1 ms?
01:08 dbruhn I have about 140 processes running on that system right now
01:08 dbruhn yep
01:08 _Scotty how many clients?
01:08 dbruhn 10
01:08 mjrosenb dbruhn: that seems much faster than it takes for me to stat every file in my relatively small directory.
01:08 _Scotty That's pretty reasonable
01:09 dbruhn yeah not terrible
01:10 dbruhn I will say though, if I traverse the file system and open a directory with a couple hundred files in it, it still takes a while for it to load
01:10 dbruhn and can be annoying from that perspective
01:11 mjrosenb 50 million / 4 days => 144 per second.
01:11 mjrosenb https://gist.github.com/7242997
01:11 glusterbot Title: xcut (at gist.github.com)
01:11 dbruhn mjrosenb, the other caveat to that measurement is that the processing jobs running those checks get interrupted by client-side requests
01:12 mjrosenb that is 35 per second, to just stat the files.
01:12 dbruhn which happens every day at some point
01:15 _Scotty When you say it takes a while to load, how long are we talking?
01:16 _Scotty Like mjrosenb's example?  55 seconds?
01:17 dbruhn let me find a directory with a lot of files and run the same command he used
01:18 StarBeast Hello all. Quick question. I have a Gluster distributed replica with factor 2. heal info shows that Gluster has been healing one 20G file for about an hour already. I checked this file in the export directories of both replica servers and the size is exactly the same. What if I delete one of these copies? Will Gluster re-heal it?
01:18 dbruhn http://fpaste.org/50655/38318230/
01:18 glusterbot Title: #50655 Fedora Project Pastebin (at fpaste.org)
01:20 _Scotty dbruhn: that timing at 1.8 seconds looks perfectly acceptable to me.
01:20 mjrosenb dbruhn: that is an order of magnitude faster than mine.
01:20 dbruhn yeah, not terrible, but it cached it first
01:20 mjrosenb although I just ran the command again, and it cut down to 20 seconds.
01:20 DV_ joined #gluster
01:21 _Scotty dbruhn: the IO cache translator?
01:37 _Scotty Well, thanks everyone.  I'll try and jump back on tomorrow.  Cheers.
01:39 sgowda joined #gluster
01:41 gluster-favorite joined #gluster
01:42 asias joined #gluster
01:45 dbruhn mjrosenb, yeah the second pass is always faster
01:46 dbruhn here is the second pass
01:46 dbruhn real    0m11.401s
01:46 dbruhn user    0m0.009s
01:46 dbruhn sys     0m0.020s
01:46 dbruhn [root@ENTSNV04009EP 1]#
01:46 dbruhn after the cache has cleared
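A sketch of the cold-versus-warm comparison being made here; dropping the kernel caches on the client (as root) approximates the cold case, and the mount path is a placeholder:

    sync; echo 3 > /proc/sys/vm/drop_caches     # flush page/dentry/inode caches on the client
    time ls -l /mnt/myvol/bigdir > /dev/null    # cold pass
    time ls -l /mnt/myvol/bigdir > /dev/null    # warm pass, mostly served from cache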
01:52 aixsyd_ joined #gluster
01:53 aixsyd JoeJulian: you around, bromosapien? :P
01:54 aixsyd why in gods name would you implement GlusterFS as a storage spot in Proxmox without the ability for the FUSE Client to see more than one server? I'm at a loss.
02:36 satheesh1 joined #gluster
02:39 johnbot11 joined #gluster
02:42 johnbot11 joined #gluster
02:44 vpshastry joined #gluster
03:08 vpshastry left #gluster
03:12 satheesh joined #gluster
03:16 nueces joined #gluster
03:19 shubhendu joined #gluster
03:20 nueces joined #gluster
03:24 bharata-rao joined #gluster
03:26 RameshN joined #gluster
03:28 dusmant joined #gluster
03:31 satheesh1 joined #gluster
03:32 kshlm joined #gluster
03:52 satheesh joined #gluster
03:57 nueces joined #gluster
03:59 sgowda joined #gluster
04:14 satheesh joined #gluster
04:21 kopke joined #gluster
04:25 masterzen joined #gluster
04:29 mohankumar joined #gluster
04:40 ppai joined #gluster
04:45 vpshastry joined #gluster
04:45 lalatenduM joined #gluster
04:47 kopke joined #gluster
04:54 psharma joined #gluster
04:57 kanagaraj joined #gluster
04:58 sgowda joined #gluster
05:01 kopke joined #gluster
05:01 sgowda joined #gluster
05:06 zerick joined #gluster
05:12 hagarth joined #gluster
05:12 satheesh joined #gluster
05:14 marcoceppi joined #gluster
05:14 marcoceppi joined #gluster
05:14 kopke joined #gluster
05:23 shylesh joined #gluster
05:35 nshaikh joined #gluster
05:37 raghu joined #gluster
05:39 DV_ joined #gluster
05:42 satheesh joined #gluster
05:42 aravindavk joined #gluster
05:49 satheesh joined #gluster
05:51 bulde joined #gluster
05:53 rjoseph joined #gluster
05:58 CheRi joined #gluster
06:00 kopke joined #gluster
06:06 satheesh1 joined #gluster
06:12 yosafbridge joined #gluster
06:12 jfield joined #gluster
06:15 glusterbot New news from newglusterbugs: [Bug 990028] enable gfid to path conversion <http://goo.gl/1HwiQc> || [Bug 969461] RFE: Quota fixes <http://goo.gl/XFSM4>
06:16 Amanda joined #gluster
06:16 JonathanD joined #gluster
06:18 kopke joined #gluster
06:21 ababu joined #gluster
06:27 mohankumar joined #gluster
06:34 ajha joined #gluster
06:39 kopke joined #gluster
06:40 sgowda joined #gluster
06:42 dusmant joined #gluster
06:42 harish_ joined #gluster
06:42 ricky-ticky joined #gluster
06:42 vimal joined #gluster
06:43 psharma_ joined #gluster
06:45 rjoseph joined #gluster
07:10 Dga joined #gluster
07:12 ngoswami joined #gluster
07:13 mohankumar joined #gluster
07:25 jtux joined #gluster
07:28 hateya joined #gluster
07:45 rjoseph joined #gluster
07:58 rgustafs joined #gluster
08:00 andreask joined #gluster
08:01 ntt_ joined #gluster
08:02 franc joined #gluster
08:02 ntt_ Hi. Is there a way to list connected clients for a volume?
08:03 eseyman joined #gluster
08:05 ekuric joined #gluster
08:07 harish_ joined #gluster
08:07 ctria joined #gluster
08:09 samppah ntt_: gluster volume status volName clients
08:12 ntt_ samppah: thank you. i was using the wrong syntax
08:17 mbukatov joined #gluster
08:19 keytab joined #gluster
08:28 asias joined #gluster
08:32 calum_ joined #gluster
08:34 satheesh1 joined #gluster
08:35 kaushal_ joined #gluster
08:37 shylesh joined #gluster
08:37 rastar joined #gluster
08:54 mgebbe__ joined #gluster
08:59 satheesh joined #gluster
09:03 hngkr_ joined #gluster
09:05 dusmant joined #gluster
09:10 nasso joined #gluster
09:21 DV_ joined #gluster
09:22 DV_ joined #gluster
09:23 DV_ joined #gluster
09:27 DV_ joined #gluster
09:28 DV__ joined #gluster
09:33 kaushal_ joined #gluster
09:34 vpshastry joined #gluster
09:49 satheesh joined #gluster
09:52 vpshastry joined #gluster
09:56 harish_ joined #gluster
10:00 lalatenduM joined #gluster
10:05 itisravi joined #gluster
10:05 bulde joined #gluster
10:12 ninkotech joined #gluster
10:16 kanagaraj joined #gluster
10:21 kanagaraj joined #gluster
10:36 kaushal_ joined #gluster
10:46 ntt_ I have a constant error in my logs: no active sinks for performing self-heal on file. Can someone help me? Why does this happen?
10:46 ntt_ my config = replica 2. My clients use the native glusterfs mount
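When chasing a message like that, the heal status commands in the 3.3/3.4 releases give a first look at what is pending, failed, or split-brain; the volume name is a placeholder:

    gluster volume heal myvol info
    gluster volume heal myvol info split-brain
    gluster volume heal myvol info heal-failed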
10:51 psharma joined #gluster
11:08 dusmant joined #gluster
11:10 RedShift joined #gluster
11:12 chirino joined #gluster
11:17 ndarshan joined #gluster
11:18 shubhendu joined #gluster
11:21 RameshN joined #gluster
11:21 CheRi joined #gluster
11:21 ppai joined #gluster
11:23 aravindavk joined #gluster
11:23 harish_ joined #gluster
11:24 ababu joined #gluster
11:24 hagarth joined #gluster
11:28 ntt_ I think my problems are related to mounting with the gluster native client. Can someone help me with a replica 2 volume on an xfs filesystem? I get a lot of posix: "write failed: offset 65536, Invalid argument"
11:38 andreask joined #gluster
11:52 andreask joined #gluster
11:54 RameshN joined #gluster
11:57 shubhendu joined #gluster
11:58 ninkotech__ joined #gluster
11:58 ninkotech joined #gluster
11:59 yosafbridge joined #gluster
12:15 P0w3r3d joined #gluster
12:19 ndarshan joined #gluster
12:24 ricky-ticky joined #gluster
12:29 andreask joined #gluster
12:30 ira joined #gluster
12:31 rcheleguini joined #gluster
12:39 partner umm perhaps a stupid faq question, but the 3.4.1 debian package changelog says "Move /var/log/glusterfs directory creation completely to the glusterfs-server package." and as i am not installing the server on the client side i get errors when trying to mount the volume..
12:39 ninkotech joined #gluster
12:40 partner ERROR: failed to create logfile "/var/log/glusterfs/myvol.log" (No such file or directory)
12:40 partner ERROR: failed to open logfile /var/log/glusterfs/myvol.log
12:40 partner best part being the last line saying: Mount failed. Please check the log file for more details.
12:41 partner i guess i am now supposed to install the server too to the client? shouldn't the client package then depend on the server?
12:42 aixsyd_ joined #gluster
12:42 kkeithley you should be able to install just the client, without the server. Isn't there a -common dpkg that -client depends on? Seems to me that creating /var/log/glusterfs should be in -common
12:42 B21956 joined #gluster
12:42 CheRi joined #gluster
12:42 partner there is a client and common package installed which is enough for older installations
12:43 partner though that change is ancient, it's possible i have some other issue here as the change was introduced already in 3.3.0
12:43 ricky-ticky joined #gluster
12:44 kkeithley I wonder if that's copied from semiosis' ubuntu dkpgs.
12:44 partner but it happens with the official packages too, i just heard from a colleague he couldn't mount without installing server to the client-box
12:45 partner obviously mkdir /var/log/glusterfs fixed the problem but perhaps all the packages that attempt to write there and don't depend on the one actually creating it should do mkdir -p /var/log/glusterfs just in case
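A sketch of that workaround on a client box that only has the client/common packages installed; the server name and mount point are placeholders, and the dpkg check simply confirms whether any installed package ships the directory:

    dpkg -L glusterfs-common glusterfs-client | grep -F /var/log/glusterfs
    mkdir -p /var/log/glusterfs                    # only needed if the check above finds nothing
    mount -t glusterfs server1:/myvol /mnt/myvol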
12:48 partner just wondering/fyi
12:48 partner i thought there was something wrong with my newly built 3.4.1 squeeze package but it turned out to be a wider issue
12:48 kkeithley which official packages are you referring to? E.g. in the Fedora and EPEL rpms /var/log/glusters is created by the glusterfs (i.e. common) RPM.
12:49 partner actually at least the official 3.3.2 client does list those as package content
12:49 partner by official i here refer to download.gluster.org provided packages
12:50 kkeithley dpkgs? rpms?
12:50 partner 3.4.0 doesn't nor 3.4.1, maybe it was forgotten by accident
12:50 partner i said debian earlier
12:50 partner so deb yes
12:51 bennyturns joined #gluster
12:55 kkeithley Well, different people produce the Debian dpkgs than the Ubuntu dpkgs, etc. "Official" packages for debian doesn't really mean a whole lot. The "Official" rpms don't have this problem. I suggest reporting this bug to the person who produced the Debian dpkgs.
12:55 partner yeah, he is here, hence i said about it here
12:55 partner i didn't know about that so i didn't patch my build either, could have otherwise provided some patch for it but lets discuss it first
13:01 ricky-ticky joined #gluster
13:03 Kodiak1 So I stood up a 2 node Gluster test instance last night to start getting a feel for it in advance of Openstack work I'll be doing.  I noticed that if I fail one of the two nodes during a large file transfer, the copy into gluster hangs for 30-50s.  I also noticed that if I bring the 2nd node back up, when it joins Gluster again, the copy into gluster again hangs for 30-50s.  I didn't expect this behavior - is it normal?  Thanks!
13:04 shubhendu joined #gluster
13:10 ricky-ticky joined #gluster
13:11 kkeithley Kodiak1: that's normal. The default TCP timeout is 42 sec. You can configure a shorter timeout. You won't see that happen, e.g. if you stop the volume normally.
13:11 andreask joined #gluster
13:13 Kodiak1 Interesting - thanks!  I'll note that for scheduled maintenance.  Right now I'm looking at how things respond when hosts fall over (a big problem w/ OpenAFS sometimes)
13:16 ricky-ticky joined #gluster
13:19 plarsen joined #gluster
13:22 ricky-ticky joined #gluster
13:24 zerick joined #gluster
13:26 andreask joined #gluster
13:27 ricky-ticky joined #gluster
13:28 rwheeler_ joined #gluster
13:29 aixsyd kkeithley:  how do you configure a shorter timeout?
13:34 kkeithley gluster volume $volname set network.ping-timeout 30
13:34 kkeithley er, gluster volume set $volname network.ping-timeout $timeout
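Putting the corrected syntax together with a quick check; 30 seconds is just an example value (see glusterbot's caveat about short timeouts a few lines below), and the volume name is a placeholder:

    gluster volume set myvol network.ping-timeout 30
    gluster volume info myvol | grep ping-timeout   # shows up under "Options Reconfigured"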
13:35 ricky-ticky joined #gluster
13:37 JoeJulian ~ ping-timeout | aixsyd
13:37 glusterbot aixsyd: The reason for the long (42 second) ping-timeout is because re-establishing fd's and locks can be a very expensive operation. Allowing a longer time to reestablish connections is logical, unless you have servers that frequently die.
13:37 chirino joined #gluster
13:39 pkoro joined #gluster
13:40 aixsyd JoeJulian: awesome.
13:40 ricky-ticky joined #gluster
13:40 aixsyd JoeJulian: I was reading (i believe) your post about rrDNA w/ GlusterFS - i assume theres no way to tell DNS when a node has failed and tell it to switch the secondary IP to its primary, correct?
13:40 aixsyd *rrDNS
13:41 JoeJulian aixsyd: Are you referring to connecting via nfs?
13:41 kkeithley My DNA is already twisted
13:41 JoeJulian hehe
13:41 aixsyd kkeithley:  :P
13:41 yongtaof joined #gluster
13:41 JoeJulian kkeithley: "I think it is unfair to ask for support..." to ME! :D
13:41 aixsyd JoeJulian: negaive - when connecting via FUSE
13:42 kkeithley huh?
13:42 aixsyd *negative - wow I can type this morning
13:43 JoeJulian aixsyd: Then there's no reason to do that. The whole concept is that it'll work through the list of A records until it connects (provided you set enough fetch-attempts).
13:44 JoeJulian kkeithley: That damned systemd bug. His last comment started with "I think it is unfair to ask for support..." directed at me. Like I ask for support. :D
13:44 aixsyd JoeJulian: I attempted to connect Proxmox to a "gluster" A name with both my gluster1 and gluster2 IP's as IP's for it - and when I take the first IP offline, Proxmox's storage hangs, the VM hangs and hates me greatly
13:45 JoeJulian aixsyd: add fetch-attempts=2 to your fstab options.
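A sketch of the fstab entry being suggested, using a round-robin DNS name; the hostname and mount point are placeholders (JoeJulian's own fstab line with fetch-attempts appears later in the log):

    gluster.example.com:/myvol  /mnt/myvol  glusterfs  defaults,_netdev,fetch-attempts=3  0 0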
13:45 kkeithley the glusterfsd start/stop bug?
13:45 JoeJulian yeah
13:45 JoeJulian bug 1022542
13:45 glusterbot Bug http://goo.gl/8UhTjA unspecified, unspecified, ---, ndevos, ON_QA , glusterfsd stop command does not stop bricks
13:45 kaptk2 joined #gluster
13:45 aixsyd JoeJulian: any reason to stay with two and not more?
13:46 JoeJulian You only referenced having two servers.
13:46 dbruhn is there a "maintenance mode" that you can put a server/brick in that will finish all writes and then disconnect?
13:46 aixsyd I gotcha.
13:46 dbruhn a safe shutdown of a single brick so to speak
13:47 JoeJulian dbruhn: I'm /mostly/ sure that a SIGTERM does that.
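A sketch of that idea, with the same "mostly sure" caveat: find the glusterfsd process serving one brick and send it SIGTERM; the brick path is a placeholder and this should be tried on a test volume first:

    pgrep -lf 'glusterfsd.*bricks/b1'   # identify the brick process and its pid
    kill -TERM <pid>                    # ask that one brick to shut down cleanly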
13:47 aixsyd JoeJulian: due to how proxmox shares storage, there isnt actually a central fstab to edit to make that change. :'(
13:48 aixsyd unless I manually mount it on every server - and then proxmox tends to show weird things - like a 50gb replicated volume as if it was a distributed 100gb volume...
13:48 JoeJulian That does sound rather sucky.
13:49 aixsyd it does - their rollout of their GlusterFS support has been rather sucky, at least at the GUI - and you really do need the GUI for Proxmox - on the whole
13:49 JoeJulian The only other option I can think of would be to use heartbeat to manage adding and removing A records. :(
13:49 dbruhn How does one put a feature request in? just a bug report?
13:49 JoeJulian dbruhn: yes
13:50 dbruhn kk
13:50 JoeJulian typically the description starts with "enhancement:"
13:50 dbruhn roger
13:50 dbruhn been working through a couple thousand split-brain's
13:50 dbruhn post all of my previous noise
13:51 JoeJulian ugh... probably going to request the same enhancement I've been requesting since 3.0...
13:51 dbruhn what's that?
13:51 ricky-ticky joined #gluster
13:51 JoeJulian Some client-side way of selecting the valid copy.
13:52 kkeithley heh, his bedside manner leaves a lot to be desired. OTOH I wasn't aware that systemd could do updates during system startup. (Someday I should read the systemd docs.)  Is that something that would be interesting?
13:52 bennyturns joined #gluster
13:53 dbruhn the problem is the only thing for me accessing my storage is my application stack, so I was going to put in a feature request for a pre defined selection based off of age,size, etc
13:53 ricky-ticky joined #gluster
13:53 JoeJulian kkeithley: I can't (off the top of my head) think of anything inherently bad about doing the update at startup.
13:54 dbruhn you haven't had to deal with windows since windows 7 then....
13:54 JoeJulian kkeithley: ... it's a little ironic. His blog is titled, "The Humble Geek". :D
13:54 dbruhn every damn time the machine reboots the autoupdates happen
13:54 harish_ joined #gluster
13:55 JoeJulian dbruhn: I know.... every time I reboot into windows to play some game, I have to wait for the damned updates... and why does "15%" take 90% of the time...
13:56 dbruhn I have a single windows machine in my recording studio that doesn't get used very often so I actually power it down, and every time I turn it off or on I have to wait an extra 10/15 min for updates, so annoying.
13:58 franc joined #gluster
13:58 franc joined #gluster
13:59 onny joined #gluster
14:02 ricky-ticky joined #gluster
14:03 kkeithley I'm kinda hoping that the yum update merely downloads and schedules it to be applied at the next reboot.
14:03 lalatenduM joined #gluster
14:09 JoeJulian kkeithley: packagekit...
14:10 JoeJulian kkeithley: I think that refers to http://fedoraproject.org/wiki/Features/OfflineSystemUpdates
14:10 glusterbot <http://goo.gl/Kd2VJE> (at fedoraproject.org)
14:12 JoeJulian Which is all fine and dandy except who needs gnome on a storage server?
14:12 kkeithley ugh
14:14 JoeJulian kkeithley: otoh, what about setting a sysconfig value to affect whether or not glusterfsd is killed on a restart?
14:16 Kodiak1 kkeithley:  We wrote something simple that does what you're looking for.  We call it Updater.  It works in tandem with our own scheduler called Rebooter.  (We use very fancy names for our in-house apps)
14:17 Kodiak1 Best part is that it only updates on boot if it's in its window, so boot-ups from a recovered hang, accidental reboot, or midday desperation reboot all come back up quickly
14:17 wushudoin joined #gluster
14:19 Kodiak1 At home I prefer Scientific Linux's update automation options.
14:19 kkeithley My quick read suggests that the gnome part is the additional choice of "install updates and restart"   A sysconfig value would undoubtedly work too, might be more than sufficient. I expect we will want the same behaviour for nfs-ganesha too.
14:20 kkeithley Kodiak1: agreed
14:22 JoeJulian kkeithley: Also, in case you're unaware, pranithk has been working on a feature that should solve my biggest concerns with bug 872601
14:22 glusterbot Bug http://goo.gl/sZgPw unspecified, unspecified, ---, pkarampu, ASSIGNED , split-brain caused by %preun% script if server rpm is upgraded during self-heal
14:24 JoeJulian ... I think... now I have to see what this actually does...
14:24 kkeithley I was dimly aware of it, not following it very closely
14:24 kkeithley too many irons in the fire
14:24 JoeJulian I'm sure.
14:31 Kodiak1 I'd like to know how people are setting up their small scale (perhaps at home) Gluster test instances.  I'd like to use both Gluster and Openstack at home with some minimal degree of usability performance wise.  I've got Openstack cloud control and compute sorted and everything is on switched GbE.  I'm thinking for Gluster I can get away with a single storage hypervisor running KVM, and 2 Gluster VMs each with a dedicated GbE and 2 disk SATA RAID0
14:32 edward1 joined #gluster
14:32 JoeJulian The potential problem I see with running the gluster server in a VM under openstack is that openstack will spin up the VM on whichever compute node it likes (based on a set of rules, of course). So there's no guarantee that your VM will be on the compute node that has the attached storage.
14:33 hngkr_ joined #gluster
14:35 Kodiak1 Oh sorry I wasn't clear - this'll be on a dedicated storage hypervisor outside of my other physical hosts that power openstack.  It's going to be an AMD quad w/ 16GB RAM, 3 GbE (1 onboard, one 2 port HP HBA), and 4 SATA disks divided into 2 RAID0 drives (with a separate drive for hypervisor OS), so each Gluster VM will get a dedicated GbE and a dedicated RAID0 drive.
14:36 Kodiak1 I tried a similar setup at home last night on my main hypervisor with shared disks and got what I expected, which was pretty poor transfer rates.  That has obvious causes rooted in the guests, host, gluster virtio drives, and transferred files all residing on a 2 disk RAID1
14:37 failshell joined #gluster
14:39 JoeJulian The way I do what it sounds like you're asking about is that I have GlusterFS on bare metal. My OpenStack VMs are 6GB operating system images. I mount the vmimage volume on /var/lib/nova/instances so it will support live migration. Finally, I mount the volume that hosts the data needed for that VM from within that VM, ie "glusterfs:wiki1 /mnt/gluster/wiki1 glusterfs _netdev,fetch-attempts=3,use-readdirp=on 0 0" in fstab.
14:39 chirino joined #gluster
14:40 Kodiak1 So you've got a couple Gluster nodes on bare metal?
14:40 JoeJulian I need to look at how to modify the kvm templates to use the native api...
14:40 Guest54292 joined #gluster
14:41 JoeJulian I have three bare metal GlusterFS servers with 15 replica 3 volumes (4 drives - one brick per drive per volume)
14:43 Kodiak1 Dang is that at home?  I bet that's a warm room!  My primary workstation, cloud controller, Stack hypervisor, core-services host, and routing & switching gear make my room too hot already
14:44 JoeJulian No, that's at the office. We have a pretty small setup for the tools I use.
14:46 daMaestro joined #gluster
14:46 Kodiak1 Nice.  I'm waiting for my gear to be installed in our datacenter, then I can do this on real gear.
14:47 bugs_ joined #gluster
14:48 JoeJulian At home I only have 4 servers. I have a wheeled rackmount cabinet that holds those, my cisco router, and a cheap GigE switch that I've enclosed in an air-conditioned/vented closet (hot/cold aisle style).
14:52 Kodiak1 Yea I might be migrating my stuff down to the garage; it's getting out of control with the advent of all this 'cloud' stuff at work.
14:53 JoeJulian Be careful with the garage. They're commonly uninsulated and can be even worse for your equipment if you're not careful.
14:57 dbruhn I have an old HP rack down in the basement for test stuff at home, right now it has a mix of 8 servers in it, but all are powered down but one vm server and a switch
14:57 dbruhn It acts more as equipment storage than anything
14:57 dbruhn I have a pile of netapp sitting in it and an old g5 and xraid too
14:58 JoeJulian hehe
14:58 dbruhn I need to clean out
14:59 dbruhn I used to use a bunch of it for stuff, but I threw usb hard disk on my router, for backup, and moved my media file storage to a laptop hooked up to the tv, and haven't looked back
14:59 JoeJulian I need to pick up three or four (used?) machines for running cluster tests.
14:59 dbruhn too bad geeks.com went out of business, I used to pick up cheap test hardware from there all the time.
15:00 dbruhn I have a grey market guy I can refer you to, he can find you pretty much anything you are looking for if you want his contact
15:00 lpabon joined #gluster
15:01 dbruhn I picked up 10 hp servers from him for like 2k, with 15k sas drives, 2x 4 core i7 xeon proc's and 32 gb ram each
15:01 dbruhn for some virtualization testing I am doing
15:03 Kodiak1 JoeJulian:  Agreed on the environmental aspect
15:05 Kodiak1 I don't know how hefty your needs are JoeJulian, but what I ended up with for my compute nodes cost about $450 each before disk.  I paid a bit extra for 80+ Gold PSUs since I'm trying to manage my power as carefully as possible.
15:06 Kodiak1 I figured for now I could get away with a single 8GB DDR3 1600 dimm on each box, wait until 2x16G pairs were under $90 to upgrade, shouldn't be long.
15:06 Kodiak1 They aren't racecars but they let me do OpenStack etc at home.
15:06 dbruhn Kodiak, what compute nodes are you using?
15:07 Kodiak1 They are cobbled from Newegg, one sec.
15:10 Kodiak1 RAM:  1x http://www.newegg.com/Product/Product.aspx?Item=N82E16820148558  | MOBO:  http://www.newegg.com/Product/Product.aspx?Item=N82E16813157465  | PSU:  http://www.newegg.com/Product/Product.aspx?Item=N82E16817182066  | CPU http://www.newegg.com/Product/Product.aspx?Item=N82E16819113288  | CASE:  http://www.amazon.com/Rosewill-Micro-ATX-Computer-12-5-Inch-LINE-M/dp/B00AAJ0ZGK
15:10 glusterbot <http://goo.gl/GVXJCE> (at www.newegg.com)
15:11 dbruhn I have been using these HP ProLiant DL360 G5's; they come with dual 4core x5355 2.66, 2x146gb SAS drives, and 32GB of ram, I can usually scrounge and find them with a warranty for 200-250 each
15:11 dbruhn for cloudstack
15:12 johnbot11 joined #gluster
15:12 Kodiak1 That was assembled with the goal of balancing current capability w/ future upgrade potential, and also for energy efficiency, I'm limited on open 15 amp circuits
15:14 Kodiak1 Those ProLiants are cool - I just don't have the space or energy
15:14 ababu joined #gluster
15:14 dbruhn They aren't too bad on power really
15:14 dbruhn if I had e series proc's they would be way better
15:14 Kodiak1 Nice - maybe my conceptions of Xeon are out of date
15:15 Kodiak1 We ditched those where possible maybe 7+ years ago.
15:15 dbruhn the x series are the high power ones, the e series are way better about power
15:15 dbruhn I have some 2u 12 drive boxes with e series proc's, 16 gb of ram, 15k sas drives, and they really only use about 3/4 amps of power each under load.
15:16 dbruhn at 110
15:16 Kodiak1 Sweet - I wonder what that works out to in watt hours
15:17 JoeJulian Amps * Voltage * Time = Watt Hours
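Worked through for the figures above, reading "3/4 amps" as roughly 3-4 A at 110 V: about 3.5 A × 110 V ≈ 385 W per box, i.e. roughly 0.385 kWh for every hour it runs, or around 9 kWh per day.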
15:17 dbruhn I honestly have no clue, my DC charges me in 5kva blocks, for both a and b power, so I just know if I break the threshold I need to buy another block
15:17 dbruhn never really think about it beyond that
15:18 dbruhn They do throw in a rack every time I buy a power block though....
15:18 dbruhn after I started creating hot spots in the rows
15:22 dbruhn JoeJulian, have you ever seen it where when running deletes the system will constantly balk about directories not being empty, and then when you ls them they are, and the delete process will move past it?
15:23 JoeJulian dbruhn: Yes. Usually from left-over self-heal issues.
15:23 dbruhn Would it be worth filing a bug that the system should try and correct the issue and retry the delete to get past it?
15:24 dbruhn I am assuming the process causes a self-heal and that's why it works the second time
15:25 JoeJulian That's been my assumption, too. I haven't taken the time to find the cause so I haven't filed it myself.
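A sketch of the retry workaround implied here: force a fresh lookup on the directory (which is what appears to kick off the heal) and then retry the delete; the path is a placeholder:

    stat /mnt/myvol/stubborn/dir > /dev/null   # lookup triggers self-heal of the stale entries
    ls -a /mnt/myvol/stubborn/dir              # confirm it now really is empty
    rmdir /mnt/myvol/stubborn/dir              # retry the delete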
15:33 vpshastry joined #gluster
15:39 ndk joined #gluster
15:40 vpshastry left #gluster
15:41 aixsyd hey infiniband guys
15:41 aixsyd http://www.ebay.com/itm/INFINIBAND-4X-DDR-PCIe-DUAL-PORT-HCA-MHGH28-XTC-/111188113553?pt=COMP_EN_Servers&hash=item19e353f891
15:41 glusterbot <http://goo.gl/jNAdEi> (at www.ebay.com)
15:41 aixsyd What sort of cable would I need to connect two of these together? point to point
15:41 dneary joined #gluster
15:41 aixsyd standard CX4?
15:42 aixsyd just two of the
15:42 aixsyd *them
15:43 dbruhn ouch, that's expensive for ddr cards
15:43 dbruhn for about that price you can get QDR cards
15:43 aixsyd link? I cant find any..
15:44 dbruhn I got my  DDR cards for $25 each and the average cost if I remember right was $35-45
15:44 aixsyd pci-e cards?
15:44 dbruhn yep
15:44 aixsyd cheapest QDR on ebay right now for BIN: http://www.ebay.com/itm/QLOGIC-QLE7340-4x-QDR-InfiniBand-40Gbps-Host-Channel-Adapter-HCA-IB6410401-/190905316357?pt=US_Internal_Network_Cards&hash=item2c72d7f405
15:44 glusterbot <http://goo.gl/LNvrJI> (at www.ebay.com)
15:45 Skaag joined #gluster
15:45 dbruhn http://www.ebay.com/itm/like/131030190795?lpid=82
15:45 glusterbot <http://goo.gl/C1QAeA> (at www.ebay.com)
15:45 dbruhn not sure if this is the right one
15:46 aixsyd thats ddr
15:46 dbruhn https://www.serversupply.com/products/part_search/pid_find.asp?pid=119886&
15:46 glusterbot <http://goo.gl/DmYUI2> (at www.serversupply.com)
15:46 dbruhn yeah they are half the price of the ones you are looking at
15:46 dbruhn one sec on the qdr
15:46 dbruhn the qdr ones are harder to come by
15:46 aixsyd thats awesome, though.
15:47 glusterbot New news from newglusterbugs: [Bug 1025404] Delete processes exiting with directory not empty error. <http://goo.gl/9TJ9zC> || [Bug 1025411] Enhancement: Self Heal Selection Automation <http://goo.gl/WjPhry>
15:50 aixsyd any recommendations on a cheap, managed 24-port gig switch?
15:50 dbruhn look for the voltaire stuff
15:50 dbruhn -m means it has the subnet manager
15:50 dbruhn if you are going from one server to the next though, you don't really need a switch in the mix
15:51 aixsyd i mean for standard ethernet
15:51 aixsyd not infiniband
15:51 aixsyd i'm looking at another Dell Powerconnect 5224
15:51 dbruhn oh sorry, cisco 4948
15:52 dbruhn gets you 48 ports and dual power supplies, the 24 port version is the same everything but can't put a second psu in it
15:52 dewey joined #gluster
15:53 aixsyd $300-400 vs $69 shipped for the powerconnect...
15:53 aixsyd :P
15:53 dbruhn haha true
15:55 aixsyd say I go with this infiniband card: http://www.ebay.com/itm/INFINIBAND-4X-DDR-PCIe-DUAL-PORT-HCA-MHGH28-XTC-/111188113553?pt=COMP_EN_Servers&hash=item19e353f891
15:55 glusterbot <http://goo.gl/T4H6Kg> (at www.ebay.com)
15:55 aixsyd CX4 cables okay?
15:56 aixsyd http://www.ebay.com/itm/Meritec-700428-08-Infiniband-Cable-CX4-SFF-8470-SAS-9-5-28AWG-10Gbps-/370929741544?pt=LH_DefaultDomain_0&hash=item565d22aee8 - for example
15:56 glusterbot <http://goo.gl/uUTKHN> (at www.ebay.com)
15:56 dbruhn http://www.ebay.com/itm/New-Infiniband-10GBs-4X-CX4-to-CX4-Cable-1M-3-3FT-SAS-M-M-/251200441924?ssPageName=ADME:X:BOIPACC:US:3160
15:56 glusterbot <http://goo.gl/ErO03u> (at www.ebay.com)
15:56 dbruhn this guy will sell you the cables for $14 apiece if you make an offer
15:57 dbruhn jclift has ordered a bunch of stuff from him and I ordered 10 cables from him
15:57 glusterbot New news from resolvedglusterbugs: [Bug 953694] Requirements of Samba VFS plugin for glusterfs <http://goo.gl/v7g29>
15:57 aixsyd what sort of delivery time? from china..
15:57 harish_ joined #gluster
15:57 dbruhn I think it was like a week or two if I remember right
15:58 kPb_in joined #gluster
16:01 aixsyd gonna try to see if i can get management to sign off on one test cluster now.. as opposed to after the holidays
16:03 aixsyd $595 shipped now for two servers, 2 dual gig cards, 2 infiniband cards, cables, and a switch, then $526 for the second cluster after the holidays. or $1200 after the holidays and less testing time
16:12 semiosis partner: kkeithley: i updated the ubuntu packages so that the log dir is created in the -common package, but didnt do that for the debian packages.  i will.  thanks for pointing this out
16:17 glusterbot New news from newglusterbugs: [Bug 1025415] Enhancement: Server/Brick Maintenance Mode <http://goo.gl/dz9sXL>
16:25 diegows joined #gluster
16:28 mohankumar joined #gluster
16:30 Mo_ joined #gluster
16:47 glusterbot New news from newglusterbugs: [Bug 1025425] Enhancement: File grows beyond available size of brick <http://goo.gl/pTi1uG>
16:50 shylesh joined #gluster
16:52 zaitcev joined #gluster
16:56 ira_ joined #gluster
17:05 kkeithley semiosis: oh, I didn't realize that you were the source of the Debian dpkgs. I thought there was someone else doing those.
17:07 semiosis I am large, I contain multitudes.
17:09 andreask joined #gluster
17:16 aixsyd is there a way to watch the progress of a gluster self-heal?
17:18 hagarth joined #gluster
17:18 aixsyd i'm used to watching cat /proc/mdstat, for example
17:19 aixsyd i'm not seeing anything like that for Glusterfs though
17:19 T0aD yeah /proc/mdstat is sexy
17:19 T0aD no idea for gluster though
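There is no /proc/mdstat equivalent, but in the 3.3/3.4 releases the heal-info commands can be polled to watch a heal drain down; the volume name is a placeholder:

    watch -n 10 'gluster volume heal myvol info'   # entries still needing heal
    gluster volume heal myvol info healed          # recently healed entries
    gluster volume heal myvol info heal-failed     # entries that could not be healed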
17:33 Kodiak1 kkeithley:  I did my best to carry over the missing feature request of strong authentication from the 3.4.0 roadmap page to the 3.5 page:  http://www.gluster.org/community/documentation/index.php/Strong_Authentication
17:33 glusterbot <http://goo.gl/ZY39hV> (at www.gluster.org)
17:34 kkeithley cool, thanks
17:35 Kodiak1 Hopefully it's not too ugly - it's my first foray into participation w/ this project.  I'd like to help out w/ documentation as time allows at some point as well.  I have a couple years of experience w/ technical writing (doc as you go sysadmin) work at the university I work for.
17:36 aixsyd this is really weird... i'm getting VM freezes any time i try to kill a node. what the heck happened? D:
17:37 semiosis aixsyd: maybe a ,,(ping-timeout) ?
17:37 glusterbot aixsyd: The reason for the long (42 second) ping-timeout is because re-establishing fd's and locks can be a very expensive operation. Allowing a longer time to reestablish connections is logical, unless you have servers that frequently die.
17:38 semiosis http://joejulian.name/blog/keeping-your-vms-from-going-read-only-when-encountering-a-ping-timeout-in-glusterfs/
17:38 glusterbot <http://goo.gl/N20EJC> (at joejulian.name)
17:38 aixsyd if I wanna shorten that for testing purposes - how would i?
17:38 semiosis maybe that is related?
17:38 semiosis aixsyd: all options are set with 'gluster volume set' you can get a list with 'gluster volume set help'
17:39 aixsyd sweet.
17:39 aixsyd semiosis: you dont happen to know of a way to watch glusterfs heal progress, do you?
17:40 semiosis no, but have you looked at the log files on the serves/
17:40 semiosis ?
17:40 semiosis servers?
17:40 aixsyd not yet
17:42 ngoswami joined #gluster
17:49 aixsyd weird - the heal doesnt seem to start right away, when the killed node comes back
17:49 aixsyd it rejoins the cluster, but....
17:51 rotbeard joined #gluster
17:53 lalatenduM joined #gluster
17:53 aixsyd if i say gluster volume heal, it starts - shouldnt it auto start?
17:55 jclift_ joined #gluster
17:58 Technicool joined #gluster
18:00 marbu joined #gluster
18:01 ngoswami joined #gluster
18:07 aixsyd semiosis: I set the timeout to 5s. lets see what happens >)
19:20 P0w3r3d joined #gluster
19:20 PatNarciso joined #gluster
19:20 ngoswami joined #gluster
19:20 Technicool joined #gluster
19:20 lalatenduM joined #gluster
19:20 rotbeard joined #gluster
19:20 hagarth joined #gluster
19:20 ira joined #gluster
19:20 zaitcev joined #gluster
19:20 Mo_ joined #gluster
19:20 diegows joined #gluster
19:20 harish_ joined #gluster
19:20 Skaag joined #gluster
19:20 dneary joined #gluster
19:20 ndk joined #gluster
19:20 johnbot11 joined #gluster
19:20 lpabon joined #gluster
19:20 bugs_ joined #gluster
19:20 daMaestro joined #gluster
19:20 Guest54292 joined #gluster
19:20 chirino joined #gluster
19:20 failshell joined #gluster
19:20 edward1 joined #gluster
19:20 wushudoin joined #gluster
19:20 bennyturns joined #gluster
19:20 kaptk2 joined #gluster
19:20 rwheeler_ joined #gluster
19:20 aixsyd joined #gluster
19:20 ninkotech joined #gluster
19:20 rcheleguini joined #gluster
19:20 yosafbridge joined #gluster
19:20 RedShift joined #gluster
19:20 ctria joined #gluster
19:20 JonathanD joined #gluster
19:20 marcoceppi joined #gluster
19:20 masterzen joined #gluster
19:20 gluster-favorite joined #gluster
19:20 bivak joined #gluster
19:20 twx joined #gluster
19:20 y4m4 joined #gluster
19:20 Kodiak1 joined #gluster
19:20 nhm joined #gluster
19:20 pdrakeweb joined #gluster
19:20 verdurin_ joined #gluster
19:20 X3NQ joined #gluster
19:20 dblack joined #gluster
19:20 badone_gone joined #gluster
19:20 edain joined #gluster
19:20 gunthaa__ joined #gluster
19:20 Cenbe joined #gluster
19:20 tg2 joined #gluster
19:20 Shdwdrgn joined #gluster
19:20 xpinex joined #gluster
19:20 mibby- joined #gluster
19:20 roidelapluie joined #gluster
19:20 __NiC joined #gluster
19:20 GLHMarmo1 joined #gluster
19:20 Xunil__ joined #gluster
19:20 samppah joined #gluster
19:20 Gugge joined #gluster
19:20 VerboEse joined #gluster
19:20 ke4qqq joined #gluster
19:20 eightyeight joined #gluster
19:20 efries_ joined #gluster
19:20 foster joined #gluster
19:20 ThatGraemeGuy joined #gluster
19:20 rubbs joined #gluster
19:20 klaxa joined #gluster
19:20 vpagan joined #gluster
19:20 NuxRo joined #gluster
19:20 T0aD joined #gluster
19:20 stickyboy joined #gluster
19:20 baoboa joined #gluster
19:20 sysconfig joined #gluster
19:20 bstr joined #gluster
19:20 rwheeler joined #gluster
19:20 ofu_ joined #gluster
19:20 crashmag joined #gluster
19:20 gmcwhistler joined #gluster
19:20 GabrieleV joined #gluster
19:20 m0zes joined #gluster
19:20 social joined #gluster
19:20 Norky joined #gluster
19:20 shanks joined #gluster
19:20 compbio joined #gluster
19:20 Alex joined #gluster
19:20 qubit joined #gluster
19:20 mjrosenb joined #gluster
19:20 gGer joined #gluster
19:20 elyograg joined #gluster
19:20 DataBeaver joined #gluster
19:20 cfeller joined #gluster
19:20 jbrooks joined #gluster
19:20 dbruhn joined #gluster
19:20 uebera|| joined #gluster
19:20 gluslog joined #gluster
19:20 msciciel joined #gluster
19:20 TDJACR joined #gluster
19:20 msvbhat joined #gluster
19:20 the-me joined #gluster
19:20 mtanner_ joined #gluster
19:20 codex joined #gluster
19:20 delhage joined #gluster
19:20 _Bryan_ joined #gluster
19:20 helmo joined #gluster
19:20 JordanHackworth joined #gluster
19:20 Guest44777 joined #gluster
19:20 RobertLaptop joined #gluster
19:20 saltsa_ joined #gluster
19:20 tomased joined #gluster
19:20 AndreyGrebenniko joined #gluster
19:20 sjoeboo joined #gluster
19:20 micu joined #gluster
19:20 jmeeuwen joined #gluster
19:20 a2 joined #gluster
19:20 askb joined #gluster
19:20 JonnyNomad joined #gluster
19:20 [o__o] joined #gluster
19:20 brosner joined #gluster
19:20 nixpanic joined #gluster
19:20 cyberbootje joined #gluster
19:20 morse joined #gluster
19:20 sac`away joined #gluster
19:20 samsamm joined #gluster
19:20 nonsenso joined #gluster
19:20 lanning joined #gluster
19:20 jord-eye joined #gluster
19:20 abyss^ joined #gluster
19:20 _br_ joined #gluster
19:20 purpleidea joined #gluster
19:20 juhaj_ joined #gluster
19:20 kbsingh joined #gluster
19:20 FooBar joined #gluster
19:20 samkottler joined #gluster
19:20 wgao joined #gluster
19:20 ccha joined #gluster
19:20 mdjunaid joined #gluster
19:20 wrale joined #gluster
19:20 fyxim joined #gluster
19:20 portante joined #gluster
19:20 mattf joined #gluster
19:20 bdperkin joined #gluster
19:20 esalexa|mtg joined #gluster
19:20 bfoster joined #gluster
19:20 kkeithley joined #gluster
19:20 Debolaz joined #gluster
19:20 SteveCooling joined #gluster
19:20 primusinterpares joined #gluster
19:20 glusterbot joined #gluster
19:20 georgeh|workstat joined #gluster
19:20 edong23 joined #gluster
19:20 xavih joined #gluster
19:20 al joined #gluster
19:20 tru_tru joined #gluster
19:20 atrius` joined #gluster
19:20 partner joined #gluster
19:20 Remco joined #gluster
19:20 ndevos joined #gluster
19:20 zwu joined #gluster
19:20 lkoranda joined #gluster
19:20 basic` joined #gluster
19:20 ultrabizweb joined #gluster
19:20 NeatBasis joined #gluster
19:20 eryc joined #gluster
19:20 semiosis joined #gluster
19:20 mrEriksson joined #gluster
19:20 ingard__ joined #gluster
19:20 haakon__ joined #gluster
19:20 Ramereth joined #gluster
19:20 VeggieMeat joined #gluster
19:20 Peanut joined #gluster
19:20 stigchristian joined #gluster
19:20 stopbit joined #gluster
19:20 schrodinger joined #gluster
19:20 JoeJulian joined #gluster
19:20 paratai joined #gluster
19:20 l0uis joined #gluster
19:20 johnmark joined #gluster
19:20 fkautz joined #gluster
19:20 pachyderm joined #gluster
19:20 jiffe99 joined #gluster
19:20 aurigus joined #gluster
19:20 Kins joined #gluster
19:20 cicero joined #gluster
19:20 Rydekull joined #gluster
19:31 davidbierce joined #gluster
20:11 _BryanHm_ joined #gluster
20:11 ricky-ticky joined #gluster
20:14 zerick joined #gluster
20:22 B21956 joined #gluster
20:40 dbruhn does du initiate a self heal?
20:49 harish joined #gluster
21:00 JoeJulian no
21:02 dbruhn will this work? "find /mnt/ENTV04EP/root/data/1/6/8/11/1/3 |xargs stat" to stat everything under 3
21:02 JoeJulian yes
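The same idea with null-delimited names, which tolerates spaces in filenames and throws away the output; the stat itself is what forces the lookup/self-heal on each file:

    find /mnt/ENTV04EP/root/data/1/6/8/11/1/3 -print0 | xargs -0 stat > /dev/null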
21:04 dbruhn I swear this damn thing is possessed and doesn't want to give up the data
21:29 plarsen joined #gluster
21:29 B21956 left #gluster
21:37 elyograg the gluster rebalance that I started two days ago has just hit 1000GB migrated.  Only 9 terabytes to go.
21:38 elyograg what would make it so slow, and how can I speed it up?  It's got its own LAN segment to work on.
21:40 dbruhn elyograg, I've asked the same question and never gotten a response....
21:45 elyograg I get that it's a lot of data, and that gluster has overhead.  But 42 hours to move one terabyte? that's slower than I would expect even on a fast ethernet link (100Mb/s) if it were half-duplex.
21:46 Alex at least it didn't take 3 months! (http://gluster.org/pipermail/gluster-users/2011-October/009049.html)
21:46 glusterbot <http://goo.gl/CVI76x> (at gluster.org)
21:46 elyograg a bandwidth calculator says that 1000GB transferred over a T3 link (45Mb/s) would take 53 hours.
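For comparison, 1000 GB in roughly 42 hours works out to about 1000 × 8000 / (42 × 3600) ≈ 53 Mbit/s sustained, which matches the T3-class figure above. Rebalance progress itself can be polled with (volume name is a placeholder):

    gluster volume rebalance myvol status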
21:52 a2 elyograg, try: gluster volume set <name> performance.enable-least-priority off
21:52 a2 that should move rebalance from the least priority processing to normal priority
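As a usage sketch (volume name is a placeholder), with a reset once the rebalance is done:

    gluster volume set myvol performance.enable-least-priority off
    gluster volume info myvol | grep least-priority                 # confirm the option took
    gluster volume reset myvol performance.enable-least-priority    # revert afterwards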
21:55 dbruhn I believe the system has to go through and do all the DHT calculations, etc.
21:55 dbruhn a2, what version was that introduced in?
21:58 a2 dbruhn, it's been around for a while now IIRC
21:58 a2 though I'm not sure how much of an effect it would have.. haven't measured perf difference
21:59 dbruhn I am just about to add a couple servers/bricks to a system thats 3.3.2 is why I ask
22:00 a2 3.3 had it i think
22:05 calum_ joined #gluster
22:06 failshell joined #gluster
22:11 elyograg a2 I set that volume property.  no immediate sign that it's made a significant difference.  user traffic on the volume should be low.
22:17 elyograg iptraf statistics show less than 100000 kbits/s on the local LAN segment and pretty much nothing on the client side.  loopback is in the neighborhood of 30000 kbits/s.
22:24 elyograg https://dl.dropboxusercontent.com/u/97770508/gluster-lan-stats.png
22:24 glusterbot <http://goo.gl/aD9gsf> (at dl.dropboxusercontent.com)
22:25 elyograg bond0 (client side) is composed of em1 and em2, bond1 (the inter-server segment) is composed of em3 and em4.
22:25 elyograg I think the only reason that bond0 has any traffic at all is because iptraf is running and putting ssh traffic on it.
22:43 johnbot11 joined #gluster
22:56 ctria joined #gluster
23:08 nasso joined #gluster
23:38 RicardoSSP joined #gluster
23:56 davidbierce joined #gluster
