
IRC log for #gluster, 2013-10-09


All times shown according to UTC.

Time Nick Message
00:02 dtyarnell joined #gluster
00:06 StarBeast joined #gluster
00:16 askb joined #gluster
00:28 zaitcev joined #gluster
00:33 samppah joined #gluster
00:43 y4m4 joined #gluster
00:43 y4m4 joined #gluster
01:17 harish_ joined #gluster
01:27 asias joined #gluster
02:24 satheesh1 joined #gluster
02:39 bharata-rao joined #gluster
02:49 zwu joined #gluster
02:50 Felix458 joined #gluster
02:53 jag3773 joined #gluster
02:57 vshankar joined #gluster
03:00 vshankar joined #gluster
03:00 MugginsM joined #gluster
03:06 kshlm joined #gluster
03:17 hflai_ joined #gluster
03:17 the-me_ joined #gluster
03:17 vshankar_ joined #gluster
03:17 JonathanS joined #gluster
03:20 ThatGraemeGuy_ joined #gluster
03:21 bdperkin_ joined #gluster
03:23 MediaSmurf joined #gluster
03:23 harish_ joined #gluster
03:23 nixpanic joined #gluster
03:24 nixpanic joined #gluster
03:24 esalexa|gone joined #gluster
03:25 bulde joined #gluster
03:25 rc10 joined #gluster
03:26 SteveCooling joined #gluster
03:28 verdurin joined #gluster
03:29 asias joined #gluster
03:37 shubhendu joined #gluster
03:40 hagarth joined #gluster
03:40 raghu joined #gluster
03:43 itisravi_ joined #gluster
03:45 raar joined #gluster
03:47 heavypennies joined #gluster
03:47 kanagaraj joined #gluster
03:49 itisravi joined #gluster
03:49 RameshN joined #gluster
03:51 itisravi joined #gluster
03:51 davinder joined #gluster
03:52 spandit joined #gluster
03:53 saurabh joined #gluster
03:58 aravindavk joined #gluster
03:59 shylesh joined #gluster
03:59 shyam joined #gluster
04:01 PatNarciso Happy Wednesday #Gluster.
04:08 bulde joined #gluster
04:21 vpshastry1 joined #gluster
04:26 dusmant joined #gluster
04:30 bharata-rao joined #gluster
04:30 ndarshan joined #gluster
04:30 askb joined #gluster
04:31 nonsenso_ joined #gluster
04:32 vshankar joined #gluster
04:32 nonsenso_ i'm trying to add a new brick to a 3-way replicated glusterfs volume and getting an error, "/export/provisioner-02 or a prefix of it is already part of a volume"
04:32 glusterbot nonsenso_: To clear that error, follow the instructions at http://goo.gl/YUzrh or see this bug http://goo.gl/YZi8Y
04:33 nonsenso_ glusterbot: xoxo.  didn't see that bug from my research.  good botty.  :)
04:33 ppai joined #gluster
04:34 nonsenso_ so is there a workaround for this?  i'm running 3.3.1
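The fix behind glusterbot's link amounts to clearing the leftover GlusterFS metadata from the brick directory on the server that reports the error. A minimal sketch, using the brick path from nonsenso_'s error message (run on that server, then retry the add-brick):

    setfattr -x trusted.glusterfs.volume-id /export/provisioner-02
    setfattr -x trusted.gfid /export/provisioner-02
    rm -rf /export/provisioner-02/.glusterfs
    # restart glusterd so it forgets the stale state
    service glusterd restart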
04:41 askb left #gluster
04:41 askb joined #gluster
04:42 anands joined #gluster
04:43 glusterbot New news from newglusterbugs: [Bug 1003184] EL5 package missing %_sharedstatedir macro <http://goo.gl/Yp1bL1>
04:54 bulde joined #gluster
04:55 bala joined #gluster
04:58 meghanam joined #gluster
04:59 rjoseph joined #gluster
05:09 spandit joined #gluster
05:17 bharata-rao joined #gluster
05:28 bulde joined #gluster
05:35 psharma joined #gluster
05:39 nshaikh joined #gluster
05:46 mohankumar joined #gluster
05:51 JoeJulian file a bug
05:51 glusterbot http://goo.gl/UUuCq
05:53 kPb_in joined #gluster
05:55 rc10 joined #gluster
05:56 bulde joined #gluster
05:56 JoeJulian bulde!
05:57 JoeJulian bulde?
05:57 * JoeJulian pokes amar to see if he's actually there...
06:06 ngoswami joined #gluster
06:14 bulde joined #gluster
06:14 shubhendu joined #gluster
06:17 saltsa joined #gluster
06:18 dusmant joined #gluster
06:20 RameshN_ joined #gluster
06:21 psharma_ joined #gluster
06:22 FooBar_ joined #gluster
06:22 saltsa_ joined #gluster
06:23 jtux joined #gluster
06:24 yosafbridge` joined #gluster
06:29 rastar joined #gluster
06:31 badone joined #gluster
06:31 vimal joined #gluster
06:32 xymox joined #gluster
06:36 PatNarciso joined #gluster
06:36 chirino_m joined #gluster
06:37 shyam joined #gluster
06:39 ndarshan joined #gluster
06:49 satheesh joined #gluster
06:54 _ndevos joined #gluster
06:56 tziOm joined #gluster
06:56 ricky-ticky joined #gluster
07:02 tzi0m joined #gluster
07:03 mgebbe_ joined #gluster
07:05 eseyman joined #gluster
07:08 BTool joined #gluster
07:10 keytab joined #gluster
07:14 Guest82007 joined #gluster
07:18 ThatGraemeGuy joined #gluster
07:20 deepakcs joined #gluster
07:25 bulde joined #gluster
07:26 keytab joined #gluster
07:27 ndarshan joined #gluster
07:33 davinder joined #gluster
07:37 andreask joined #gluster
08:04 an joined #gluster
08:11 bulde joined #gluster
08:13 RobertLaptop joined #gluster
08:18 Lethalman joined #gluster
08:25 rgustafs joined #gluster
08:28 eseyman joined #gluster
08:32 TBlaar joined #gluster
08:33 dusmant joined #gluster
08:33 Norky joined #gluster
08:35 asias joined #gluster
08:43 satheesh joined #gluster
08:45 purpleidea PatNarciso: what happens on wednesday?
08:45 raghu joined #gluster
08:46 crashmag joined #gluster
08:48 glusterbot New news from resolvedglusterbugs: [Bug 968227] Add AUTH support for sub-directory level NFS exports <http://goo.gl/2bo1ru>
08:54 ndarshan joined #gluster
08:57 RameshN_ joined #gluster
09:00 DV__ joined #gluster
09:04 micu2 joined #gluster
09:20 ngoswami joined #gluster
09:20 DV joined #gluster
09:20 StarBeast joined #gluster
09:22 rjoseph joined #gluster
09:26 davinder joined #gluster
09:27 psharma_ joined #gluster
09:42 rjoseph joined #gluster
09:44 glusterbot New news from newglusterbugs: [Bug 969461] RFE: Quota fixes <http://goo.gl/XFSM4>
09:44 tryggvil joined #gluster
09:49 RameshN_ joined #gluster
09:51 ppai joined #gluster
09:54 ndarshan joined #gluster
10:00 dusmant joined #gluster
10:18 davinder joined #gluster
10:25 ngoswami joined #gluster
10:27 hybrid512 joined #gluster
10:33 harish_ joined #gluster
10:35 psharma_ joined #gluster
10:43 hagarth joined #gluster
10:44 jclift joined #gluster
10:48 glusterbot New news from resolvedglusterbugs: [Bug 994745] bricks are no longer being stopped on shutdown/reboot <http://goo.gl/PA7KlA>
10:55 tryggvil joined #gluster
10:58 davinder2 joined #gluster
10:59 ppai joined #gluster
11:05 StarBeast joined #gluster
11:05 andreask joined #gluster
11:05 StarBeast joined #gluster
11:11 jtux joined #gluster
11:14 glusterbot New news from newglusterbugs: [Bug 1016000] Implementation of object handle based gfapi extensions <http://goo.gl/y8Xo7P>
11:21 dusmant joined #gluster
11:21 Remco joined #gluster
11:23 hagarth joined #gluster
11:28 an joined #gluster
11:35 kanagaraj joined #gluster
11:44 glusterbot New news from newglusterbugs: [Bug 1017176] Until RDMA handling is improved, we should output a warning when using RDMA volumes <http://goo.gl/viLDZq>
11:48 eseyman joined #gluster
11:51 ppai joined #gluster
11:55 satheesh joined #gluster
11:58 Alpinist joined #gluster
11:59 jclift left #gluster
12:04 shyam left #gluster
12:18 glusterbot New news from resolvedglusterbugs: [Bug 819130] Merge in the Fedora spec changes to build one single unified spec <http://goo.gl/GfSUw>
12:21 DV__ joined #gluster
12:24 edward1 joined #gluster
12:34 dusmant joined #gluster
12:34 an joined #gluster
12:36 abradley joined #gluster
12:45 hybrid5121 joined #gluster
12:46 Remco joined #gluster
12:47 morse joined #gluster
12:49 glusterbot New news from resolvedglusterbugs: [Bug 950083] Merge in the Fedora spec changes to build one single unified spec <http://goo.gl/tajoiQ>
12:49 jclift joined #gluster
12:58 KORG joined #gluster
12:58 bet_ joined #gluster
12:58 KORG Guys, can anyone comment on this problem: https://bugzilla.redhat.com/show_bug.cgi?id=1017215
12:59 KORG ?
12:59 glusterbot <http://goo.gl/3v0PmL> (at bugzilla.redhat.com)
12:59 glusterbot Bug 1017215: high, unspecified, ---, amarts, NEW , Replicated objects duplicates
13:01 B21956 joined #gluster
13:03 rc10 joined #gluster
13:03 jdarcy joined #gluster
13:10 dtyarnell joined #gluster
13:11 tryggvil joined #gluster
13:15 glusterbot New news from newglusterbugs: [Bug 1017215] Replicated objects duplicates <http://goo.gl/3v0PmL>
13:19 _Bryan_ Morning All
13:22 ababu joined #gluster
13:24 santir joined #gluster
13:27 rsanti joined #gluster
13:33 nshaikh left #gluster
13:34 vpshastry1 left #gluster
13:34 chirino joined #gluster
13:36 squizzi joined #gluster
13:46 dusmant joined #gluster
13:50 bashtoni joined #gluster
13:51 bashtoni Beyond checking the underlying disks, what can I do to diagnose sudden poor performance with gluster?
13:54 Remco Check CPU and network utilisation?
13:54 bashtoni Remco: 49% CPU free, network utilisation is lower than normal
13:55 Remco Then I guess I would run tools like iotop
13:56 abradley I've got a gluster cluster made up of two ubu12server machines with one brick on each, making a replicated volume across the two machines. I've setup a new node "glusterfs3" (ubuntu 12 server as well). how do I add a partition on it to be a "brick" in the existing replicated volume?
13:57 l0uis abradley: you must add bricks in # replicas groups
13:57 l0uis so if you have replicas 2, you need to add 2 additional bricks. you can't have 3 bricks in a replica 2 setup as far as i know.
13:57 dtyarnell joined #gluster
13:58 l0uis so you can either add 2 additional bricks or create a new gluster volume w/ replica 1 and add your new brick to that
13:58 kkeithley But you can add a single brick to make it a "replica 3" volume
13:58 l0uis or that
13:58 squizzi left #gluster
13:58 abradley I want to add a third brick to this replicated volume
13:59 l0uis then you must make it a replica 3 volume as kkeithley said, your volume size will not increase
13:59 l0uis but you will have 3 copies of each file for extra redundancy
13:59 abradley I'm not looking to add size, just redundancy
13:59 abradley poifect
13:59 abradley How would I go about adding this third brick to this replicated volume in ubuntu 12?
13:59 abradley I'm having trouble finding such a guide
14:00 l0uis also make sure you understand that your write performance will now be client_bandwidth / 3
14:00 l0uis look at the add-brick bits in the docs
14:02 emitor joined #gluster
14:02 rwheeler joined #gluster
14:06 kkeithley `gluster volume add-brick help`,    Usage: volume add-brick <VOLNAME> [<stripe|replica> <COUNT>] <NEW-BRICK> ...
14:07 emitor hi! I have two servers with a gluster replica volume configured. I'm testing some things, i have the volume mounted by NFS on both servers, but if one of them loses connectivity and keeps writing, after i connect it again to the network i get some inconsistent files. I guess it's because of the synchronization. How could I configure gluster synchronization?
14:08 l0uis emitor: first step might be to mount it via the gluster fuse client rather than nfs if you can.
14:09 emitor I can't do that... it must be mounted by nfs
14:09 bugs_ joined #gluster
14:10 lpabon joined #gluster
14:11 dbruhn joined #gluster
14:12 dbruhn I am having a weird issue with invalid argument errors on one of my volumes
14:13 mooperd_ joined #gluster
14:15 vpshastry joined #gluster
14:18 ndk joined #gluster
14:19 wushudoin joined #gluster
14:20 kaptk2 joined #gluster
14:20 zaitcev joined #gluster
14:26 abradley when using :  volume add-brick <VOLNAME> [<stripe|replica> <COUNT>] <NEW-BRICK>     assuming that I already have a 2-rep volume "vol1" (on glusterfs1 and glusterfs2) then would I run  volume add-brick vol1 rep 3 <NEW BRICK>     or    volume add-brick vol1 rep 1 <NEW BRICK> ?
14:28 abradley my question is really the "rep 1" or "rep 3" part
14:28 abradley is the "rep #" referring to the number of bricks that I'm adding or the number of bricks in the updated volume?
14:29 kkeithley gluster volume add-brick $existing_volume replica 3 $newbrick
14:29 kkeithley turns your existing replica 2 volume into a replica 3 volume by adding the new brick
14:29 abradley thanks kkeithley
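Spelled out for abradley's setup, the command kkeithley describes would look roughly like this (the brick path on glusterfs3 is an assumption):

    # grow the existing 1x2 replicated volume into a 1x3 replica set
    gluster volume add-brick vol1 replica 3 glusterfs3:/export/brick1
    # verify: "Number of Bricks" should now read 1 x 3 = 3
    gluster volume info vol1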
14:30 dbruhn Have the RDMA issues with 3.4 been taken care of yet?
14:30 dbruhn or is there any thing I can do to help get it resolved?
14:35 kkeithley I haven't seen any commits. I heard (a rumor) that we have mlx helping with the work. Once patches appear in gerrit you can certainly help by applying the patches and testing, and then reviewing the patches in gerrit. Anything else you can do to help is all good too.
14:37 dbruhn I have an extra set of servers and some older DDR IB hardware here I can use to put it up on. I am just having some health issues with one of my RDMA volumes, and I've heard a lot of fixes went into 3.4
14:37 shylesh joined #gluster
14:41 Alpinist joined #gluster
14:42 bala joined #gluster
14:44 kkeithley I'm not sure I'd say "a lot". The release notes for 3.4.0 only say experimental use of RDMA-connection manager (RDMA-CM). At one point I set up RDMA on a set of four Fedora 19 boxes. I believe jclift did something similar on four RHEL6 boxes. It worked, but I didn't push it very hard once it was up and running.
14:49 abradley How do you add a gluster peer? http://paste.ubuntu.com/6214037/
14:49 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
14:50 abradley nvm, ran it from a peer already in cluster and added glusterfs3
14:52 [o__o] left #gluster
14:54 XpineX_ joined #gluster
14:54 [o__o] joined #gluster
14:55 H__ Question: can gluster be used to replicate a volume to another location , without even using the DHT ?
14:55 abradley I'm attempting to add a brick on glusterfs3 from glusterfs1 and I'm getting this error: http://paste.ubuntu.com/6214073/
14:55 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
14:56 kkeithley H__: yes, see docs on geo-replication
14:56 abradley nvm, i needed to add the FQDN, not just hostname
14:57 kkeithley and DHT is distribution, no relationship to replication or geo-replication.
14:57 kkeithley Distributed Hash Table
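For H__'s case, the geo-replication kkeithley points to is driven from the master side; a rough sketch with placeholder names (see the geo-replication chapter of the admin guide for the full setup, including SSH keys):

    # mirror a local (master) volume to a volume on a remote site
    gluster volume geo-replication <MASTER_VOL> <slave-host>::<slave-vol> start
    gluster volume geo-replication <MASTER_VOL> <slave-host>::<slave-vol> status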
14:58 jclift kkeithley dbruhn: Yeah, RDMA is still broken.  It'll be a while until I personally look at it again too. :(
14:59 RameshN joined #gluster
15:02 dbruhn jclift, do you still have my email address?
15:03 abradley anyone here have experience setting up the FUSE client on ubuntu 12?
15:03 l0uis abradley: what is there to setup? you just install it
15:04 klaxa|web joined #gluster
15:04 kkeithley then mount volumes with `mount -t glusterfs $host:$volname $mntpoint`
15:05 abradley what do you install?
15:05 abradley glusterfs-server?
15:05 H__ kkeithley: can it also be done with a crafted vol file 'type cluster/replicate' instead of geo-replication ?
15:05 abradley glusterfs-fuse?
15:05 asias joined #gluster
15:05 kkeithley no, -server is for <wait for it> servers
15:07 l0uis abradley: glusterfs-client
15:07 abradley ah, ok
15:07 abradley thanks
15:08 klaxa|web is there a way to track the self-healing in glusterfs 3.3.2? i.e. get the time once it's finished or am i doomed to cross-check hashes every once in a while?
15:08 kkeithley should be glusterfs-client, which probably automagically pulls in glusterfs, glusterfs-common and glusterfs
15:09 kkeithley er, glusterfs-common and glusterfs-fuse
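On klaxa|web's self-heal question, GlusterFS 3.3 exposes heal status through the CLI rather than requiring hash checks; a sketch with a placeholder volume name:

    gluster volume heal <VOLNAME> info             # entries still waiting to be healed
    gluster volume heal <VOLNAME> info healed      # entries healed recently
    gluster volume heal <VOLNAME> info heal-failed # entries the self-heal daemon could not heal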
15:15 roo9 left #gluster
15:16 abradley I now have hosts glusterfs1 glusterfs2 and glusterfs3 in a node with a brick on each replicating vol1. I have host glusterfs with glusterfs-client installed. How do I make glusterfs control vol1 ?
15:17 l0uis control?
15:17 l0uis you want to mount it?
15:17 abradley my understanding is that without a client, you can only access one brick at a time directly
15:18 dbruhn you shouldn't be manipulating the data in the bricks directly
15:18 dbruhn glusterfs doesn't control the volume
15:18 abradley this is why I've installed glusterfs-client on another machine
15:18 dbruhn the gluster commands control the volumes regardless of which node you are on
15:19 l0uis you should never access a brick directly
15:19 rc10 joined #gluster
15:19 abradley directly=?
15:19 dbruhn the glusterfs client is used to mount the file system
15:19 l0uis abradley: you mount the volume with either the fuse client or nfs
15:19 abradley gotcha
15:19 abradley thanks.
15:19 dbruhn glusterfs is the fuse client
15:19 abradley How do you mount the volume with the client in ubuntu?
15:19 l0uis abradley: both will provide a view of the entire volume, with the fuse client being the preferred method
15:19 l0uis abradley: kkeithley told you earlier
15:19 l0uis 10:04 < kkeithley> then mount volumes with `mount -t glusterfs $host:$volname $mntpoint`
15:20 abradley host = any one of the three hosts on the cluster?
15:20 l0uis yes, any one, doesn't matter
15:20 abradley must I peer probe to join the cluster first?
15:20 l0uis it simply uses the host to pull the config
15:20 l0uis no
15:20 l0uis peer probe is only for adding servers
15:20 abradley thanks for the great info
15:20 abradley ah, ok
15:20 dbruhn yep, the fuse client connects to the first one and gets the peer information and then talks to all of them directly
15:21 jag3773 joined #gluster
15:23 vpshastry joined #gluster
15:23 abradley so, I've got: mount -t glusterfs glusterfs1.rxbenefits.local:/vol1  ... and I'm not sure what to put for mountpoint. This is the mount point on glusterfs1 of the brick?
15:24 dbruhn the next part is where on your computer you want to mount the file system too
15:24 l0uis the mount point is the local directory where you want to mount the volume
15:24 dbruhn mount -t glusterfs glusterfs1.rxbenefits.local:/vol1 /mnt/vol1
15:24 abradley ah, so it can be anything. /vol1, for example
15:24 dbruhn or something like that
15:24 abradley thanks
15:24 kmai007 joined #gluster
15:24 dbruhn yep
15:24 * phox blinks
15:24 phox I believe I speak for everyone here.
15:24 kmai007 hi gluster members
15:25 phox is that a euphemism?
15:25 kmai007 do i just ask a question here?
15:25 l0uis kmai007: you must first pay homage!
15:25 l0uis kmai007: i kid.
15:26 l0uis kmai007: ask away
15:26 abradley oh it didn't like that: http://paste.ubuntu.com/6214218/
15:26 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
15:26 kmai007 paying homage!
15:26 vpshastry left #gluster
15:26 l0uis abradley: the error message is self explanatory
15:26 dbruhn did you create the directory to mount to?
15:26 abradley I did not. Didn't know I needed to. I'll address that dbruhn
15:26 kmai007 is there a way to correct the gluster logs to reflect the current time stamps?
15:27 abradley thanks
15:27 redragon_ joined #gluster
15:27 dbruhn no worries, when using the mount command you need to do that no matter what you are mounting
15:27 kmai007 i'm using  3.4.x
15:27 kmai007 actually its 3.4.1-2
15:27 l0uis kmai007: not sure, sorry.
15:28 kkeithley abradley: gluster is a lot like NFS and CIFS. You mount volumes from remote servers on local mount points.
15:28 itisravi joined #gluster
15:29 abradley so this error says "mount point does not exist" http://paste.ubuntu.com/6214218/
15:29 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
15:29 abradley but "mkdir /mnt" returns "file exists"
15:29 abradley mkdir: cannot create directory `/mnt': File exists
15:30 kkeithley kmai007, timestamps are in UTC, like all good appliances. That way when you're comparing logs from the servers you have scattered around the world they're all using the consistent times and time stamps
15:30 l0uis abradley: the mount point is not /mnt
15:30 l0uis abradley: its /mnt/vol1, exactly what you specified
15:30 kmai007 so I don't have the option, if i'm not a global company to make it central time?
15:30 kkeithley most Linux boxen come with an existing /mnt directory
15:30 jclift dbruhn: Yep, I still have your email address.  I just haven't been touching RDMA stuff recently. :(
15:31 abradley oh, I thought that would be created with the mount gluster command. Thanks
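Putting the client-side steps from this exchange together, a minimal sketch (hostname and mount point taken from abradley's example; the fstab line is an assumption for mounting at boot):

    mkdir -p /mnt/vol1
    mount -t glusterfs glusterfs1.rxbenefits.local:/vol1 /mnt/vol1
    # optional /etc/fstab entry so the volume mounts at boot:
    # glusterfs1.rxbenefits.local:/vol1  /mnt/vol1  glusterfs  defaults,_netdev  0  0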
15:31 klaxa|web left #gluster
15:31 klaxa|web joined #gluster
15:31 dbruhn jclift, when you get around to working on it hit me up via email if I am not around IRC and i'll help as much as I can
15:31 kkeithley central Siberian time?
15:31 kkeithley ;-)
15:31 kmai007 well played.... US central time
15:31 kkeithley If you buy EMC or NetApp boxes, their logs are all UTC.
15:32 jclift dbruhn: Definitely. :)
15:32 phox jclift: heh, RDMA won't even mount these days for me :l
15:32 phox took some kicking and screaming to make RDMA bricks come up, mostly due to totally uninformative logs not telling me _what_ file could not be opened
15:32 dbruhn jclift: have you been in touch with Mellanox on it? I would be willing to communicate with them directly if it helped.
15:38 kmai007 has anybody experienced poor performance when trying to backup a gluster volume through the FUSE client with netbackup ?
15:42 sprachgenerator joined #gluster
15:46 phox kkeithley: I'd assume central time = UTC :P
15:47 phox seems pretty central numerically
15:48 mooperd_ joined #gluster
15:52 [o__o] joined #gluster
16:04 mistich1 have a question about some test I have been doing on gluster with dd
16:05 mistich1 I ran dd on the physical drives and here is what I got
16:05 mistich1 dd if=/dev/zero of=/mnt/gluster/file bs=1M count=1024 oflag=direct
16:05 mistich1 1024+0 records in
16:05 mistich1 1024+0 records out
16:05 mistich1 1073741824 bytes (1.1 GB) copied, 1.54787 s, 694 MB/s
16:05 mistich1 on the gluster mount here is what I got
16:05 mistich1 dd if=/dev/zero of=/opt/zenoss/perf/file bs=1M count=1024 oflag=direct
16:05 mistich1 1024+0 records in
16:05 mistich1 1024+0 records out
16:05 mistich1 1073741824 bytes (1.1 GB) copied, 5.12858 s, 209 MB/s
16:06 mistich1 is this typical performance drop? or is there something else I can do to get better performance
16:09 l0uis mistich1: what is your network connection to the gluster volume like
16:09 l0uis mistich1: because you're going to be bounded by that
16:09 bdperkin joined #gluster
16:09 mistich1 ------------------------------------------------------------
16:09 mistich1 Server listening on TCP port 5001
16:09 mistich1 TCP window size: 85.3 KByte (default)
16:09 mistich1 ------------------------------------------------------------
16:09 mistich1 [  4] local 192.168.245.31 port 5001 connected with 192.168.245.32 port 57714
16:09 mistich1 [ ID] Interval       Transfer     Bandwidth
16:09 mistich1 [  4]  0.0-10.0 sec  11.5 GBytes  9.88 Gbits/sec
16:09 mistich1 ipref test 10gig
16:10 mistich1 with jumbo frames
16:10 l0uis you can just say 10g next time, no need for the paste :)
16:10 mistich1 sorry
16:11 l0uis replicated volume?
16:11 mistich1 Type: Distributed-Replicate
16:11 l0uis what is the replica count
16:11 mistich1 Number of Bricks: 4 x 2 = 8
16:12 l0uis so your effective bandwidth is 418 MB/second
16:12 l0uis since the client must write to 2 bricks at once
16:13 mistich1 where did the 418 MB/second come from?
16:13 l0uis there are likely things you can tweak to get better performance, i am not 10g enabled, so i can't comment.
16:13 l0uis 209 MB/sec above.
16:13 l0uis in a replicated volume, the gluster client is writing to replica_count locations at once.
16:13 l0uis so dd is reporting 209 MB/second, but in reality, gluster is moving 2x that since your volume replica count is 2
16:14 mistich1 ok
16:14 mistich1 can you give me any suggestions on where to start on tweaking
16:14 mistich1 here are a few I put in place
16:14 mistich1 performance.cache-size: 1GB
16:14 mistich1 performance.io-thread-count: 48
16:14 mistich1 performance.read-ahead-page-count: 16
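For reference, options like the ones mistich1 lists are applied per volume with `gluster volume set`; a sketch with a placeholder volume name:

    gluster volume set <VOLNAME> performance.cache-size 1GB
    gluster volume set <VOLNAME> performance.io-thread-count 48
    gluster volume set <VOLNAME> performance.read-ahead-page-count 16
    gluster volume info <VOLNAME>    # confirm the options are recorded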
16:15 l0uis sorry, i don't have any experience w/ pushing gluster to max performance beyond 1g network. others might chime in though.
16:15 l0uis you might try a larger file though
16:15 l0uis try a 10GB file and see if performance improves over a larger transfer time.
16:16 mistich1 this test comes from the gluster performance white paper where they were testing 10gig
16:16 l0uis ok
16:17 mistich1 on a 1 gig network what have you tweaked
16:18 l0uis nothing. i can saturate the link w/o any tweaks
16:18 mistich1 cool
16:18 mistich1 i'll keep playing with it  some more
16:25 mistich1 if anyone had any suggestions would appreciate it
16:26 phox yeah you don't need to do much to saturate 1gbps
16:27 phox we have somewhere on the order of 20-40gbps IP throughput so saturating that is a lot harder :x
16:28 mistich1 so are the results I'm getting back from dd the best I'm going to get? or am I testing it wrong?
16:33 Mo__ joined #gluster
16:34 rjoseph joined #gluster
16:37 saurabh joined #gluster
16:37 phox mistich1: depends if your underlying FS is possibly holding you back
16:38 phox say if you're reading from one disk as your underlying brick, you're not gonna saturate something faster than GbE
16:39 compbio_ mistich1: we have a few different types of gluster setups, and typically we don't see much more than a 25% performance hit from local brick access to cross-network access, on one system 375MB/s local -> 276MB/s network; on a dm-crypt brick 279MB/s local to 209MB/s network
16:41 compbio_ but we haven't looked at optimizing it because that's adequate for single files for us
16:42 mistich1 see thats what I was expecting but I'm seeing a lot more
16:44 redragon_ so any thoughts why when writing to my disk directly I get 650MB+ but writing to gluster I only get 39MB ?
16:49 semiosis redragon_: what are you using to generate that load?  dd?
16:50 semiosis mistich1: dd is not a good benchmark tool.  but if you must, then use bs=1M
16:50 l0uis redragon_: what type of network
16:50 semiosis ah never mind, i see you are
16:50 semiosis mistich1: try mutliple dd in parallel?
16:51 mistich1 no
16:51 redragon_ yes dd
16:51 redragon_ gige local network
16:51 mistich1 why in parallel
16:51 semiosis redragon_: dd is not a good benchmark tool.  but if you must, then use bs=1M
16:51 l0uis redragon_: replica count?
16:51 redragon_ 4
16:51 andreask joined #gluster
16:51 l0uis redragon_: your max write bandwith is then 1 gigabit/second / 4
16:52 semiosis mistich1: why not mutliple?
16:52 semiosis s/multiple/parallel/
16:52 l0uis redragon_: which is 32 MB/second on a 1gig network w/o overhead
16:52 glusterbot semiosis: Error: I couldn't find a message matching that criteria in my history of 1000 messages.
16:52 semiosis glusterbot: thx
16:52 glusterbot semiosis: you're welcome
16:52 l0uis lol
16:53 l0uis redragon_: so i'd say you're doing pretty good
16:53 semiosis i really dont like it when glusterbot tries to impress me with its history of *one thousand* messages
16:53 redragon_ thanks l0uis
17:00 mistich1 sorry semiosis doing 100 things at once
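A minimal sketch of the parallel dd test semiosis suggests, reusing mistich1's mount path (file names are arbitrary):

    # four concurrent writers against the gluster mount
    for i in 1 2 3 4; do
        dd if=/dev/zero of=/opt/zenoss/perf/ddtest$i bs=1M count=1024 oflag=direct &
    done
    wait    # aggregate throughput = total bytes written / elapsed time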
17:02 shubhendu joined #gluster
17:04 failshell joined #gluster
17:12 abradley is glusterfs-client preferable to nfs or do they do the same?
17:12 redragon_ l0uis, so if I move to 10Gige I should see approximately a 10x increase (yea i understand tcp overhead and such)
17:12 redragon_ abradley, personally I like glusterfs-client because it avoids the typical NFS overhead
17:13 redragon_ but it's very much dependent on what you're doing
17:14 abradley if I'm trying to use a gluster volume as smb shares on a network then I can either mount the volume with glusterfs-client and then bounce it out as an smb share or I can mount the volume with nfs in windows and "share" from there
17:14 abradley either better or worse?
17:16 glusterbot New news from newglusterbugs: [Bug 987555] Glusterfs ports conflict with qemu live migration <http://goo.gl/SbL8x>
17:20 mooperd_ joined #gluster
17:22 ThatGraemeGuy_ joined #gluster
17:22 l0uis redragon_: you should see an increase, but how much is unknown to me. mistich1 is seeing about 3.5gbit/second right now and trying to get more.
17:24 l0uis abradley: you might want to read the manual. section 6.3 "Using CIFS to Mount Volumes"
17:25 abradley thanks, doing now
17:26 abradley "automatically mounting" --- the manual says "Click Browse , select the volume to map to the network drive, and click OK"
17:27 abradley but it doesn't say if you need to look for a glusterfs-client host or just one of the glusterfs-servers
17:28 l0uis abradley: based on what you know about mounting w/ the gluster client and nfs, what would you think it's talking about? :)
17:30 l0uis abradley: it recommends you use samba to export it
17:30 l0uis abradley: so you must mount it first w/ the client. then use samba
17:30 abradley gotcha. That was my understanding. Thanks.
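A minimal smb.conf sketch for re-exporting the FUSE mount over Samba, as the manual section discussed above recommends (share name and mount point are assumptions):

    # /etc/samba/smb.conf
    [vol1]
        path = /mnt/vol1
        browseable = yes
        read only = no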
17:34 sprachgenerator joined #gluster
17:44 Technicool joined #gluster
17:46 glusterbot New news from newglusterbugs: [Bug 892808] [FEAT] Bring subdirectory mount option with native client <http://goo.gl/wpcU0>
17:58 shubhendu joined #gluster
18:03 phil__ joined #gluster
18:10 PatNarciso Happy Wednesday everyone.
18:11 jclift left #gluster
18:15 kanagaraj joined #gluster
18:17 sprachgenerator joined #gluster
18:28 PatNarciso Fellas -- I wrote a blog related to my upcoming Gluster setup.  I'd appreciate ya checking it out, and giving me any feedback you can.  http://technology.patnarciso.com/2013/10/09/glusterfs-considerations-revision-1/
18:28 glusterbot <http://goo.gl/RPk5zx> (at technology.patnarciso.com)
18:37 an joined #gluster
18:41 vpshastry joined #gluster
18:43 AliRezaTaleghani joined #gluster
18:43 AliRezaTaleghani what does it mean?
18:43 AliRezaTaleghani Server and Client lk-version numbers are not same, reopening the fds
18:43 glusterbot AliRezaTaleghani: This is normal behavior and can safely be ignored.
18:45 vpshastry left #gluster
18:49 davinder joined #gluster
18:49 rwheeler joined #gluster
18:54 purpleidea PatNarciso: i don't recommend this setup. i think i mentioned this to you on an earlier day. the same reasons apply.
18:56 LoudNoises joined #gluster
18:58 PatNarciso very similar, yes -- I thought perhaps I had modified it enough to be back within reasonable realm.
19:00 SpeeR joined #gluster
19:01 purpleidea PatNarciso: personally, i don't think so. but if your testing shows this works for your scenario, then i can't argue with that, but i wouldn't design it that way.
19:01 purpleidea (if you add 'purpleidea' in the message somewhere, i'm more likely to see it and be able to respond in a helpful way)
19:02 l0uis PatNarciso: how fast are those links between locations?
19:02 purpleidea l0uis: 100mb/10mb
19:02 PatNarciso l0uis,  10mbps
19:02 purpleidea and no idea how bad latency is "{
19:02 purpleidea :P
19:03 l0uis yeah
19:03 PatNarciso anywhere between 35ms to 90ms
19:03 PatNarciso purpleidea, testing hasn't started yet.  in my opinion, if it doesn't have the #gluster blessing, it will never be tested.
19:03 l0uis PatNarciso: the SSDs are used for what? what role are they playing?
19:05 purpleidea PatNarciso: it doesn't have my blessing. but i'm just one person. everyone has different ideas. i don't think gluster is intended to be used the way you want to use it with the limitations and requirements you have
19:05 PatNarciso l0uis, perhaps I misunderstood one of the translators -- what I wanted to achieve was a machine that allowed fast access to each of the connecting clients.  as the most recent files are often the most important -- I thought it would be possible to configure the translator to favor newer files.
19:06 PatNarciso as the ssds have limited space, the role of those machines would simply be to serve the local lan/samba clients.
19:06 PatNarciso purpleidea, I appreciate that.
19:07 purpleidea PatNarciso: np. if you write up what you want to do, clearly and in detail, ping me and i'll try and suggest an architecture for you, whether it involves gluster or not
19:07 PatNarciso purpleidea, is there another ballpark solution you might recommend I look into?
19:08 purpleidea PatNarciso: you're kind of close to being an x y problem atm.
19:08 l0uis purpleidea: what's your objection to the setup, in a nutshell?
19:09 PatNarciso shared inexpensive storage over multiple locations.
19:11 purpleidea l0uis: [s?]he's trying to do geo replication with gluster normal distribute/replicate over 3 different sites with poor inter host bandwidth and latency. unless i misunderstood the description.
19:11 purpleidea PatNarciso: am i right to understand it's one single gluster pool, or three separate ones?
19:11 PatNarciso purpleidea, thats right on.
19:11 PatNarciso single pool.
19:12 l0uis ok i missed that
19:12 l0uis i thought you had gluster servers in 1 location
19:12 l0uis and clients in 2 locations mounting it
19:12 purpleidea l0uis: :P
19:12 purpleidea PatNarciso: you need an engineer/architect
19:12 PatNarciso indeed.
19:13 purpleidea PatNarciso: i'm just trying to give you good advice about what will / won't work. not trying to diss your setup for no reason.
19:13 JoeJulian What I see you needing is a collaborative tool that uses an object store for video editing. Check out the video, edit it locally, check it back in.
19:14 JoeJulian Then the object store could be S3, Swift, Swift on Gluster, whatever.
19:14 PatNarciso purpleidea, seriously, I appreciate all of this.  thank you.  keep it coming :)
19:14 purpleidea PatNarciso: do you understand _why_ the current setup won't work ?
19:14 purpleidea (or won't work well anyways)
19:17 PatNarciso a few ideas come to mind -- related to latency of response from the other gluster servers.  which I thought perhaps I was being slick about by placing the full contents of the pool locally at "locationA".
19:18 purpleidea PatNarciso: a single gluster pool usually works best on the same "switch".
19:18 PatNarciso so, I believe the reason why this setup wouldn't perform well is related to the overhead of cross gluster servers/nodes.
19:18 JoeJulian It all comes down to ,,(Joe's performance metric), and I suspect that your users will not be pleased with write speed to the remote replica.
19:18 glusterbot nobody complains.
19:20 PatNarciso same switch.  right.  and in my setup-- that's not gonna happen.
19:20 purpleidea PatNarciso: right
19:20 purpleidea it's an easy way to reason about it
19:21 purpleidea PatNarciso: you probably want some fast reliable storage locally where you do work, and some additional nodes off site to synchronize data to...
19:21 purpleidea s/nodes/hosts
19:21 l0uis PatNarciso: are location A & B working on the same videos?
19:21 PatNarciso l0uis, never the same video.  locking isn't an issue I'm concerned with.
19:21 * JoeJulian blows on his keyboard... "is this thing on?"
19:22 purpleidea JoeJulian: quit hitting the keys so hard
19:22 l0uis PatNarciso: well, more than just 'at the same time', but are they working on the same stuff in general.
19:22 l0uis PatNarciso: i.e., if location A only had access to files at location A, and vice versa for B (ignoring HA for the moment)
19:23 PatNarciso "A" produces a video that "B" needs to broadcast, usually within 6-8 hours time frame.
19:23 l0uis k
19:24 purpleidea sounds like someone has to get fancy and write software to manage all this, or get down and dirty and be clever with rsync
19:24 l0uis is that the standard workflow? does B ever produce a video that A needs to broadcast?
19:24 PatNarciso "B" broadcasts the videos within their live show, and produce a file -- "air check", or a big .mov of the live show.  "A" needs that "air check" as soon as possible so they can cut it up, make promos for social media, etc.
19:25 l0uis ok got it
19:25 l0uis put a gluster cluster at location A
19:25 purpleidea PatNarciso: two volumes, one syncs a->b, the other syncs b->a . done. whether those are independent gluster pools or just simple RAID is up to you.
19:26 badone joined #gluster
19:27 l0uis hm how big are the files?
19:28 PatNarciso l0uis, A->B, about 5x1GB files.    B->A about 9GB.   I'll get you the exacts if you'd like.
19:28 l0uis no no need.
19:29 PatNarciso purpleidea, what are you thinking the sync management would be, rsync?
19:29 l0uis i would have 1 gluster cluster at A
19:29 purpleidea PatNarciso: ya
19:29 m0zes_ joined #gluster
19:29 l0uis the folks doing the editing get a simple workflow, which includes, "when the file is ready for B, copy it to the outgoing dir"
19:29 l0uis write a simple shell script that uses inotify to initiate an rsync of files as they show up in the outgoing dir. rsync the files to a server at location B.
19:30 l0uis have a similar script at B that rsyncs the 9GB file back to A into an incoming dir
19:30 l0uis no need to rsync every hour, just write a script that uses inotify. should eliminate the delay
19:30 l0uis for disaster recovery do georep to location C
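A rough sketch of the watch-and-sync script l0uis describes, using inotify-tools (directory names and the remote host are assumptions):

    #!/bin/sh
    # push files to location B as soon as they land in the outgoing dir
    inotifywait -m -e close_write --format '%w%f' /gluster/outgoing |
    while read file; do
        rsync -av "$file" locationB:/incoming/
    done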
19:30 aliguori joined #gluster
19:30 purpleidea PatNarciso: what l0uis is saying is mostly the right thing to do.
19:30 purpleidea enough cooks for now. i'm out
19:31 l0uis adios
19:31 satheesh1 joined #gluster
19:31 PatNarciso purpleidea, thanks again
19:31 purpleidea np
19:31 l0uis PatNarciso: design your gluster cluster around a standard unit of scale
19:32 l0uis so that your bricks are the same size as you grow the cluster
19:33 PatNarciso hmm.  I'm curious-- why is that?
19:34 l0uis PatNarciso: http://gluster.org/pipermail/gluster-users/2012-March/009827.html
19:34 glusterbot <http://goo.gl/mWtViw> (at gluster.org)
19:36 PatNarciso "problems with small bricks filling up before larger bricks resulting in "device full" errors even when there was plenty of space left in the volume"
19:38 PatNarciso got it.  alright.
19:38 * PatNarciso is rethinking all of this.
19:38 l0uis PatNarciso: You might also consider not using location C at all, and just using a gluster cluster at A and using aws + s3 + glacier for disaster recovery
19:38 l0uis you'd need to do the math to see what's more cost-effective etc.
19:39 l0uis but that is essentially what we do for our setup
19:40 PatNarciso last time I did the AWS math, having locationC was the ideal way to go.  Since then, AWS came out with glacier.  I should run the math on that really quick.
19:41 l0uis yeah glacier makes it a lot cheaper
19:41 l0uis .01/gb/mo
19:42 PatNarciso so 12TB is $123 a month.
19:42 l0uis nod
19:43 PatNarciso 3TB seagate is about $125 on amazon.  low powered pc with usb3.0 hub is $200.
19:44 * l0uis doesn't think skimping on DR is a good idea
19:44 l0uis :)
19:44 l0uis i.e., you need to essentially replicate the gluster cluster
19:45 PatNarciso right.  5x3TB drives :)  j/k of course.
19:46 * redragon_ has 4x 3TB drives
19:46 redragon_ in each of his mirror nodes
19:46 redragon_ in a raid 0
19:47 dbruhn Nothing says stress like dealing with those DR situations you only kind of half planned for!
19:47 redragon_ or didn't plan for heh
19:47 dbruhn Aka, I agree with l0uis, $123 is nothing to get stuff offsite.
19:48 PatNarciso dbruhn, I totally agree.
19:49 dbruhn PatNarciso... I do work for an online backup provider though, so... I really am the other end of the pendulum.
19:50 PatNarciso l0uis, you've been a great help today.   Thank you very much.
19:50 dbruhn I hate having to deal with someone who wasn't backing up something because it wasn't important and then having them upset because they weren't backing it up.
19:51 l0uis PatNarciso: np
19:52 PatNarciso dbruhn, I'd imagine, while it's not ideal, that a bunch of USB3.0 drives in a RAID5 configuration could get a small company pretty far.
19:54 * l0uis begins to twitch
19:55 JoeJulian Just make sure you duct tape those enclosures to the pc so they don't get knocked off.
19:55 JoeJulian Maybe wrap the whole thing in bailing wire while you're at it.
19:57 PatNarciso nah, duct tape holds in too much heat.  we want those drives to last as long as possible ya know.
19:57 JoeJulian The bailing wire adds heat dispersal.
20:01 PatNarciso so, perhaps tomorrows blog will be: how to build the largest storage capacity gluster node under $500 USD.
20:02 PatNarciso which is double my budget.  I just said $500 so I didn't look poor.  :)
20:03 JoeJulian :)
20:10 PatNarciso JoeJulian, thanks again for your help today.
20:13 andreask joined #gluster
20:30 khushildep_ joined #gluster
21:06 badone joined #gluster
21:24 YazzY joined #gluster
21:24 YazzY hi guys
21:25 YazzY I'm testing gluster with two bricks of different size in one volume set up in replication.
21:25 YazzY I just filled up one of the gluster nodes and i can still copy files from a client to the share
21:26 YazzY does that mean one of the server nodes which still has space left will take all the writing now and the one which is full will just stay there until i delete files?
21:26 YazzY i'm on gluster 3.4.1
21:27 l0uis YazzY: Bricks of diff size can cause issues. http://gluster.org/pipermail/gluster-users/2012-March/009827.html
21:27 glusterbot <http://goo.gl/mWtViw> (at gluster.org)
21:27 YazzY l0uis: that's one year old...
21:27 YazzY 1.5 actually
21:28 YazzY much can have happened since then
21:28 YazzY and much actually did since 3.2.4
21:30 YazzY anyway, i'm just playing around, it's not a production env.
21:31 YazzY l0uis: all the references from your link are pointing to 404, these url's are wrong
21:31 YazzY ah, i can fix them
21:31 JoeJulian It can work. If you fill up a brick and then try to append to a file on the full brick, that can fail. And, of course, it's very inefficient since all the misplaced files will need to create sticky pointers on the full brick.
21:32 JoeJulian More info on DHT and how having a full brick can be inefficient: http://joejulian.name/blog/dht-misses-are-expensive/
21:32 glusterbot <http://goo.gl/A3mCk> (at joejulian.name)
21:33 JoeJulian wow... that's almost a year old? Man time flies.
21:34 YazzY thanks guys
21:35 YazzY is any of you using KVM together with gluster with live migration of the guests?
21:36 YazzY JoeJulian: hey, i've already been reading your blog about stripes. Cool stuff!
21:37 JoeJulian I am using kvm with gluster to allow live migration. Just mount the directory in the same place on both servers. Nothing to it.
21:37 JoeJulian Glad it's helpful.
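A sketch of the arrangement JoeJulian describes for live migration: mount the same volume at the same path on both hypervisors, then migrate with libvirt (host, volume and guest names are assumptions):

    # on both KVM hosts
    mount -t glusterfs gluster1:/vmimages /var/lib/libvirt/images
    # live-migrate a running guest from the current host to kvm2
    virsh migrate --live guest1 qemu+ssh://kvm2/system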
21:38 YazzY JoeJulian: do you use it together with pacemaker ?
21:39 JoeJulian No, I'm using openstack and puppet.
21:40 YazzY JoeJulian: on which distro if i may ask ?
21:40 failshell joined #gluster
21:40 JoeJulian CentOS 6.4
21:41 YazzY JoeJulian: does openstack support automatic guest migration when one of the hosts is down ?
21:43 JoeJulian If the "compute node" is down, the "instance" will be either "shutdown" or "error" so I just have puppet start it again in that case.
21:44 JoeJulian It's pretty hacky right now. I'm rewriting the puppet module(s) to be cleaner and worthy of publication.
21:44 YazzY JoeJulian: what happens in a split-brain situation when the guest VM runs on two hosts on a gluster mounted file system ?
21:45 JoeJulian That would suck...
21:45 JoeJulian I count on openstack to manage that.
21:45 YazzY yeah, i think it can happen with your puppetized solution...
21:46 YazzY puppet not keeping up and switching a host on/off fast enough
21:46 JoeJulian I think qemu locks the image file, so that shouldn't be a problem.... not sure though.
21:46 JoeJulian Like I said, it's pretty hacky.
21:47 JoeJulian I just had to get something done so I could go out of town without any major worries.
21:48 YazzY i'll see if i can make it work with pacemaker so it keeps everything under control (along with glusterfs services)
21:48 JoeJulian I wouldn't manage glusterfs with pacemaker. I would use the built in quorum control.
21:50 YazzY i already enabled that but cluster would just start services and make sure they're not down in case of an issue
21:50 YazzY it's also good at spamming me when the cluster has problems :)
21:51 YazzY but yeah, i have quorum set to 51%
21:52 YazzY that's for internal gluster communication to figure out when a brick is feeling bad
21:53 JoeJulian There's also server quorum
21:54 dbruhn JoeJulian is quorum on 3.3.1?
21:54 JoeJulian dblack: volume quorum is, but server quorum came with 3.4.
21:54 JoeJulian gah, sorry dblack, dbruhn
21:54 dbruhn all good
21:54 dbruhn what is the difference between volume and server quorum
21:55 JoeJulian Oh, it's just Dustin. I can disturb him.
21:56 JoeJulian volume will stop writes to the volume if the client loses a replica brick quorum. Server quorum shuts down the server if it loses quorum with the other servers.
21:56 YazzY JoeJulian: like a STONITH ?
21:57 JoeJulian More like suicide.
21:57 YazzY the server gets powered off or the services are shut down?
21:58 JoeJulian Services.
21:58 YazzY interesting
21:59 JoeJulian in other words, takes a very strong CP stance.
21:59 JoeJulian er, C stance.
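For comparison with the server quorum YazzY sets below, the client-side (volume) quorum JoeJulian mentions is enabled per volume; a sketch with a placeholder volume name:

    gluster volume set <VOLNAME> cluster.quorum-type auto
    # 'auto' allows writes only while more than half of each replica set
    # (or exactly half, including the first brick) is reachable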
21:59 YazzY there it is indeed http://www.gluster.org/community/documentation/index.php/Features/Server-quorum
21:59 glusterbot <http://goo.gl/vrw2D> (at www.gluster.org)
22:02 YazzY tada!
22:02 YazzY gluster> volume set virtual_servers cluster.server-quorum-type server
22:02 YazzY gluster> volume set all cluster.server-quorum-ratio 51%
22:03 YazzY now i only need to pull out the network cable and see how well it works
22:05 YazzY JoeJulian: i'm just not sure which one would be more appropriate to use, the server or the volume quorum
22:05 YazzY one disabling writes to the brick, the other taking the service down
22:05 YazzY it's kinda the same end result
22:07 YazzY or will a volume still be available for writes with the server quorum enabled ?
22:11 dblack JoeJulian: disturb away ;)
22:19 haritsu joined #gluster
22:19 YazzY hm
22:20 YazzY with bricks of a different size, i still can write to the mount on a client when one of the bricks is full
22:20 YazzY files are stored on the brick with free space but not on the one which is full
22:20 YazzY but the client sees what's on the full brick
22:21 YazzY this is a strange behaviour
22:22 JoeJulian YazzY: Oh, right... it is PA. I was thinking of it all wrong. The volume remains available because the odd-man-out is down, causing the volume to be "safe" to accept writes as the missing server will self-heal when you've solved its problem.
22:24 YazzY JoeJulian: that's with the server quorum, right ?
22:32 JoeJulian Yep. That can cause other problems but at the very basic level that does work.
22:43 JoeJulian Weird... I lost connection and the playback's all out of order. :D
22:54 mistich1 joined #gluster
22:58 mistich1 joined #gluster
23:04 tryggvil joined #gluster
23:12 dtyarnell joined #gluster
23:17 atrius` joined #gluster
23:19 Cenbe joined #gluster
23:21 dneary joined #gluster
23:23 haritsu joined #gluster
23:41 dneary joined #gluster
23:43 clong joined #gluster
