
IRC log for #gluster, 2014-05-15


All times shown according to UTC.

Time Nick Message
00:10 Sunghost quit
00:38 jbd1 joined #gluster
00:39 plarsen joined #gluster
00:40 bala joined #gluster
00:41 primechuck joined #gluster
00:43 primechu_ joined #gluster
01:15 mjsmith2 joined #gluster
01:24 harish joined #gluster
01:27 gdubreui joined #gluster
01:27 gdubreui joined #gluster
01:28 aviksil joined #gluster
01:29 Ark joined #gluster
01:34 badone joined #gluster
01:45 gdubreui joined #gluster
01:55 Kins joined #gluster
01:57 jwww joined #gluster
02:00 wgao joined #gluster
02:04 mjsmith2 joined #gluster
02:08 Ark joined #gluster
02:11 theron joined #gluster
02:18 sauce joined #gluster
02:27 DV joined #gluster
02:28 ubungu joined #gluster
02:29 ubungu halo guys
02:29 ubungu How can I remove a dead brick in a Distributed Replicated volume?
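(For what it's worth, the usual approach on a distributed-replicated volume is to swap the dead brick for a fresh one and let self-heal repopulate it; a minimal sketch, assuming a hypothetical volume "myvol" and hypothetical hosts/brick paths:)

    # replace the dead brick with a new, empty one (deadhost/newhost are placeholders)
    gluster volume replace-brick myvol deadhost:/bricks/b1 newhost:/bricks/b1 commit force
    # trigger a full self-heal so the surviving replica repopulates the new brick
    gluster volume heal myvol full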
02:34 theron_ joined #gluster
02:42 MugginsO joined #gluster
02:45 ceiphas_ joined #gluster
02:55 sks joined #gluster
03:06 bharata-rao joined #gluster
03:13 dusmant joined #gluster
03:13 yinyin joined #gluster
03:23 john3213 joined #gluster
03:28 john3213 left #gluster
03:33 shylesh__ joined #gluster
03:33 an joined #gluster
03:38 kanagaraj joined #gluster
03:45 shubhendu joined #gluster
03:45 gmcwhist_ joined #gluster
03:49 kshlm joined #gluster
03:49 RameshN joined #gluster
03:57 davinder joined #gluster
03:58 RameshN_ joined #gluster
04:00 rastar joined #gluster
04:11 hchiramm_ joined #gluster
04:18 theron joined #gluster
04:21 aviksil joined #gluster
04:23 ppai joined #gluster
04:28 sahina joined #gluster
04:31 DV__ joined #gluster
04:36 yinyin joined #gluster
04:39 zerick joined #gluster
04:40 ramteid joined #gluster
04:41 davinder joined #gluster
04:49 ndarshan joined #gluster
04:49 DV__ joined #gluster
04:50 itisravi joined #gluster
04:50 mjsmith2 joined #gluster
04:51 kdhananjay joined #gluster
04:51 bharata-rao joined #gluster
04:52 dcherednik_ joined #gluster
04:53 meghanam joined #gluster
04:54 nated joined #gluster
04:56 hagarth joined #gluster
05:04 mjsmith2 joined #gluster
05:05 nated joined #gluster
05:05 mjsmith2_ joined #gluster
05:06 aravindavk joined #gluster
05:07 mjsmith__ joined #gluster
05:13 rjoseph joined #gluster
05:13 prasanthp joined #gluster
05:19 sputnik13 joined #gluster
05:19 haomaiwa_ joined #gluster
05:25 sputnik13 joined #gluster
05:29 saurabh joined #gluster
05:31 Ark joined #gluster
05:37 DV__ joined #gluster
05:38 dcherednik joined #gluster
05:38 glusterbot New news from newglusterbugs: [Bug 1086748] Add documentation for the Feature: AFR CLI enhancements <https://bugzilla.redhat.com/show_bug.cgi?id=1086748>
05:41 sputnik1_ joined #gluster
05:43 vimal joined #gluster
05:45 nshaikh joined #gluster
05:46 ravindran1 joined #gluster
05:47 kanagaraj joined #gluster
05:48 rjoseph joined #gluster
05:49 Philambdo joined #gluster
05:53 hagarth joined #gluster
06:00 vpshastry joined #gluster
06:05 rahulcs joined #gluster
06:06 mjsmith2 joined #gluster
06:09 glusterbot New news from newglusterbugs: [Bug 1098025] Disconnects of peer and brick is logged while snapshot creations were in progress during IO <https://bugzilla.redhat.com/show_bug.cgi?id=1098025>
06:33 bharata-rao joined #gluster
06:33 DV joined #gluster
06:34 ktosiek joined #gluster
06:34 dcherednik joined #gluster
06:36 ricky-ti1 joined #gluster
06:47 TvL2386 joined #gluster
06:53 fidevo joined #gluster
06:56 DV__ joined #gluster
06:56 ktosiek is there a list of all the release notes somewhere? I'll be upgrading 3.2.6 -> 3.5, and I want to review expected problems
06:57 ktosiek hmm, or I'll just setup 3.5 and rsync the data, the files are only removed on demand anyway...
07:01 ctria joined #gluster
07:03 DV joined #gluster
07:06 mjsmith2 joined #gluster
07:06 Slashman joined #gluster
07:08 mjsmith2_ joined #gluster
07:08 prasanthp joined #gluster
07:09 psharma joined #gluster
07:11 DV__ joined #gluster
07:14 dusmant joined #gluster
07:14 prasanthp joined #gluster
07:17 prasanthp joined #gluster
07:17 tjikkun_work joined #gluster
07:18 ProT-0-TypE joined #gluster
07:27 sonicrose joined #gluster
07:28 sonicrose hi all.  does anyone in here do paid consulting re: glusterfs?
07:28 sonicrose irc is always my last resort when i'm stuck and i'm really stuck on this
07:29 sonicrose my gluster crashes when i write to a volume with stripe.  seems ok without stripe, but extremely slow
07:30 bala1 joined #gluster
07:31 sonicrose i'd be willing to pay someone who knows gluster well enough to figure out what's happening and how i can improve things
07:33 ndevos sonicrose: I'm not sure ,,(stripe) is what you want, but well, it should definitely not crash
07:33 glusterbot sonicrose: Please see http://joejulian.name/blog/should-i-use-stripe-on-glusterfs/ about stripe volumes.
07:34 codex joined #gluster
07:35 ndevos sonicrose: I think there is a donate button on that blog, if it helps you, I'm sure JoeJulian would appreciate a bitcoin or something :)
07:36 sonicrose :)
07:36 sonicrose well i'm not making a stripe only volume
07:37 sonicrose i've got 2 servers with 9 4TB drives in each
07:37 sonicrose i just want to be able to lose a whole server and everything stays up
07:38 nishanth joined #gluster
07:38 sonicrose i've tried distribute-replica and it works OK, but slower than I hoped
07:38 sonicrose if i do replica only it works too
07:38 hagarth joined #gluster
07:38 ndevos sonicrose: is there a specific workload or event (like a storage server reboot) that triggers the crashes?
07:39 sonicrose but once i make a volume either stripe-replica or stripe-distribute-replica nfs service crashes  as soon as i write to the volume
07:39 ndevos sonicrose: what exactly crashes, the glusterfs mountpoint, the brick processes or something else?
07:39 sonicrose its probably not nfs related actually since if i mount with gluster it disconnects me too
07:39 sonicrose brick processes seem OK
07:40 fsimonce joined #gluster
07:40 sonicrose when it crashes i can tell since gluster volume status says NFS is not running on the particular server i was accessing
07:40 ndevos I'm not sure, but I think any stripe-<combination> is not very often used, and therefore not much tested
07:40 sonicrose and that server makes a /core.xxxx file
07:41 sonicrose ah, interesting.  i assumed replica + stripe would be pretty common
07:41 ndevos sonicrose: you probably should file a bug for that, and if you can, attach the core.$pid file there, explain the volume layout, and include the logs
07:41 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
07:42 ndevos sonicrose: no, replica+distribute is the common setup, stripe is not practical for backup/restore because the files on the bricks are not 'normal' files anymore
07:43 sonicrose should it be ok to have a glusterfs server that has no bricks on it, that is just used as an NFS target ?
07:43 sonicrose to access volumes on other servers in the cluster
07:44 ndevos sonicrose: the common usecase for stripe only (no replica) is for workloads that generate big files, like rendering videos; the loss of a striped volume is not critical since the sources can get rendered again
07:44 ndevos sonicrose: yeah, you can do that, just add a new server to the trusted pool, and that can be used as an nfs server
07:45 andreask joined #gluster
07:46 sonicrose ok that's been my approach.  i have virtualization hosts that support NFS but not gluster mounts
07:46 sonicrose but they can run gluster-server
07:46 sonicrose so i tell the virtual host cluster to connect to 127.0.0.1:/volumename
07:47 ngoswami joined #gluster
07:47 sonicrose and it seems to work ok in distribute mode
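(A minimal sketch of the brickless NFS-gateway setup being described, assuming a hypothetical gateway host "nfs-gw" and a hypothetical volume "myvol":)

    # on an existing pool member: add the gateway node (it runs glusterd but hosts no bricks)
    gluster peer probe nfs-gw
    # on the hypervisor: mount the volume through the gateway's built-in Gluster/NFS server
    mount -t nfs -o vers=3,tcp nfs-gw:/myvol /mnt/myvol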
07:47 sonicrose so maybe if i forget about stripe i can stay in replica-distribute
07:48 sonicrose i'm just concerned over the write speeds then... i can only seem to get it to muster like 30MB/sec which doesn't make sense with 9xSAS2 drives and 10GbE
07:48 ndevos yes, it does not sound like stripe is what you should use in this case - it's just not well tested for this kind of workloads
07:48 liquidat joined #gluster
07:50 ndevos hmm, indeed that sounds as if it should be able to go faster, but I'm no performance expert...
07:50 ndevos how do you test things?
07:50 sonicrose i need to hire a performance expert :p
07:50 ndevos hehe :)
07:51 sonicrose i'm testing by running a VM that has its VHD file stored on a gluster volume, then using dd within the VM
07:51 sonicrose if i specify oflag=direct i only get like 25MB/sec
07:51 sonicrose i get the same results without the hypervisor though
07:52 sonicrose i tried from a separate non-virtual client, using gluster mounts and nfs mounts. the gluster mounts seemed to go much faster than nfs
07:54 ndevos that glusterfs is faster than nfs is probably expected, glusterfs writes to the bricks directly (over the network), writing to nfs passes the network twice ( -> nfs -> bricks )
07:55 ndevos but remember that gluster is intended to be scalable, it may well be that multiple VMs can write with that speed at the same time...
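(For the kind of throughput testing discussed here, a quick buffered-vs-direct comparison from a client can be done with dd; a sketch, assuming a hypothetical mount at /mnt/gv0:)

    # buffered writes: the page cache can make this look much faster than the storage really is
    dd if=/dev/zero of=/mnt/gv0/ddtest bs=1M count=1024 conv=fdatasync
    # direct I/O: closer to what a VM image opened with cache=none experiences
    dd if=/dev/zero of=/mnt/gv0/ddtest bs=1M count=1024 oflag=direct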
08:06 mjsmith2 joined #gluster
08:09 glusterbot New news from newglusterbugs: [Bug 1092433] DHT + Snapshot :- If snapshot is taken when Directory is created only on hashed sub-vol; On restoring that snapshot Directory is not listed on mount point and lookup on parent is not healing <https://bugzilla.redhat.com/show_bug.cgi?id=1092433>
08:22 DV joined #gluster
08:23 nshaikh joined #gluster
08:23 raghu joined #gluster
08:31 glusterbot New news from resolvedglusterbugs: [Bug 1086753] Add documentation for the Feature: Preventing NFS restart on volume change <https://bugzilla.redhat.com/show_bug.cgi?id=1086753>
08:39 glusterbot New news from newglusterbugs: [Bug 1086743] Add documentation for the Feature: RDMA-connection manager (RDMA-CM) <https://bugzilla.redhat.com/show_bug.cgi?id=1086743> || [Bug 1086782] Add documentation for the Feature: oVirt 3.2 integration <https://bugzilla.redhat.com/show_bug.cgi?id=1086782>
08:44 dcherednik joined #gluster
08:52 rgustafs joined #gluster
08:55 borreman joined #gluster
08:56 olisch joined #gluster
09:04 borreman_dk joined #gluster
09:07 qdk joined #gluster
09:09 borreman_dk Hi guys, two node gluster - replica 2. One node is offline - gluster volume status all only shows the healthy part of the volume. Any way to see failed/offline bricks? Ver is 3.4.2
09:09 morse joined #gluster
09:22 ndevos borreman_dk: I'm not sure, but that may be visible in 'gluster volume status' in 3.5...
09:31 borreman_dk ok, will try 3.5 when its back up. How do you people normally discover when something goes offline?
09:31 kkeithley_ joined #gluster
09:33 kkeithley_ left #gluster
09:47 ninkotech joined #gluster
09:47 ninkotech_ joined #gluster
09:52 nshaikh joined #gluster
09:52 jmarley joined #gluster
09:52 jmarley joined #gluster
09:56 ravindran1 joined #gluster
09:56 ravindran1 left #gluster
09:58 rejy joined #gluster
09:59 ndevos borreman_dk: I think most people have tools to watch logs and all
10:06 mjsmith2 joined #gluster
10:16 jwww Hello.
10:16 glusterbot jwww: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
10:20 tryggvil joined #gluster
10:26 borreman_dk ndevos: I have tools and all :), but I do find it odd the gluster console isn't more aware of offline bricks.
10:31 teemupo joined #gluster
10:37 harish joined #gluster
10:42 an joined #gluster
10:46 ndevos borreman_dk: I think the ovirt console displays offline bricks too, but I have not looked into it very much
10:54 ppai joined #gluster
11:02 jmarley joined #gluster
11:07 rahulcs joined #gluster
11:10 rahulcs joined #gluster
11:14 gdubreui joined #gluster
11:15 burn420 joined #gluster
11:19 rahulcs joined #gluster
11:19 aviksil joined #gluster
11:20 ppai joined #gluster
11:21 an joined #gluster
11:21 yinyin joined #gluster
11:21 rahulcs joined #gluster
11:24 vpshastry1 joined #gluster
11:27 rahulcs joined #gluster
11:28 primechuck joined #gluster
11:30 DV joined #gluster
11:30 rahulcs joined #gluster
11:32 ubungu left #gluster
11:34 Slashman joined #gluster
11:38 diegows joined #gluster
11:51 andreask joined #gluster
11:59 eseyman joined #gluster
12:03 an joined #gluster
12:05 sroy_ joined #gluster
12:06 mjsmith2 joined #gluster
12:11 RameshN_ joined #gluster
12:12 RameshN joined #gluster
12:14 rahulcs joined #gluster
12:17 foster joined #gluster
12:22 kkeithley_ joined #gluster
12:28 ppai joined #gluster
12:32 glusterbot New news from resolvedglusterbugs: [Bug 994876] Dist-geo-rep : geo rep status should show detail info. for recent sync performance in files/sec, bytes/sec. <https://bugzilla.redhat.com/show_bug.cgi?id=994876>
12:36 davinder joined #gluster
12:36 kdhananjay joined #gluster
12:37 mjsmith2 joined #gluster
12:40 glusterbot New news from newglusterbugs: [Bug 798546] [glusterfs-3.3.0qa24]: create set command for md-cache & remove stat-prefetch set <https://bugzilla.redhat.com/show_bug.cgi?id=798546> || [Bug 864503] gluster volume geo-replication config --xml outputs wrong xml structure <https://bugzilla.redhat.com/show_bug.cgi?id=864503> || [Bug 864509] gluster volume geo-replication config key value --xml outputs wrong xml structure <https://
12:48 jag3773 joined #gluster
12:54 Neeper joined #gluster
12:54 Neeper hey o/
12:56 Neeper been trying to setup a replica 2 gluster cluster and following the getting started guide on the site
12:56 Neeper everything seems to be working ok; volume info and peer status say everything is fine, but files I write to the mount only end up on node1, not node2
12:58 Neeper also time required to mount the volume seems to be in the "minutes" category, is this expected behaviour?
12:58 rahulcs joined #gluster
12:59 Neeper there are no obvious errors in the logs folder to indicate a problem replicating
12:59 bennyturns joined #gluster
13:01 jmarley joined #gluster
13:01 jmarley joined #gluster
13:09 theron joined #gluster
13:11 bala1 joined #gluster
13:14 plarsen joined #gluster
13:22 primechuck joined #gluster
13:35 rahulcs joined #gluster
13:36 sjm joined #gluster
13:37 rahulcs joined #gluster
13:39 rahulcs_ joined #gluster
13:39 m0zes Neeper: how did you mount the volume?
13:43 Neeper m0zes: mount -t glusterfs node1:/gv0 /mnt/test
13:43 Neeper I think i've solved the synching problem actually
13:44 Neeper found it talking on a port not listed in the docs
13:44 m0zes okay. I've seen people in the past write directly to the bricks, and they saw replication delays like the ones you seemed to be seeing.
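(For anyone hitting the same symptom: writes have to go through a client mount, FUSE or NFS, so AFR can replicate them; writing into a brick directory bypasses replication entirely. A sketch, with the hypothetical volume/paths used above:)

    # correct: write through the mounted volume
    mount -t glusterfs node1:/gv0 /mnt/test
    cp somefile /mnt/test/
    # wrong: copying straight into a brick path (e.g. /data/brick1/gv0) is never replicated

(The port issue mentioned is also a common one: besides 24007 for glusterd, gluster 3.4+ bricks listen on ports from 49152 upward, which firewalls often block.)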
13:46 mjsmith2 joined #gluster
13:49 coredump joined #gluster
13:49 nshaikh joined #gluster
13:54 diegows joined #gluster
14:00 ghenry_ joined #gluster
14:01 ghenry joined #gluster
14:05 B21956 joined #gluster
14:07 P0w3r3d joined #gluster
14:10 a2_ joined #gluster
14:17 LoudNoises joined #gluster
14:27 jskinner_ joined #gluster
14:28 japuzzo joined #gluster
14:30 Ark joined #gluster
14:34 ndk` joined #gluster
14:35 haomaiwang joined #gluster
14:42 kkeithley1 joined #gluster
14:42 jiffe98 gluster> volume heal WEB info
14:42 jiffe98 and its stuck, not doing anything
14:42 jiffe98 grr
14:42 olisch joined #gluster
14:45 kkeithley_ left #gluster
14:45 scuttle_ joined #gluster
14:47 kkeithley_ joined #gluster
14:47 liquidat joined #gluster
14:47 rahulcs joined #gluster
15:01 jwww I'm using gluster 3.3 on two servers, with 4 bricks on each. How do I prevent a replica pair from ending up on 2 bricks on the same server?
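(For reference: replica sets are built from consecutive bricks in the order given at volume-create time, so listing bricks alternately by server keeps each pair on different machines; for an existing volume the pairing can only be changed by moving bricks. A sketch with hypothetical hosts srv1/srv2 and a hypothetical volume "myvol":)

    gluster volume create myvol replica 2 \
        srv1:/bricks/b1 srv2:/bricks/b1 \
        srv1:/bricks/b2 srv2:/bricks/b2 \
        srv1:/bricks/b3 srv2:/bricks/b3 \
        srv1:/bricks/b4 srv2:/bricks/b4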
15:07 ricky-ticky1 joined #gluster
15:09 rahulcs joined #gluster
15:11 daMaestro joined #gluster
15:15 cvdyoung Hi,
15:15 cvdyoung What do most companies do for backups?
15:17 jwww Hi cvdyoung . I use bacula ( http://www.bacula.org )
15:19 cvdyoung Thanks, we are building a replicated environment with another set of systems in a distributed setup for just our backups that we would run nightly.  I was curious how other companies/people manage their gluster backups.  Appreciate the info
15:21 ndevos cvdyoung: geo-replication is one way to make backups of your gluster volumes
15:22 ricky-ticky joined #gluster
15:29 jiffe98 rsync?
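(The geo-replication route mentioned above is set up roughly like this on 3.5; a sketch, where "myvol", "backuphost" and "backupvol" are hypothetical and the slave volume must already exist:)

    # generate and distribute the pem keys used by geo-replication
    gluster system:: execute gsec_create
    # create and start the session from the master volume to the remote slave volume
    gluster volume geo-replication myvol backuphost::backupvol create push-pem
    gluster volume geo-replication myvol backuphost::backupvol start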
15:31 dbruhn joined #gluster
15:32 kkeithley1 joined #gluster
15:32 glusterbot New news from resolvedglusterbugs: [Bug 764655] NetBSD port <https://bugzilla.redhat.com/show_bug.cgi?id=764655>
15:36 vpshastry joined #gluster
15:51 pdrakewe_ joined #gluster
15:51 sprachgenerator joined #gluster
15:52 aviksil joined #gluster
15:59 Gino_Semseo joined #gluster
15:59 Gino_Semseo Hello.... someone listening? ;)
16:02 Gino_Semseo I have a problem and cannot find anything useful in the documentation.... it is a matter of initializing a glusterfs cluster along with 50GB of existing files...
16:04 Gino_Semseo If I try to use the glusterfs-client, along with a mounted brick, to sync the media, the replication is too slow at the beginning... I am sure there must be a method to start the glusterfs cluster with all these gigabytes of files already inside at the beginning...
16:04 Gino_Semseo could it be?
16:04 Gino_Semseo (thanks... i need a little help here)
16:08 swebb joined #gluster
16:10 jag3773 joined #gluster
16:12 jbd1 joined #gluster
16:15 chirino joined #gluster
16:19 pk1 joined #gluster
16:19 pk1 kkeithley_: ping
16:19 glusterbot pk1: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
16:19 ndevos :P
16:20 pk1 ndevos: :-)
16:20 pk1 ndevos: I have been trying to find kkeithley_. Where is he :-(
16:23 ndevos pk1: he mostly is around here somewhere, maybe lunchtime?
16:23 pk1 ndevos: Hmmm... I will wait for a bit...
16:23 avati joined #gluster
16:25 rahulcs joined #gluster
16:25 ndevos pk1: maybe kkeithley (without the _) is available?
16:25 ndevos he seems to be here twice....
16:26 ndevos oh, hello nixpanic_ !
16:29 k3rmat_ joined #gluster
16:29 vpshastry left #gluster
16:34 k3rmat_ I have gluster running on a server that is also running NFS to export other filesystems. I've changed the nfs.port number to 2410 on my gluster volume so I could run them side by side, but I am having issues mounting gluster using NFS. I can, however, mount the volume using glusterfs... I've tried specifying port, protocol, and nfs version 3 in my mount options with no luck... any advice?
16:43 ndevos k3rmat_: you can only run one NFS-server (Linux kernel or Gluster/NFS) at the same time on a server
16:45 Gino_Semseo Hi.. could someone help? If I try to use the glusterfs-client, along with a mounted brick, to sync media, the replication is too slow at the beginning (50GB)... I am sure there must be a method to start the glusterfs cluster with all these gigabytes of files inside at the beginning...
16:45 Gino_Semseo is it possible?
16:49 bala joined #gluster
16:50 bala joined #gluster
16:52 k3rmat_ ndevos: Does it have to do with ports?
16:53 k3rmat_ because I can start both daemons without error after changing the tcp.port for glusterfs
16:54 ndevos k3rmat_: it has to do with the portmapper (rpcbind) that registers an nfs-server (and related services), it can register only one instance of a particular service
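(A common workaround, sketched with a hypothetical volume "myvol": keep the kernel NFS server for the other exports, turn off Gluster/NFS for the volume, and access the volume with the native client instead; the reverse is also possible, i.e. stop the kernel NFS server and let Gluster/NFS serve everything.)

    gluster volume set myvol nfs.disable on
    mount -t glusterfs localhost:/myvol /mnt/myvol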
16:56 giannello joined #gluster
16:57 sjusthome joined #gluster
17:05 jobewan joined #gluster
17:11 teemupo Hi, I'm running a benchmark against a Striped-Replicated gluster (stripe 7 replica 2) and found out read operations will end up only on master bricks. Is there any way to spread reads on replica bricks as well (doubling the read performance)? My client is connected to gluster using the native glusterfs client.
17:14 samppah teemupo: what version you are using? it should spread reads to other bricks but it's not always even
17:14 samppah btw, are you from nummela? :)
17:14 teemupo samppah: I'm using 3.5
17:15 teemupo I'm from Vantaa :)
17:16 samppah i'm not very familiar with striped setups (haven't tried that at all) so i'm not sure if there are any bugs or anything like that
17:18 mjsmith2 joined #gluster
17:22 teemupo the issue is that my gluster nodes are hitting their network i/o limit (running on Amazon m1.xlarge instances) when I'm reading from an instance with 10gig network
17:23 teemupo being able to spread the reads to replicas should help with it, but now they are just idling and letting others do the work
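(One knob that may be worth experimenting with here, as a guess rather than something confirmed in this channel, is the AFR read tunable cluster.read-hash-mode, which influences which replica serves reads; check `gluster volume set help` on 3.5 for the exact value semantics. A sketch with a hypothetical volume "myvol":)

    gluster volume set myvol cluster.read-hash-mode 2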
17:25 edward1 joined #gluster
17:26 zaitcev joined #gluster
17:34 sjusthome joined #gluster
17:36 davinder joined #gluster
17:49 mkereczman joined #gluster
17:51 mkereczman left #gluster
18:00 Ark joined #gluster
18:00 hagarth joined #gluster
18:05 hexaclock left #gluster
18:18 maduser joined #gluster
18:18 maduser joined #gluster
18:25 an joined #gluster
19:00 ThatGraemeGuy joined #gluster
19:02 VerboEse joined #gluster
19:08 edward1 left #gluster
19:11 Ark joined #gluster
19:11 ktosiek joined #gluster
19:22 theron_ joined #gluster
19:27 rahulcs joined #gluster
19:33 sroy_ joined #gluster
19:36 edward1 joined #gluster
19:39 chirino joined #gluster
19:56 zerick joined #gluster
19:59 rahulcs joined #gluster
20:01 Ark joined #gluster
20:06 tdasilva joined #gluster
20:10 chirino joined #gluster
20:21 y4m4 joined #gluster
20:21 qdk joined #gluster
20:34 thogue joined #gluster
20:34 theron joined #gluster
20:41 siel joined #gluster
20:44 rotbeard joined #gluster
20:44 badone joined #gluster
20:50 mattappe_ joined #gluster
21:02 Ark joined #gluster
21:02 gdubreui joined #gluster
21:04 mattapperson joined #gluster
21:08 theron joined #gluster
21:09 anotheral how do I get rid of the duplicate directories that show up during a rebalance?
21:16 [o__o] joined #gluster
21:16 mattappe_ joined #gluster
21:17 mattapperson joined #gluster
21:18 sputnik13 joined #gluster
21:22 mattappe_ joined #gluster
21:24 theron joined #gluster
21:26 fyxim_ joined #gluster
21:28 johnmwilliams__ joined #gluster
21:44 JoeJulian There shouldn't be any.
21:46 mattappe_ joined #gluster
21:51 mjsmith2 joined #gluster
22:11 sjm left #gluster
22:12 chirino joined #gluster
22:21 primechuck joined #gluster
22:27 tryggvil joined #gluster
22:27 diegows joined #gluster
22:37 jag3773 joined #gluster
22:50 sjm joined #gluster
22:52 fidevo joined #gluster
23:06 primechuck joined #gluster
23:12 swebb joined #gluster
23:30 theron joined #gluster
23:44 chirino joined #gluster
23:45 nated does that make them derplicates?
23:58 anotheral it makes them a pain in my ass :)
