
IRC log for #gluster, 2013-12-09


All times shown according to UTC.

Time Nick Message
00:09 marvinc test
00:11 mattappe_ joined #gluster
00:13 bgpepi joined #gluster
00:20 mattapp__ joined #gluster
00:21 cominfo joined #gluster
00:32 marvinc joined #gluster
00:40 daMaestro joined #gluster
00:45 mattapp__ joined #gluster
01:09 JoeJulian ~ping-timeout | marvinc
01:09 glusterbot marvinc: The reason for the long (42 second) ping-timeout is because re-establishing fd's and locks can be a very expensive operation. Allowing a longer time to reestablish connections is logical, unless you have servers that frequently die.
01:10 JoeJulian marvinc: ... also, stopping a server (like for maintenance) does not incur the timeout since the close process properly terminates the tcp connection. It's only when the tcp connection is not terminated that the ping-timeout comes in to play.
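
(For reference: the timeout glusterbot describes is the network.ping-timeout volume option, and 42 seconds is the default. If you really need to change it, it is an ordinary volume option; the volume name below is a placeholder.)

    gluster volume set myvol network.ping-timeout 42
    gluster volume info myvol      # explicitly set options appear under "Options Reconfigured"
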
01:15 smasha82_ joined #gluster
01:41 glusterbot New news from resolvedglusterbugs: [Bug 835757] DHT: remove dependency on glusterfsd-mgmt <https://bugzilla.redhat.com/show_bug.cgi?id=835757> || [Bug 770914] glusterd crashes with add-brick (replica to dist-replica) <https://bugzilla.redhat.com/show_bug.cgi?id=770914> || [Bug 772142] glusterd brick-ops hits a deadlock <https://bugzilla.redhat.com/show_bug.cgi?id=772142> || [Bug 766482] No validation for stripe block siz
01:44 marvinc joined #gluster
01:49 glusterbot New news from newglusterbugs: [Bug 959075] dht migration- open not sent on a cached subvol if open done on different fd once cached changes <https://bugzilla.redhat.com/show_bug.cgi?id=959075> || [Bug 978082] auxiliary group permissions fail via kerberized nfs export <https://bugzilla.redhat.com/show_bug.cgi?id=978082> || [Bug 985957] Rebalance memory leak <https://bugzilla.redhat.com/show_bug.cgi?id=985957> || [Bug 9889
01:58 marvinc joined #gluster
02:00 marvinc test
02:15 harish joined #gluster
02:19 kshlm joined #gluster
02:26 SicKTRicKS joined #gluster
02:27 SicKTRicKS got a storage design question for a new glusterfs setup... 2 servers to start with...
02:27 SicKTRicKS both systems will be for a high performance storage subsystem.
02:28 SicKTRicKS question is: is it better to group my disks on each server using, let's say, zfs or LVM raid, or just export each disk separately as its own brick and then build the replication on top of that?
02:32 DV joined #gluster
02:54 jag3773 joined #gluster
03:01 bharata-rao joined #gluster
03:16 smellis It's my understanding that you should do what you can to ensure that your underlying storage is fast.  I've never tried the brick per disk setup, only raid 10 under the brick
03:38 mattappe_ joined #gluster
03:49 RameshN joined #gluster
03:54 shyam joined #gluster
04:03 itisravi joined #gluster
04:03 SicKTRicKS smellis: i'm just trying to maximize my storage since i'll be using only fast SSDs on this build... and wouldn't it be more efficient in case of a disk failure, since you only need to rebuild a brick / single drive instead of resilvering the zfs pool or rebuilding your raid? and I see another big plus... you get your reads from the cluster to rebuild the brick....
04:04 SicKTRicKS smellis: and don't affect let's say 5 drives on the same system when you rebuild... only the one affected
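
(To make the two layouts in this thread concrete, here is a rough sketch of the create commands for two servers with replica 2; hostnames and brick paths are made up. With one brick per SSD you get a distributed-replicated volume where each disk pair heals independently; with RAID/ZFS underneath there is a single brick per server.)

    # one brick per disk (each consecutive pair of bricks forms a replica set)
    gluster volume create fastvol replica 2 \
        srv1:/bricks/ssd1/brick srv2:/bricks/ssd1/brick \
        srv1:/bricks/ssd2/brick srv2:/bricks/ssd2/brick

    # one RAID- or ZFS-backed brick per server
    gluster volume create fastvol replica 2 \
        srv1:/bricks/pool0/brick srv2:/bricks/pool0/brick
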
04:07 marvinc joined #gluster
04:07 sgowda joined #gluster
04:19 daMaestro joined #gluster
04:21 jyundt joined #gluster
04:27 psyl0n joined #gluster
04:29 ababu joined #gluster
04:31 SicKTRicKS left #gluster
04:36 kshlm joined #gluster
04:38 nshaikh joined #gluster
04:45 MiteshShah joined #gluster
04:48 kanagaraj joined #gluster
04:56 shruti joined #gluster
04:57 saurabh joined #gluster
04:58 sgowda joined #gluster
04:58 sgowda joined #gluster
05:15 rjoseph joined #gluster
05:16 ndarshan joined #gluster
05:21 shylesh joined #gluster
05:22 codex joined #gluster
05:25 mohankumar joined #gluster
05:28 GLHMarmot joined #gluster
05:31 mattapp__ joined #gluster
05:33 dusmant joined #gluster
05:34 aravindavk joined #gluster
05:37 ngoswami joined #gluster
05:49 vpshastry joined #gluster
05:52 bulde joined #gluster
05:56 mistich1 joined #gluster
05:57 mistich1 how do you increase the number of glusterfsd processes on the server side? currently 6, but I have a 12-core server
05:59 hagarth joined #gluster
06:03 psharma joined #gluster
06:05 davinder joined #gluster
06:10 mistich1 how do you increase the number of glusterfsd processes on the server side? currently 6, but I have a 12-core server
06:15 psyl0n joined #gluster
06:21 anands joined #gluster
06:27 ngoswami joined #gluster
06:30 sgowda joined #gluster
06:33 satheesh joined #gluster
06:35 ricky-ticky joined #gluster
06:46 ndarshan joined #gluster
06:49 MiteshShah joined #gluster
06:50 glusterbot New news from newglusterbugs: [Bug 1036009] Client connection to Gluster daemon stalls <https://bugzilla.redhat.com/show_bug.cgi?id=1036009>
06:52 rjoseph joined #gluster
07:04 krypto joined #gluster
07:30 smasha82 joined #gluster
07:30 smasha82 anyone seen this from a peer probe - State: Accepted peer request (Connected)
07:34 jtux joined #gluster
07:37 FarbrorLeon joined #gluster
07:43 rjoseph joined #gluster
07:43 kevein joined #gluster
07:46 purpleidea mistich1: there's one per "brick". create more bricks per server.
07:47 purpleidea mistich1: keep in mind there are other things besides cpu you need to look at to choose the right saturation.
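
(A quick way to see this on a server, plus the add-brick route purpleidea is describing; volume and path names are placeholders, and a replica 2 volume takes new bricks in multiples of two.)

    pgrep -c glusterfsd                     # one brick export daemon per local brick
    gluster volume add-brick myvol srv1:/bricks/b7/brick srv2:/bricks/b7/brick
    gluster volume rebalance myvol start    # spread existing data onto the new bricks
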
07:47 purpleidea smasha82: that means you're peered... it's good
07:51 smasha82 thanks @purlpe
07:51 smasha82 from each node should a peer status show all nodes?
07:51 smasha82 currently my second node only shows the primary node
07:52 mistich1 purpleidea thanks
07:52 mistich1 that explains 6
07:52 purpleidea mistich1: np
07:53 purpleidea smasha82: @purlpe won't help me see your response. you need to include my proper nick
07:53 purpleidea smasha82: peer status should show all other nodes that you've probed into your pool.
07:53 smasha82 sorry @purpleidea
07:54 purpleidea smasha82: nothing to be sorry about, it's more so that you know, so i don't inadvertently ignore your replies. no @ needed
07:54 glusterbot New news from resolvedglusterbugs: [Bug 862082] build cleanup <https://bugzilla.redhat.com/show_bug.cgi?id=862082>
08:03 psyl0n joined #gluster
08:07 Dga joined #gluster
08:08 ctria joined #gluster
08:10 VeggieMeat joined #gluster
08:10 RameshN joined #gluster
08:11 franc joined #gluster
08:12 getup- joined #gluster
08:13 mistich1 I'm getting high iowait times on the gluster servers with ssd drives which are capable of more iops than they are doing. is there anything I can change in gluster? here is my gluster info http://paste.fedoraproject.org/60085/65767801/
08:13 glusterbot Title: #60085 Fedora Project Pastebin (at paste.fedoraproject.org)
08:13 ngoswami joined #gluster
08:15 smasha82 my primary node shows both nodes in a peer status but my secondary node only shows the primary node and not itself
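
(Side note for readers: as purpleidea says above, peer status lists the nodes you have probed into the pool; it does not list the node you run it on, so in a two-node pool each server normally reports exactly one peer. Hostnames below are illustrative.)

    # on node2
    gluster peer status
    Number of Peers: 1
    Hostname: node1
    State: Peer in Cluster (Connected)
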
08:16 calum_ joined #gluster
08:18 ultrabizweb joined #gluster
08:19 mistich1 anyone want to take a shot at my question?
08:22 eseyman joined #gluster
08:28 micu1 joined #gluster
08:39 mgebbe_ joined #gluster
09:01 Staples84 joined #gluster
09:02 ngoswami joined #gluster
09:03 vimal joined #gluster
09:11 andreask joined #gluster
09:19 sgowda joined #gluster
09:43 ndarshan joined #gluster
09:49 pkoro joined #gluster
09:51 psyl0n joined #gluster
10:07 geewiz joined #gluster
10:14 mkzero mistich1: have you tried looking at the statistics from gluster? (e.g. volume top or profiling)
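
(The commands mkzero is referring to, for anyone following along; the volume name is a placeholder.)

    gluster volume profile myvol start
    gluster volume profile myvol info     # per-brick fop counts and average latencies
    gluster volume top myvol read-perf    # also: write-perf, open, read, write
    gluster volume profile myvol stop
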
10:15 sahina joined #gluster
10:16 psyl0n joined #gluster
10:18 gdubreui joined #gluster
10:20 brosner joined #gluster
10:24 mistich1 mkzero yes
10:24 DV__ joined #gluster
10:27 davinder joined #gluster
10:31 mkzero mistich1: anything odd? specific ops taking too long, etc.?
10:32 mistich1 removed all settings and starting over adding them back one by one will let you know if I find anything
10:33 harish joined #gluster
10:35 pkliczew_ joined #gluster
10:36 CheRi joined #gluster
10:40 satheesh1 joined #gluster
10:40 tnatasha joined #gluster
10:49 satheesh joined #gluster
10:55 purpleidea mistich1: try using @puppet to get a proper working config
10:55 purpleidea ~puppet | mistich1
10:55 glusterbot mistich1: (#1) https://github.com/purpleidea/puppet-gluster, or (#2) semiosis' unmaintained puppet module: https://github.com/semiosis/puppet-gluster
10:57 ngoswami joined #gluster
11:01 satheesh joined #gluster
11:02 zeittunnel joined #gluster
11:06 shylesh joined #gluster
11:06 edward2 joined #gluster
11:12 sahina pkliczew_, ping
11:14 dusmant joined #gluster
11:16 pkliczew joined #gluster
11:25 timothy_ joined #gluster
11:32 lpabon joined #gluster
11:49 getup- joined #gluster
11:51 abyss^ In this case: http://gluster.org/community/documentation/index.php/Gluster_3.4:_Brick_Restoration_-_Replace_Crashed_Server, can I change hostname in peers? I don't know why, but I have IP instead of hostname in the peer...
11:51 abyss^ I can't use replace-brick
11:51 social abyss^: from another node gluster peer probe with dns
11:52 social abyss^: it is caused by reverse lookup on other peers (imho)
11:54 dusmant joined #gluster
11:59 abyss^ social: I have only two glusters (replica) so I can't check if on other node is OK.
12:00 abyss^ ok, so how to use this tutorial: http://gluster.org/community/documentation/index.php/Gluster_3.4:_Brick_Restoration_-_Replace_Crashed_Server with other IP (in peers I have IP)?;)
12:00 glusterbot Title: Gluster 3.4: Brick Restoration - Replace Crashed Server - GlusterDocumentation (at gluster.org)
12:01 abyss^ can I just modify file in peers directory and restart one gluster?
12:09 hybrid5121 joined #gluster
12:11 itisravi_ joined #gluster
12:12 clag_ joined #gluster
12:17 CheRi joined #gluster
12:17 anands joined #gluster
12:19 satheesh1 joined #gluster
12:21 glusterbot New news from newglusterbugs: [Bug 1039544] [FEAT] "gluster volume heal info" should list the entries that actually required to be healed. <https://bugzilla.redhat.com/show_bug.cgi?id=1039544>
12:22 ira joined #gluster
12:22 dusmant joined #gluster
12:23 hagarth joined #gluster
12:28 purpleidea abyss^: yes but it's really not recommended
12:28 purpleidea abyss^: or rather, do it at your own risk
12:28 purpleidea if you really know what you're doing
12:29 psyl0n joined #gluster
12:31 psyl0n joined #gluster
12:33 abyss^ purpleidea: OK. Thnx. I just will do server with the same IP:)
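
(For the record, the gentler route social suggested earlier is usually enough: from the node that still stores the peer as an IP, probe the other node again by its DNS name and glusterd updates the entry. The hand-edit purpleidea warns against touches the files under /var/lib/glusterd/peers/, one file per peer keyed by uuid; the field names below are from memory of the 3.4 on-disk format, so treat them as an illustration only.)

    # from node B, refresh node A's address to a hostname
    gluster peer probe nodeA.example.com

    # the state purpleidea says to edit only at your own risk
    cat /var/lib/glusterd/peers/<uuid>
        uuid=...
        state=3
        hostname1=nodeA.example.com
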
12:34 semiosis joined #gluster
12:37 vpshastry left #gluster
12:40 andreask joined #gluster
12:42 sgowda joined #gluster
12:46 clag_ left #gluster
12:46 vpshastry joined #gluster
12:47 vpshastry left #gluster
12:50 dneary joined #gluster
12:53 badone joined #gluster
12:56 shyam joined #gluster
13:15 ctria joined #gluster
13:22 gergnz joined #gluster
13:22 dusmant joined #gluster
13:22 jtux1 joined #gluster
13:22 hagarth1 joined #gluster
13:23 anands joined #gluster
13:29 zeittunnel joined #gluster
13:30 verbins joined #gluster
13:39 ngoswami joined #gluster
13:44 vpshastry joined #gluster
13:55 davidbierce joined #gluster
13:57 hybrid512 joined #gluster
13:57 hybrid512 and that's everything, no more optimisation has been done, but with these settings I have iowaits on the client side, and on the server side the gluster processes are eating a lot of cpu and performance is bad
13:58 hybrid512 I'd like to serve web files, so mostly frequent read-only access and few writes
13:58 hybrid512 what would be your recommended settings for that scenario?
13:58 hybrid512 (NFS mount is not an option)
13:58 vpshastry left #gluster
14:00 hybrid512 I read the documentation about the performance translators; it seems there are a few settings that could be put to good use, notably the performance/io-cache translator which looks right for my use, but the documentation talks about a "client side" and a "server side".
14:01 hybrid512 On the server side I see where to set this up, but on the client side I have no glusterfs configuration file, only the mount options in the fstab file, so how can I set those values for the client??
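
(In case it helps later readers: the client-side translators are tuned through the same volume options as everything else. gluster volume set is run on a server and the FUSE clients pick up the regenerated volfile on their own, which is why there is no per-client config file to edit. The option names below are standard; the values are only illustrative, not a recommendation.)

    gluster volume set webvol performance.cache-size 256MB              # io-cache
    gluster volume set webvol performance.write-behind-window-size 1MB
    gluster volume set webvol performance.io-thread-count 16            # server-side io-threads
    gluster volume info webvol     # changed options show up under "Options Reconfigured"
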
14:02 vikumar joined #gluster
14:03 sac`away` joined #gluster
14:04 sac`awa`` joined #gluster
14:04 vikumar__ joined #gluster
14:06 ctria joined #gluster
14:11 shyam joined #gluster
14:15 davidbierce joined #gluster
14:18 dusmant joined #gluster
14:21 anands joined #gluster
14:26 mohankumar joined #gluster
14:27 bennyturns joined #gluster
14:34 dbruhn joined #gluster
14:35 msvbhat bennyturns: ping
14:40 bsaggy joined #gluster
14:43 ocholetras joined #gluster
14:43 ocholetras Hi people!
14:44 ocholetras Do you know if gluster can be used to serve user directories? i have seen that the quota feature is not so functional for that
14:44 ocholetras -_-
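
(For what it's worth, directory quota in 3.4 is driven entirely from the CLI; whether it is functional enough for home directories is the open question here, but the mechanics look roughly like this -- volume name and path are placeholders.)

    gluster volume quota homevol enable
    gluster volume quota homevol limit-usage /alice 10GB
    gluster volume quota homevol list
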
14:53 davinder joined #gluster
14:54 h4idz joined #gluster
14:58 morsik joined #gluster
14:58 morsik hi
14:58 glusterbot morsik: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
14:58 morsik oh god.
14:59 morsik well
14:59 morsik ...
14:59 morsik is it possible to stop a volume on only one brick?
14:59 geewiz joined #gluster
14:59 morsik somehow i have a problem with one vol on one specific brick, and i don't want to reboot that machine, because some things are mounted via nfs (don't ask... ;f)
15:00 morsik afaik 'gluster volume stop <vol>' stops the volume on all bricks, am i right?
15:00 ocholetras yea
15:00 ocholetras do you have replica bricks?
15:00 morsik yep
15:00 ocholetras ok
15:01 ocholetras how many bricks do you have on your volume?
15:01 morsik so it's no problem for me to stop one of the parts
15:01 gmcwhistler joined #gluster
15:01 morsik 6 (2 split x 3 replicas in every split)
15:02 ocholetras i have tried to kill the gluster process (on a testing infrastructure) that serves 1 of the 2 bricks of a replica and nothing happens. Just restart gluster and it detects that it needs to heal..
15:02 ocholetras so in some way you can stop a replica part but not a split part..
15:03 ocholetras that is the point
15:03 dusmant joined #gluster
15:03 morsik hm. but does /etc/init.d/glusterd restart won't kill other glusterd volumes?
15:04 morsik but doesn't ↑ kill*
15:04 keytab joined #gluster
15:04 ocholetras ummmm
15:06 ocholetras http://gluster.org/community/documentation/index.php/Gluster_3.4:_Brick_Restoration_-_Replace_Crashed_Server
15:06 glusterbot Title: Gluster 3.4: Brick Restoration - Replace Crashed Server - GlusterDocumentation (at gluster.org)
15:06 ocholetras have a look
15:06 morsik ohh
15:06 morsik i'm not sure if this is what i'm looking for.
15:07 ccha3 joined #gluster
15:07 wgao_ joined #gluster
15:07 compbio joined #gluster
15:09 wgao_ joined #gluster
15:09 basic` joined #gluster
15:11 hybrid5121 joined #gluster
15:23 kaptk2 joined #gluster
15:23 mattappe_ joined #gluster
15:24 wushudoin joined #gluster
15:24 hybrid5121 joined #gluster
15:28 _BryanHm_ joined #gluster
15:32 dneary joined #gluster
15:41 Technicool joined #gluster
15:42 ndk joined #gluster
15:43 mattapp__ joined #gluster
15:43 saurabh joined #gluster
15:48 ngoswami joined #gluster
15:55 mattapp__ joined #gluster
15:57 mattappe_ joined #gluster
16:00 matta____ joined #gluster
16:03 sarkis joined #gluster
16:04 mattapp__ joined #gluster
16:06 daMaestro joined #gluster
16:07 bala joined #gluster
16:07 shyam joined #gluster
16:09 daMaestro|isBack joined #gluster
16:12 daMaestro|isBack joined #gluster
16:15 hagarth joined #gluster
16:16 vpshastry joined #gluster
16:16 vpshastry left #gluster
16:23 MrNaviPacho joined #gluster
16:24 bulde joined #gluster
16:24 REdOG joined #gluster
16:27 TvL2386 joined #gluster
16:31 lpabon joined #gluster
16:32 jag3773 joined #gluster
16:33 zerick joined #gluster
16:36 daMaestro joined #gluster
16:40 andreask joined #gluster
16:40 JoeJulian morsik: stopping glusterd does not stop the brick ,,(processes). If you want to just stop one brick, just kill the glusterfsd process for that brick. You can start that brick again with "gluster volume start $vol force"
16:41 glusterbot morsik: The GlusterFS core uses three process names: glusterd (management daemon, one per server); glusterfsd (brick export daemon, one per brick); glusterfs (FUSE client, one per client mount point; also NFS daemon, one per server). There are also two auxiliary processes: gsyncd (for geo-replication) and glustershd (for automatic self-heal). See http://goo.gl/F6jqx for more
16:41 glusterbot information.
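
(Spelled out, JoeJulian's suggestion looks something like the following; the volume name is a placeholder and the PID comes from the status output.)

    gluster volume status myvol        # shows port, online state and PID of each brick
    kill <pid-of-that-brick>           # stops only that brick's glusterfsd
    gluster volume start myvol force   # later, restart just the missing brick
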
16:41 natgeorg joined #gluster
16:43 bulde joined #gluster
16:44 samu60 joined #gluster
16:44 samu60 hi all
16:44 samu60 we've tried to upgrade a 3.3.0 gluster version to 3.4.X
16:45 samu60 and found out some issues
16:45 samu60 we've got a replicated x distributed x striped
16:45 samu60 volume
16:45 samu60 and when we've upgraded the gluster native client to 3.4.1, 3.4.0, 3.4.2qa2
16:45 samu60 the volume is not usable
16:46 samu60 an error about being unable to retrieve the stripe size appears in the logs
16:46 samu60 i've read that 3.3 and 3.4 are compatible versions, but we've been unable to upgrade the gluster native client against a running, working 3.3.0 volume
16:47 samu60 can anyone provide a link or information about how compatible the 3.3 and 3.4 versions are?
16:49 jbd1 joined #gluster
16:49 nage joined #gluster
16:50 jag3773 joined #gluster
16:52 glusterbot New news from newglusterbugs: [Bug 1039643] mount option "backupvolfile-server" causes mount to fail with "Invalid argument" <https://bugzilla.redhat.com/show_bug.cgi?id=1039643>
16:59 hybrid5121 joined #gluster
17:12 FarbrorLeon joined #gluster
17:19 ccha3 joined #gluster
17:24 Mo__ joined #gluster
17:31 johnbot11 joined #gluster
17:36 B21956 joined #gluster
17:39 LoudNoises joined #gluster
17:46 semiosis @seen gr6
17:46 glusterbot semiosis: gr6 was last seen in #gluster 2 days, 5 hours, 52 minutes, and 19 seconds ago: <Gr6> I am considering releasing 3.4.2 in the second half of November. Please feel free to propose patches/bugs for inclusion in 3.4.2 here:
17:47 samppah :/
17:47 semiosis samppah: ping
17:47 semiosis <samppah> semiosis: is it possible to include this in current ubuntu packages aswell?
17:47 samppah pong
17:47 semiosis include what?
17:47 Gilbs1 joined #gluster
17:48 B21956 joined #gluster
17:48 samppah semiosis: my bad.. the patch that makes it possible to set the first usable port for gluster
17:49 semiosis my policy is to only make packages of the official release tarballs, except in rare cases where the tarball doesnt compile, then i will add the most minimal patch
17:49 samppah okay
17:49 semiosis but also have a standing offer to help anyone build their own packages with whatever patches they need
17:51 semiosis anyone know the status of 3.4.2?
17:51 Gilbs1 Anyone have issues stopping a geo-replication session? Everything I try ends with: "geo-replication command failed". When starting the session it gave me the error but it started anyway. Now I can't stop it unless I kill the PID.
17:52 johnbot11 joined #gluster
17:53 johnbot11 joined #gluster
18:00 bdperkin joined #gluster
18:12 timothy joined #gluster
18:12 semiosis Gilbs1: check log files for additional details
18:12 semiosis feel free to pastie.org the logs
18:15 Gilbs1 I took a look at the geo-rep log file but didn't see anything when it gives the error.  Is there a debug mode I could put it in?
18:15 semiosis check other logs
18:15 semiosis cli log & glusterd log
18:16 Gilbs1 gotcha
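
(Typical default log locations for the logs semiosis mentions, assuming a stock package layout; paths can differ per distro.)

    /var/log/glusterfs/cli.log
    /var/log/glusterfs/etc-glusterfs-glusterd.vol.log       # glusterd
    /var/log/glusterfs/geo-replication/<volume>/            # gsyncd / geo-rep session logs
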
18:17 badone joined #gluster
18:20 FarbrorLeon joined #gluster
18:21 FarbrorLeon joined #gluster
18:42 FarbrorLeon joined #gluster
18:48 sticky_afk joined #gluster
18:48 stickyboy joined #gluster
18:49 psyl0n joined #gluster
18:50 kaptk2 joined #gluster
18:56 sticky_afk joined #gluster
18:57 stickyboy joined #gluster
18:57 bsaggy_ joined #gluster
19:01 sroy_ joined #gluster
19:24 zaitcev joined #gluster
19:25 sticky_afk joined #gluster
19:25 stickyboy joined #gluster
19:33 bsaggy_ joined #gluster
19:43 Alpinist joined #gluster
19:43 cfeller joined #gluster
19:44 MacWinner joined #gluster
20:01 Gilbs1 Call me crazy, I don't see anything in the logs for: "geo-replication command failed". Funny part was when I started it, it gave me the error but still started the replication. Now it's stuck on. :/
20:02 FarbrorLeon joined #gluster
20:17 ctria joined #gluster
20:44 bgpepi joined #gluster
20:48 sticky_afk joined #gluster
20:48 stickyboy joined #gluster
20:50 harish joined #gluster
20:54 FarbrorLeon joined #gluster
21:05 badone joined #gluster
21:12 bgpepi joined #gluster
21:13 edong23 joined #gluster
21:14 morsik JoeJulian: mhm, thanks for the info. anyway it was a problem with replication and self-heal trying to copy the same files all the time.
21:14 morsik and killing it didn't help
21:14 morsik we've found a possible solution (needs testing): stopping the volume, copying the files and cleaning the xattrs
21:42 gdubreui joined #gluster
21:47 natgeorg joined #gluster
21:47 natgeorg joined #gluster
21:53 nage__ joined #gluster
21:53 nage joined #gluster
22:05 FarbrorLeon joined #gluster
22:05 psyl0n joined #gluster
22:05 JoeJulian @brick order
22:05 glusterbot JoeJulian: Replicas are defined in the order bricks are listed in the volume create command. So gluster volume create myvol replica 2 server1:/data/brick1 server2:/data/brick1 server3:/data/brick1 server4:/data/brick1 will replicate between server1 and server2 and replicate between server3 and server4.
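
(The corollary of the factoid above: listing all of one server's bricks first puts both copies of a replica pair on the same machine, so alternate servers instead. Names below are illustrative, and newer versions typically warn about the first form.)

    # replica pairs end up server1+server1 and server2+server2 -- avoid
    gluster volume create badvol replica 2 server1:/b1 server1:/b2 server2:/b1 server2:/b2
    # replica pairs span both servers
    gluster volume create okvol  replica 2 server1:/b1 server2:/b1 server1:/b2 server2:/b2
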
22:09 japuzzo joined #gluster
22:14 psyl0n joined #gluster
22:17 rotbeard joined #gluster
22:22 Gilbs1 left #gluster
22:44 johnbot11 left #gluster
22:47 gtobon joined #gluster
22:50 gdubreui joined #gluster
22:50 mattappe_ joined #gluster
22:53 gdubreui joined #gluster
22:55 mattappe_ joined #gluster
22:56 mattapp__ joined #gluster
22:57 psyl0n joined #gluster
22:59 natgeorg joined #gluster
22:59 natgeorg joined #gluster
23:12 mattapp__ joined #gluster
23:35 JonnyNomad joined #gluster
23:38 mattapp__ joined #gluster
