
IRC log for #gluster, 2013-07-22


All times shown according to UTC.

Time Nick Message
01:06 harish joined #gluster
01:08 _pol joined #gluster
01:17 Tangram joined #gluster
01:22 sprachgenerator joined #gluster
01:24 JoeJulian social__: I just use logrotate and "copytruncate". The built-in log-rotation just seems like overkill.
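
A minimal logrotate sketch of that copytruncate approach, assuming the default /var/log/glusterfs layout; the file name, paths and schedule are illustrative, not JoeJulian's actual config:

    # /etc/logrotate.d/glusterfs -- hypothetical example
    /var/log/glusterfs/*.log /var/log/glusterfs/bricks/*.log {
        daily
        rotate 7
        compress
        missingok
        notifempty
        copytruncate    # truncate in place so gluster keeps writing to the same fd; no HUP needed
    }
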
01:26 Shdwdrgn Can someone tell me where the log file location is defined in 3.3.2?  I have a new server build, and no logs are being created under /var/log/glusterfs/
01:27 bala joined #gluster
01:39 kevein joined #gluster
02:22 JoeJulian Shdwdrgn: That should be the default location.
02:26 Shdwdrgn that's what I thought.  I even went so far as to restart the server.  Glusterd is running as root, so perms should be no problem.
02:27 Shdwdrgn FYI I was getting kernel panics when copying large amounts of data under a 2.6.39 kernel.  I recompiled under 3.0.0 and it seems to be solid again.  (just in case anyone else has issues)
02:49 MugginsM joined #gluster
02:56 sgowda joined #gluster
02:59 hagarth joined #gluster
03:00 bharata joined #gluster
03:05 recidive joined #gluster
03:05 raghug joined #gluster
03:06 Tangram joined #gluster
03:18 MugginsM Hi, I'm looking at log rotate stuff, is it safe to send -HUP to the server and client while they're in heavy use?
03:19 MugginsM it won't drop data or cause client errors?
03:33 kshlm joined #gluster
03:35 kkeithley joined #gluster
03:35 zombiejebus joined #gluster
03:37 bharata-rao joined #gluster
04:01 krink joined #gluster
04:06 partner_ joined #gluster
04:08 recidive_ joined #gluster
04:10 lyang01 joined #gluster
04:10 Tangram joined #gluster
04:12 raghug_ joined #gluster
04:13 jag3773 joined #gluster
04:16 badone__ joined #gluster
04:16 jebba1 joined #gluster
04:19 saurabh joined #gluster
04:22 krink joined #gluster
04:26 harish joined #gluster
04:27 MugginsM joined #gluster
04:28 JoeJulian MugginsM: correct
04:28 MugginsM thanks
04:28 JoeJulian MugginsM: But I've been on a roll, lately, repeating that many of use just use copytruncate with logrotate.
04:29 JoeJulian s/many of use/many of us/
04:29 glusterbot JoeJulian: Error: I couldn't find a message matching that criteria in my history of 1000 messages.
04:29 JoeJulian glusterbot: bite me
04:29 JoeJulian Shdwdrgn: selinux maybe?
04:30 Shdwdrgn nope, not using it.  Got a simple install of debian wheezy.
04:30 JoeJulian /var/log mounted read-only?
04:30 psharma joined #gluster
04:30 Shdwdrgn no, other log files are updating normally
04:31 Shdwdrgn also confirmed rsyslog config files are the same between both servers
04:32 JoeJulian lsof?
04:33 Shdwdrgn would think if anything were holding the folder, it would have been cleared after the reboot
04:33 Shdwdrgn are there any checks while compiling to see if the log folder exists?
04:33 JoeJulian shouldn't be
04:34 JoeJulian I mean lsof and see what glusterd has open. Maybe you'll find your logs hiding...
04:34 JoeJulian Once you know where they are, working out why should be relatively easy.
04:35 JoeJulian If not that, then
04:35 JoeJulian strace -f -e trace=open glusterd -N
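
Spelled out, the two checks JoeJulian suggests look roughly like this; the grep filters are just illustrative:

    lsof -p $(pidof glusterd) | grep -i log                 # which log files does the running glusterd have open?
    # stop the running glusterd first, then watch which paths a foreground glusterd tries to open:
    strace -f -e trace=open glusterd -N 2>&1 | grep log
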
04:35 JoeJulian Heading to bed. Need to start driving by 4am tomorrow.
04:35 Shdwdrgn no problem, thanks
04:36 CheRi joined #gluster
04:39 dgeevarg joined #gluster
04:41 vpshastry joined #gluster
04:45 fleducquede joined #gluster
04:46 eryc joined #gluster
05:00 glusterbot New news from newglusterbugs: [Bug 986775] file snapshotting support <http://goo.gl/ozgmO>
05:20 MugginsM left #gluster
05:27 vpshastry joined #gluster
05:37 rjoseph joined #gluster
05:42 puebele joined #gluster
05:42 tg2 joined #gluster
05:42 lalatenduM joined #gluster
05:43 lalatenduM joined #gluster
05:49 shireesh joined #gluster
05:49 rgustafs joined #gluster
06:01 glusterbot New news from newglusterbugs: [Bug 985406] Cannot change file permissions from windows client <http://goo.gl/kRe7w>
06:01 shylesh joined #gluster
06:11 raghu joined #gluster
06:14 bulde joined #gluster
06:15 shireesh joined #gluster
06:15 bala joined #gluster
06:28 Recruiter joined #gluster
06:40 ngoswami joined #gluster
06:46 bala joined #gluster
06:48 ekuric joined #gluster
06:49 ricky-ticky joined #gluster
07:01 ctria joined #gluster
07:10 icemax joined #gluster
07:10 icemax Hi guys
07:11 icemax Is "volume remove" delete datas ?
07:12 icemax sry, not "volume remove" but "volume delete" ?
07:15 samppah no it shouldn't delete data from bricks
07:17 bala joined #gluster
07:18 icemax just erasing information about the volume
07:19 icemax but doesn't touch my data? :)
07:25 icemax I did it, I can confirm: "volume delete VOLNAME" does not delete any data
07:27 tjikkun_work joined #gluster
07:32 ClessAlvn joined #gluster
07:33 ujjain joined #gluster
07:45 guigui3 joined #gluster
07:49 msciciel joined #gluster
07:49 msciciel1 joined #gluster
07:50 morsik joined #gluster
07:50 morsik hello there
07:50 msciciel1 jol morsik
07:50 piotrektt joined #gluster
07:50 piotrektt joined #gluster
07:55 msciciel1 is there any way to get geo-replication in 3.3 working correctly? After a connection problem with the slave node, geo-replication stops the full resync and I'm not sure if it's working at all ...
07:57 ccha msciciel1: stop geo-replication, then turn off indexing, and start geo-replication again
07:57 _pol joined #gluster
07:57 ccha it will resync it
07:57 ccha It did this for me too
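
A sketch of that stop / indexing-off / start cycle with the 3.3-style CLI; MASTERVOL is a placeholder and SLAVE stands for whatever slave specification (host:/path or host::volume) the session already uses:

    gluster volume geo-replication MASTERVOL SLAVE stop
    gluster volume set MASTERVOL geo-replication.indexing off
    gluster volume geo-replication MASTERVOL SLAVE start
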
07:59 pkoro joined #gluster
08:11 bala1 joined #gluster
08:12 lkoranda joined #gluster
08:17 mooperd joined #gluster
08:18 romero joined #gluster
08:20 msciciel1 ccha: i know this solution, but it's tedious to check geo-replication manually every day; the geo-replication status says ok but that doesn't mean it actually is, so i have to check whether the number of files is increasing, and if not, do the stop/start procedure with disabling/enabling indexing :)
08:31 satheesh joined #gluster
08:35 X3NQ joined #gluster
09:02 puebele1 joined #gluster
09:14 pkoro joined #gluster
09:36 lyang0 joined #gluster
09:56 duerF joined #gluster
10:03 glusterbot New news from newglusterbugs: [Bug 986866] imbalance in data requested and actually read from GlusterFS servers <http://goo.gl/ctqvm>
10:09 rwheeler joined #gluster
10:26 dos joined #gluster
10:26 dos hi
10:26 glusterbot dos: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
10:27 dos I have just quick question about glusterfs.
10:27 dos I don't have portmap or rpcbind on my server.
10:28 dos But according to the instructions I should open port 111 on my servers.
10:28 dos Is this true?
10:28 dos Or should I install rpcbind first?
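
For what it's worth, a hedged sketch of the ports usually opened for a 3.3-era setup; port 111 (rpcbind/portmap) is generally only needed when clients mount over NFS rather than the native client, the brick range depends on how many bricks the server hosts, and 3.4 moved brick ports to 49152+:

    iptables -A INPUT -p tcp --dport 24007:24008 -j ACCEPT   # glusterd management
    iptables -A INPUT -p tcp --dport 24009:24020 -j ACCEPT   # brick ports, one per brick (3.3 numbering)
    iptables -A INPUT -p tcp --dport 111 -j ACCEPT           # portmapper -- only for NFS clients
    iptables -A INPUT -p udp --dport 111 -j ACCEPT
    iptables -A INPUT -p tcp --dport 38465:38467 -j ACCEPT   # gluster NFS server -- only for NFS clients
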
10:31 spider_fingers joined #gluster
10:32 rjoseph joined #gluster
11:22 CheRi joined #gluster
11:31 jcsp_ joined #gluster
11:38 bfoster joined #gluster
11:52 bala joined #gluster
12:04 guigui3 joined #gluster
12:08 baoboa joined #gluster
12:08 mooperd joined #gluster
12:18 CheRi joined #gluster
12:20 edward1 joined #gluster
12:21 edward1 joined #gluster
12:21 mooperd_ joined #gluster
12:21 aliguori joined #gluster
12:21 recidive joined #gluster
12:22 ctria joined #gluster
12:23 Debolaz joined #gluster
12:24 harish joined #gluster
12:24 ccha no need to install
12:25 neofob joined #gluster
12:33 ekuric joined #gluster
12:34 ekuric joined #gluster
12:35 robert7811 joined #gluster
12:36 ekuric joined #gluster
12:38 rwheeler joined #gluster
12:38 ekuric joined #gluster
12:41 ccha is it possible to compile 3.4.0 on lucid ?
12:46 shylesh joined #gluster
12:50 rjoseph joined #gluster
12:57 Tangram_ joined #gluster
13:00 Tangram joined #gluster
13:03 wibbl3 joined #gluster
13:10 robert7811 Would anyone know of a possible cause to "gluster: symbol lookup error: gluster: undefined symbol: gf_log_loglevel" on an upgrade from 3.2 on Ubuntu 12.04?
13:11 bennyturns joined #gluster
13:15 robert7811 Gluster 3.4 is up and running but I receive the undefined-symbol error when I execute "gluster peer status"
13:16 ctria joined #gluster
13:27 thommy_ka joined #gluster
13:27 Peanut I've got gluster-3.4 up and running thanks to the PPA by semiosis (thanks!). I've got two machines, with SSDs as gluster bricks, that will host KVMs. Where can I find information on how to improve the IO performance for the VMs on gluster?
13:33 recidive joined #gluster
13:35 semiosis fyi the 3.4.0 packages in the PPA aren't installing the mounting-glusterfs upstart job :(
13:35 semiosis fstab mounts may not work
13:37 Peanut Thanks for the warning, semiosis. Is that still the failure to lookup localhost, or a separate issue?
13:37 Peanut Oh wait, I should read before typing - clearly a separate issue.
13:37 semiosis yep
13:37 Peanut But the localhost issue is still there, too?
13:38 semiosis i modified the upstart job to block mounting glusterfs until the network interfaces are up which seems to fix that
13:38 semiosis however it only works for one glusterfs line in fstab, if you have more than one extra steps are required (duplicating the blocker for each mount)
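
What semiosis describes can be pictured with an upstart task hooked on the mounting event; this is a hypothetical reconstruction for illustration, not the actual packaged job, and as he notes a non-instanced job like this only covers one glusterfs fstab entry:

    # /etc/init/mounting-glusterfs.conf -- hypothetical sketch
    description "block glusterfs mounts until the network is up"
    start on mounting TYPE=glusterfs
    task
    script
        # crude wait: poll until a non-loopback interface has an IPv4 address, up to 30s
        for i in $(seq 1 30); do
            ip -4 addr show scope global | grep -q inet && exit 0
            sleep 1
        done
    end script
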
13:39 Peanut Except that the upstart stuff doesn't get installed at all?
13:40 semiosis ... in 3.4.0 thats right
13:40 semiosis the 3.3.2 package in the 3.3 ppa uses the new blocker and installs it fine
13:41 semiosis i forked the packaging from debian unstable to make the 3.4 package and i suspect the upgrade from debhelper 7 to 9 is causing the problem
13:41 guigui1 joined #gluster
13:42 semiosis debian packaging is so esoteric i have no idea how any of that stuff works
13:42 semiosis magical incantations
13:42 Peanut Thanks for the warning. At the moment, this is all just test/play hardware, so I do appreciate your warnings. Anything I can do to help?
13:43 Peanut In glusterfs, if I have changes in a replicated filesystem but one of the nodes is down for a reboot - does it resync automatically?
13:44 semiosis since 3.3.0 yes
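
For watching or nudging that automatic resync, the heal CLI added in 3.3.0 is the usual tool; VOLNAME is a placeholder:

    gluster volume heal VOLNAME info          # entries still pending heal
    gluster volume heal VOLNAME info healed   # recently healed entries
    gluster volume heal VOLNAME full          # force a full sweep if something was missed
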
13:45 ekuric1 joined #gluster
13:45 Peanut Then I'm really glad I decided to skip 3.2.7 (as shipped with Ubuntu-13.04).
13:46 semiosis yeah thats pretty old
13:46 ekuric1 joined #gluster
13:54 vpshastry1 joined #gluster
13:55 Peanut "mountall: Skipping mounting /gluster since Plymouth is not available" - but somehow, mysteriously, /gluster does get mounted anyway, but too late to start the guests on boot.
13:56 rjoseph joined #gluster
14:01 vpshastry1 left #gluster
14:04 glusterbot New news from resolvedglusterbugs: [Bug 812488] Compile error on nlm4 section <http://goo.gl/1L0xO> || [Bug 810561] [FEAT] NFS: Locking on MacOS X client fails <http://goo.gl/ZzDIoT>
14:06 dbruhn Question, with the new QEMU support, what's the best practice for volume type: DHT/Replicated/Striped? I'm assuming the same things stand as before but wanted to make sure. DHT is fast, replication makes reads faster but writes slower, striping is only good for giant files, which might actually be a use case here. So, yeah, thoughts?
14:07 semiosis Peanut: plymouth is the boot splash screen, i dont know what that message is all about.  however upstart does do more than one pass over fstab, so probably only skipped glusterfs mount on one of the passes, not the other(s)
14:19 bala joined #gluster
14:22 chirino joined #gluster
14:26 jmeeuwen joined #gluster
14:30 plarsen joined #gluster
14:30 jmeeuwen hello there - i've configured a 2x2 for testing storing an imap spool on glusterfs (i.e. many small files) - when i shut down the two replicas (on purpose) to check out the automatic self-healing on recovery, it seems that the replicas do not actually start healing themselves automatically - is there anything else i need to do / take into account, like configure something?
14:35 kedmison joined #gluster
14:36 vpshastry joined #gluster
14:38 kedmison I'm trying to upgrade from gluster3.3.2qa4 to gluster3.3.2 (release) but yum update won't move to them and yum install thinks they're already installed.  What is the recommended approach for migrating from 3.3.2qa4 to 3.3.2? (or 3.4 for that matter?)
14:38 ccha what are the steps to change replica 2 to replica 3 ?
14:40 vpshastry left #gluster
14:43 jmeeuwen kedmison, it's all about the RPM version and release numbers in this case...
14:43 jmeeuwen "3.3.2qa4", iirc, is considered a bigger version number than "3.3.2"
14:44 jmeeuwen it should probably have been named "3.3.2-0.1.qa4" or something similar, so that the final release could have been "3.3.2-1"
14:44 jmeeuwen kedmison, would "yum downgrade" accept the other 3.3.2 version?
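
A quick way to confirm that version comparison and then do the move, assuming rpmdevtools is installed; the package names are illustrative:

    rpmdev-vercmp 3.3.2qa4-1 3.3.2-2              # rpm considers 3.3.2qa4 the newer EVR
    yum downgrade glusterfs glusterfs-fuse glusterfs-server
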
14:45 ccha I should peer probe the new replica server, then volume add-brick the new brick into the current replica
14:45 ccha then the volume will be replica 3? and I should rebalance
14:45 jmeeuwen ccha, suppose you have node A and B in replica, i suppose you add-brick replica 3 host:/path/to/brick?
14:46 jmeeuwen i only did an add-brick for distribute so far, to make a 1x2 a 2x2... :/
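
A minimal sketch of that replica-2-to-replica-3 change; serverC, VOLNAME and the brick path are placeholders, and on 3.3 a full heal afterwards populates the new brick:

    gluster peer probe serverC
    gluster volume add-brick VOLNAME replica 3 serverC:/path/to/brick
    gluster volume heal VOLNAME full
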
14:47 kedmison Interesting!  looks like that might work.  In the yum transaction, it says that 3.3.2qa4 will be erased and 3.3.2-2 will be a downgrade.  The erased part has me a bit concerned.  should it?
14:47 jmeeuwen no
14:48 ccha I read this in the guide: gluster volume add-brick VOLNAME NEW-BRICK, but in the command reference I read: volume add-brick VOLNAME
14:48 ccha [replica N] [stripe N] NEWBRICK1
14:48 ccha so ok
14:48 ccha I will try that then
14:49 kedmison ok; I'll give that a try.  thank you!
14:49 jmeeuwen you are both welcome ;-)
14:49 chirino joined #gluster
14:54 18WAD6JHC joined #gluster
15:02 TuxedoMan joined #gluster
15:10 aliguori joined #gluster
15:19 spider_fingers left #gluster
15:22 bala joined #gluster
15:22 _pol joined #gluster
15:26 _pol joined #gluster
15:30 jclift_ joined #gluster
15:32 chirino joined #gluster
15:33 kaptk2 joined #gluster
15:33 satheesh joined #gluster
15:34 jbrooks joined #gluster
15:50 lalatenduM joined #gluster
15:51 pkoro joined #gluster
15:56 krink joined #gluster
15:59 vpshastry joined #gluster
15:59 vpshastry left #gluster
15:59 lpabon joined #gluster
16:02 krink joined #gluster
16:03 krink anyone using zfs (linux native) for a brick filesystem?
16:11 TuxedoMan not I
16:15 jmeeuwen me neither
16:15 ProT-0-TypE joined #gluster
16:29 Technicool joined #gluster
16:30 rwheeler joined #gluster
16:37 bulde joined #gluster
16:42 _pol joined #gluster
16:46 guigui3 joined #gluster
16:51 lpabon_ joined #gluster
17:04 satheesh joined #gluster
17:06 zaitcev joined #gluster
17:08 saurabh joined #gluster
17:13 Technicool joined #gluster
17:14 themadcanudist joined #gluster
17:14 themadcanudist left #gluster
17:20 rcheleguini joined #gluster
17:27 ProT-0-TypE joined #gluster
17:29 aliguori joined #gluster
17:31 zombiejebus joined #gluster
17:34 recidive joined #gluster
17:35 _pol joined #gluster
17:38 johnlocke joined #gluster
17:42 satheesh joined #gluster
17:43 aliguori_ joined #gluster
17:44 johnlocke Hello guys/gals! I have a glusterfs deployment going on with a Russian guru, and a few things are not working well. We have 3 servers, each with 3 Areca RAID cards with 10+2 2TB disks. One of the servers had a bad backplane; we replaced it, and when it came back, trying to start the volume says it is already started, yet checking the status says it is not in started
17:44 johnlocke state. So I try to start the volume and it says it is already running; I try stopping and it says it is not in started state... I cannot mount, and the logs report a bad rdma connection. I'm using tcp, and gluster volume info also shows transport: tcp.... Any ideas?
17:50 guigui3 joined #gluster
17:53 johnlocke we are using stripe, so the space is about 160TB... across all 3 nodes we have only 3 samba servers, but I think we need more samba because, at least from the information we have, samba can only handle 30 connections and no more. We have 1000 VMs that write and read data, plus 60 people who do editing, so we need good throughput. But first, about the problem of the volume reporting started
17:53 johnlocke and stopped at the same time, has anyone had this experience?
17:57 bdperkin joined #gluster
17:58 skyw joined #gluster
17:59 bulde joined #gluster
17:59 johnlocke Anyone seen that problem? A volume that is stopped but says it is started?
18:02 Technicool joined #gluster
18:03 _pol_ joined #gluster
18:08 ProT-0-TypE joined #gluster
18:12 robert7811 joined #gluster
18:15 johnlocke guys, is there a place where I can chat or be pointed in the right direction about glusterfs? Troubleshooting tips? I already read the top 20 questions and did some troubleshooting; we have production data we want to keep, so any help would be appreciated
18:15 robert7811 Would anyone have any suggestion on this problem? I updated our main node to 3.3.2 on Ubuntu 12.04. I shut down the daemon and did the full upgrade. I then bounced the box. When the box comes back up, the server spikes to 99.8% CPU utilization and brings all access to the gluster to a halt since it is the main node. Would anyone know why this may be happening to the main node but when the same upgrade was done on the other nodes none of them had this
18:27 semiosis what do you mean by "main node"?  there is no such thing in gluster terms
18:28 semiosis ,,(glossary)
18:28 glusterbot A "server" hosts "bricks" (ie. server1:/foo) which belong to a "volume"  which is accessed from a "client"  . The "master" geosynchronizes a "volume" to a "slave" (ie. remote1:/data/foo).
18:29 Recruiter joined #gluster
18:32 robert7811 Sorry. I have a 2 server configuration with replication between them with 2 bricks. Server 1 (main node) is what we consider the primary connection server (defined mount point for web servers). Server 1 is the instance I upgraded and has the problem. It is kind of like a self healing going on but it was still running after 2.5 hours last night and there is only 500 GB of data on both bricks. We have had a self heal happen before but with a duration o
18:33 semiosis your last message was cut off at "but with a duration of"
18:33 semiosis try breaking your messages up into smaller lines
18:33 semiosis this isn't email :)
18:36 Technicool joined #gluster
18:36 robert7811 Got it. Last part was "but with a duration of 10 minutes tops processing time"
18:37 semiosis which of the ,,(processes) is consuming all the cpu?
18:37 glusterbot The GlusterFS core uses three process names: glusterd (management daemon, one per server); glusterfsd (brick export daemon, one per brick); glusterfs (FUSE client, one per client mount point; also NFS daemon, one per server). There are also two auxiliary processes: gsyncd (for geo-replication) and glustershd (for automatic self-heal). See http://goo.gl/F6jqx for more information.
18:38 semiosis check the log file for that process... put it on pastie.org or similar if you want
18:40 hateya joined #gluster
18:42 robert7811 glusterd is the one consuming the cpu. The glusterfsd avg 2 - 17 % over 5 minute period. The url you provided appears to be down ( at least for me)
18:43 semiosis interesting, maybe your isp/org is blocking pastie, it works for me
18:43 semiosis github gists are good too
18:44 semiosis ,,(paste)
18:44 glusterbot For RPM based distros you can yum install fpaste, for debian and ubuntu it's pastebinit. Then you can easily pipe command output to [f] paste [binit] and it'll give you a URL.
18:45 ProT-0-TypE joined #gluster
18:50 bdperkin joined #gluster
18:58 failshell joined #gluster
18:59 dialt0ne joined #gluster
19:08 hateya joined #gluster
19:08 failshell in a 2 machines replicated setup, if one of the peers is disconnected for a few days and reconnects, what's going to happen? is it going to sync the missing data on its own? that's on 3.3
19:13 dialt0ne so i'm trying to import a snapshot of a brick from a different set of gluster systems and it's a no-go
19:14 dialt0ne i know it's months later, but it looks like this didn't work http://irclog.perlgeek.de/gluster/2013-05-02#i_7011177
19:14 glusterbot <http://goo.gl/NtL7s> (at irclog.perlgeek.de)
19:15 dialt0ne mostly, getting this error:
19:15 dialt0ne [2013-07-22 19:11:25.902852] E [posix.c:4119:init] 0-cad-fs1-posix: mismatching volume-id (c0c7f571-9280-4b0e-963b-f3c48a3d262b) received. already is a part of volume 5502df3d-bfb3-4e6b-8e31-94d66c96be74
19:16 dialt0ne is there an easy way to re-create a volume if you just have a snapshot of a brick?
19:16 dialt0ne on a totally different set of servers?
19:18 semiosis path or prefix is already part of a volume
19:19 semiosis path or prefix of it is already part of a volume
19:19 semiosis glusterbot: meh
19:19 glusterbot semiosis: I'm not happy about it either
19:19 semiosis path or a prefix of it is already part of a volume
19:19 glusterbot semiosis: To clear that error, follow the instructions at http://goo.gl/YUzrh or see this bug http://goo.gl/YZi8Y
19:19 semiosis yay
19:19 semiosis dialt0ne: that ^^
19:20 dialt0ne hmm
19:20 dialt0ne well i was doing that, but i don't think it helps me
19:20 dialt0ne i am trying to simulate a recovery on some test systems
19:21 dialt0ne in my production envionment i have a system that has a brick that's been offline for more than 3 weeks and is a few million files out of date
19:22 dialt0ne it's a two node gluster
19:22 dialt0ne if i bring up the broken node, it will kill both nodes so they are unusable
19:22 dialt0ne while it attempts to catch-up
19:23 dialt0ne i'm trying to simulate restoring the 2nd node using a recently taken EBS snapshot
19:23 semiosis you can set cluster.background-self-heal-count to 2
19:24 semiosis also maybe set cluster.data-self-heal-algorithm to full, that works better for some people than the diff alg
19:24 dialt0ne but if i clear the extended attributes it has to do a full self-heal and i'm back to square one
19:25 dialt0ne hm
19:25 semiosis the instruction isn't to clear all xattrs, just one xattr on the brick's top level dir
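
The recipe behind that link is usually along these lines, with /data/brick1 standing in for the brick's top-level directory; double-check the linked instructions before running it, since it strips GlusterFS metadata from the brick root:

    setfattr -x trusted.glusterfs.volume-id /data/brick1   # the one xattr referred to above
    # some variants of the recipe also remove the gfid xattr and the .glusterfs directory:
    # setfattr -x trusted.gfid /data/brick1
    # rm -rf /data/brick1/.glusterfs
    service glusterd restart
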
19:25 dialt0ne ah hm
19:26 skyw joined #gluster
19:29 dialt0ne it's going... heal $VOLNAME info is taking a long time to return
19:34 xdexter joined #gluster
19:35 xdexter Hello, I created a replica volume with 2 nodes. When I mount the volume on clients, do I need to specify one of the two hosts, or how do I mount the cluster?
19:36 semiosis you can use either one as the ,,(mount server)
19:36 glusterbot The server specified is only used to retrieve the client volume definition. Once connected, the client connects to all the servers in the volume. See also @rrnds
19:36 GLHMarmot joined #gluster
19:37 xdexter right, so if I connect to one server and it becomes unavailable, does my volume stay mounted with the data still accessible via server 2?
19:37 dialt0ne yes
19:38 dialt0ne you can add "backupvolfile-server" and "fetch-attempts" in fstab too
19:38 semiosis or use ,,(rrdns)
19:38 glusterbot You can use rrdns to allow failover for mounting your volume. See Joe's tutorial: http://goo.gl/ktI6p
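
A hedged fstab example combining the options mentioned above; host names, volume and mount point are placeholders:

    # /etc/fstab
    server1:/VOLNAME  /mnt/gluster  glusterfs  defaults,_netdev,backupvolfile-server=server2,fetch-attempts=3  0 0
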
19:39 failshell in a 2 machines replicated setup, if one of the peers is disconnected for a few days and reconnects, what's going to happen? is it going to sync the missing data on its own? that's on 3.3
19:40 failshell i just found out, you can't create volumes, as there's one peer missing
19:40 xdexter nice
19:40 xdexter veri thanks
19:40 xdexter very
19:40 semiosis you're welcome
19:41 failshell semiosis: what about my question, you know about that?
19:41 semiosis failshell: all peers have to be online to make cluster config changes.  if you manage somehow to change cluster config while a peer is offline then it will be ,,(peer rejected) when it returns to service
19:41 glusterbot failshell: I do not know about 'peer rejected', but I do know about these similar topics: 'peer-rejected'
19:41 semiosis ,,(peer-rejected)
19:41 glusterbot http://goo.gl/SmmGEA
19:42 failshell ill get our vmware peeps to get that VM back up ASAP then
19:42 semiosis another broken link :(
19:42 * failshell beats glusterbot around with a large trout
19:42 failshell or whatever mIRC used to say
19:49 dialt0ne hm. where do you set cluster.background-self-heal-count ? http://ur1.ca/eqih1
19:49 glusterbot Title: #27073 Fedora Project Pastebin (at ur1.ca)
19:51 jskinner_ joined #gluster
19:56 xdexter does nsupdate need bind installed?
20:00 failshel_ joined #gluster
20:00 semiosis dialt0ne: gluster volume set cluster.background-self-heal-count 2
20:00 semiosis dialt0ne: gluster volume set $volname cluster.background-self-heal-count 2
20:00 semiosis it's undocumented
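
Putting semiosis' two suggestions together, a sketch with VOLNAME as a placeholder; the settings should then show up under "Options Reconfigured" in volume info:

    gluster volume set VOLNAME cluster.background-self-heal-count 2
    gluster volume set VOLNAME cluster.data-self-heal-algorithm full
    gluster volume info VOLNAME
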
20:05 dialt0ne hm.
20:05 dialt0ne it's been set. performance is still a dog :-\
20:05 dialt0ne Options Reconfigured: cluster.background-self-heal-count: 2
20:07 dialt0ne set cluster.data-self-heal-algorithm to full now, see if that's helpful
20:20 xdexter dialt0ne, which do you think is best, RRDNS or backupvolfile-server?
20:23 dialt0ne i have limited experience, but i use backupvolfile-server
20:34 xdexter right
20:34 _pol joined #gluster
20:35 dialt0ne i would seriously consider semiosis' suggestion though... he is a senior gluster admin here
20:35 xdexter My client mounts from server1; when it is unavailable the backupvolfile-server will kick in, right? and my volume will continue working, ok
20:36 xdexter and when server1 comes back, will my client reconnect to it?
20:36 dialt0ne both servers have the configuration, so you can talk to either.
20:36 dialt0ne that's why round-robin works
20:37 xdexter right, and do I have to specify a primary? whenever it is up, is that the one the volume will be mounted from?
20:38 dialt0ne if you're using the fuse client, your clients talk to all members of the cluster all the time
20:38 xdexter ah, right
20:38 dialt0ne but you have to start somewhere. so the first time you do a mount, you need to talk to one of them
20:38 xdexter then it doesn't matter which one it is connected to
20:41 xdexter dialt0ne, do you use AWS?
20:42 dialt0ne yes
20:43 xdexter I have an m1.large webserver and autoscaled apps; I'd create a cluster to serve these web files, ok?
20:43 xdexter do you think using only a t1.micro is bad?
20:45 dialt0ne i am not sure what your question is
20:45 xdexter i use magento and can mount volume /media
20:48 klaxa left #gluster
20:59 cfeller joined #gluster
21:15 caovc joined #gluster
21:19 caovc can anybody give me a pointer on accelerating glusterfs storage via a local SSD? i'm running OpenStack Nova's instance storage on a glusterfs volume, but it regularly results in heavy disk latency peaks and the vm instance being blocked because it waits for disk io
21:19 caovc i tried looking into the group virt examples and i'm running 3.4, but neither really helped much
21:23 semiosis caovc: never heard of anyone doing client-side caching for a vm image.  do i understand correctly?  that sounds dangerous
21:31 _pol joined #gluster
21:31 samppah caovc: you probably want something like this https://github.com/facebook/flashcache/ or a raid cache that is able to use ssd as cache
21:31 glusterbot Title: facebook/flashcache · GitHub (at github.com)
21:32 caovc semiosis: yes, and yes
21:33 caovc it's dangerous, i totally agree; the main issue is just that the write latency is at times unusably high
21:33 caovc some operations take over 20 seconds to finish
21:33 samppah filesystem barriers inside VM seem to cause io wait on other VM aswell
21:34 samppah when using fuse client that is
21:37 caovc semiosis: it seems to me that flashcache, similar to bcache, only supports caching an actual disk device, but not network mounts
21:37 caovc samppah: care to expand on that?
21:41 samppah caovc: i'll try to, just a sec :)
21:42 caovc thanks, much appreciated
21:43 dialt0ne left #gluster
21:45 samppah caovc: i.e. there are two VMs running on a fuse-mounted gluster volume.. if VM #1 does lots of writes it will eventually hit filesystem barriers that make sure the data is actually written to disk.. when this happens it seems to also block IO on VM #2
21:45 samppah i'm not completely sure but i think this is some kind of limitation in fuse
21:46 caovc that sounds a lot like at least part of what's happening, although the barrier seems to be hit by a single VM on some write operations
21:49 _pol joined #gluster
21:49 StarBeas_ joined #gluster
22:12 _pol joined #gluster
22:55 ke4qqq_ joined #gluster
22:56 bfoster_ joined #gluster
23:01 baoboa joined #gluster
23:01 tg2 joined #gluster
23:01 cfeller joined #gluster
23:13 RicardoSSP joined #gluster
23:13 RicardoSSP joined #gluster
23:32 matiz joined #gluster
23:45 theron joined #gluster
23:48 m0zes_ joined #gluster
23:49 m0zes joined #gluster
23:49 StarBeast joined #gluster
