
IRC log for #gluster, 2013-09-02


All times shown according to UTC.

Time Nick Message
00:14 jporterfield joined #gluster
00:24 jporterfield joined #gluster
00:40 kevein joined #gluster
00:42 jporterfield joined #gluster
00:52 jporterfield joined #gluster
00:55 mambru joined #gluster
01:00 sprachgenerator joined #gluster
01:19 jporterfield joined #gluster
01:46 jporterfield joined #gluster
02:02 lkthomas joined #gluster
02:02 lkthomas hey guys
02:02 lkthomas does geo-replication use rsync at all?
02:05 jporterfield joined #gluster
02:06 robo joined #gluster
02:13 jporterfield joined #gluster
02:25 ncjohnsto yes
02:26 ncjohnsto it uses rsync and an app called gsyncd
02:44 jporterfield joined #gluster
02:52 lkthomas ncjohnsto: is it the same rsync that we use everyday?
02:57 chjohnst_home yes its the same rsync we use everyday
02:58 chjohnst_home taking advantage of xattr
02:59 lalatenduM joined #gluster
03:02 saurabh joined #gluster
03:08 xymox joined #gluster
03:16 bharata-rao joined #gluster
03:18 shubhendu joined #gluster
03:38 jporterfield joined #gluster
03:38 shylesh joined #gluster
03:47 itisravi joined #gluster
03:54 sgowda joined #gluster
04:02 jporterfield joined #gluster
04:02 john126 joined #gluster
04:06 john126 Hi, I am seeing high CPU usage on a webserver accessing files via gluster volumes.  I have been looking into the performance options.  If I run the command "gluster volume set vhosts performance.io-thread-count 64" on one of the servers containing a brick, does it update the setting of that translator on the server AND the client, or are there settings I should be changing on the client?  volume server CPU usage is ok, but it
04:06 john126 is the client that is using more CPU for the glusterfs client
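[ed: for reference, a command fragment illustrating the option in question. The volume name "vhosts" comes from the question above; the behavioural note in the comments is a general description of how gluster volume options propagate, not specific advice from this channel.]

```shell
# performance.io-thread-count tunes the server-side io-threads translator.
# Client-side CPU is governed by client translators (io-cache, read-ahead,
# write-behind), which have their own performance.* options.
gluster volume set vhosts performance.io-thread-count 64

# Verify the option was recorded; connected clients are normally notified
# of the regenerated volfile without a remount.
gluster volume info vhosts
```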
04:15 jporterfield joined #gluster
04:18 kshlm joined #gluster
04:23 jporterfield joined #gluster
04:25 spandit joined #gluster
04:26 kanagaraj joined #gluster
04:27 dusmant joined #gluster
04:38 awheeler joined #gluster
04:39 ppai joined #gluster
04:40 ndarshan joined #gluster
04:47 bala joined #gluster
04:56 jporterfield joined #gluster
04:56 CheRi joined #gluster
05:07 psharma joined #gluster
05:12 rjoseph joined #gluster
05:14 hchiramm_ joined #gluster
05:14 jporterfield joined #gluster
05:19 hchiramm_ joined #gluster
05:19 shruti joined #gluster
05:19 bala joined #gluster
05:25 mohankumar__ joined #gluster
05:29 anands joined #gluster
05:30 aravindavk joined #gluster
05:38 RameshN joined #gluster
05:42 hagarth joined #gluster
05:46 vpshastry1 joined #gluster
05:48 ndarshan joined #gluster
05:49 bala joined #gluster
05:49 raghu joined #gluster
05:50 lalatenduM joined #gluster
05:51 lalatenduM joined #gluster
05:51 shubhendu joined #gluster
05:54 CheRi joined #gluster
05:55 kanagaraj joined #gluster
05:55 dusmant joined #gluster
05:56 aravindavk joined #gluster
05:57 rastar joined #gluster
05:59 shruti joined #gluster
06:01 ppai joined #gluster
06:02 nshaikh joined #gluster
06:05 lkthomas chjohnst_home: what's that xattr for again ?
06:09 chjohnst_home lkthomas look at something called xtime
06:10 chjohnst_home http://gluster.org/pipermail/gluster-users/2012-May/010429.html
06:10 glusterbot <http://goo.gl/DyISQZ> (at gluster.org)
06:10 chjohnst_home simple explanation of how it works
06:11 vimal joined #gluster
06:11 lkthomas thanks
06:12 lkthomas so basically it adds a "tag" on the file to mark it as new
06:14 jtux joined #gluster
06:21 vshankar joined #gluster
06:24 kshlm joined #gluster
06:26 vpshastry1 joined #gluster
06:28 chjohnst_home yes based on an access
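[ed: the marker xattr being discussed can be inspected directly on a brick; a command sketch, where the brick path is a placeholder and the exact attribute name embeds the volume's UUID:]

```shell
# On a brick server, dump the trusted.* extended attributes that
# geo-replication relies on; the xtime marker appears as
# trusted.glusterfs.<volume-uuid>.xtime.
getfattr -d -m 'trusted\.' -e hex /export/brick1/path/to/file
```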
06:28 JoeJulian ~php | john126
06:28 glusterbot john126: (#1) php calls the stat() system call for every include. This triggers a self-heal check which makes most php software slow as they include hundreds of small files. See http://goo.gl/uDFgg for details., or (#2) It could also be worth mounting fuse with glusterfs --attribute-timeout=HIGH --entry-timeout=HIGH --negative-timeout=HIGH --fopen-keep-cache
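[ed: a mount command fragment putting those suggestions together. Server, volume, mount point, and the timeout value are all placeholders; longer timeouts trade metadata freshness for fewer stat/lookup round trips:]

```shell
mount -t glusterfs \
  -o attribute-timeout=600,entry-timeout=600,negative-timeout=600,fopen-keep-cache \
  server1:/vhosts /var/www
```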
06:30 ppai joined #gluster
06:41 spandit joined #gluster
06:44 davinder joined #gluster
06:56 lkthomas thanks
06:59 tjikkun_work joined #gluster
07:01 ngoswami joined #gluster
07:05 jporterfield joined #gluster
07:06 eseyman joined #gluster
07:09 ricky-ticky joined #gluster
07:14 john126 Thank you, JoeJulian.
07:19 dusmant joined #gluster
07:20 shruti joined #gluster
07:21 shubhendu joined #gluster
07:28 vpshastry1 joined #gluster
07:31 RameshN joined #gluster
07:33 ndarshan joined #gluster
07:33 ctria joined #gluster
07:34 jporterfield joined #gluster
07:34 CheRi joined #gluster
07:37 aravindavk joined #gluster
07:42 andreask joined #gluster
07:43 eseyman joined #gluster
07:45 jporterfield joined #gluster
07:50 glusterbot New news from resolvedglusterbugs: [Bug 980770] GlusterFS native client fails to mount a volume read-only <http://goo.gl/nTFRU>
07:51 jporterfield joined #gluster
07:56 jtux joined #gluster
07:57 jporterfield joined #gluster
08:00 xavih joined #gluster
08:04 shubhendu joined #gluster
08:12 jporterfield joined #gluster
08:13 StarBeast joined #gluster
08:15 vpshastry joined #gluster
08:19 mooperd joined #gluster
08:20 aravindavk joined #gluster
08:24 mgebbe joined #gluster
08:25 bharata-rao joined #gluster
08:29 aravindavk joined #gluster
08:34 mbukatov joined #gluster
08:36 edward1 joined #gluster
08:45 jporterfield joined #gluster
09:00 manik joined #gluster
09:02 jcsp joined #gluster
09:04 hchiramm_ joined #gluster
09:08 glusterbot New news from newglusterbugs: [Bug 1003521] missing status as string in volume status and rebalance/remove-brick status commands <http://goo.gl/v6pZSl>
09:09 ProT-0-TypE joined #gluster
09:10 ProT-0-TypE joined #gluster
09:10 spandit joined #gluster
09:10 ProT-0-TypE joined #gluster
09:10 bharata-rao joined #gluster
09:11 anands joined #gluster
09:11 ProT-0-TypE joined #gluster
09:12 ProT-0-TypE joined #gluster
09:13 ProT-0-TypE joined #gluster
09:16 ahomolya joined #gluster
09:16 kshlm joined #gluster
09:19 RedShift joined #gluster
09:19 mtanner joined #gluster
09:30 ninkotech joined #gluster
09:31 spandit joined #gluster
09:43 ppai joined #gluster
10:15 manik joined #gluster
10:18 anands joined #gluster
10:34 satheesh joined #gluster
10:39 kanagaraj joined #gluster
10:41 Elendrys joined #gluster
10:46 sgowda joined #gluster
10:54 csshankaravadive joined #gluster
10:56 csshankaravadive I am trying to install gluster 3.4. But running into weird problems. Should I always install 3.3 first and then upgrade to 3.4?
10:58 RedShift no
10:59 mtanner_ joined #gluster
10:59 RedShift 3.4 should work out of the box. Which OS are you using?
11:00 csshankaravadive For servers I am using centos 6.0
11:00 satheesh1 joined #gluster
11:00 csshankaravadive and for client I use oracle linux 5.5
11:01 csshankaravadive When I try to install client I get this problem
11:01 csshankaravadive -> Missing Dependency: libglusterfs.so.0()(64bit) is needed by package glusterfs-3.4.0-8.el5.x86_64 (/glusterfs-3.4.0-8.el5.x86_64)
11:04 csshankaravadive or am I trying to install the wrong version of the package
11:04 csshankaravadive this is my distro details
11:04 csshankaravadive Enterprise Linux Enterprise Linux Server release 5.5 (Carthage)
11:04 csshankaravadive Red Hat Enterprise Linux Server release 5.5 (Tikanga)
11:04 csshankaravadive can you suggest me the right package
11:11 kanagaraj_ joined #gluster
11:12 satheesh joined #gluster
11:17 sgowda joined #gluster
11:18 shubhendu joined #gluster
11:22 RameshN joined #gluster
11:24 RedShift csshankaravadive that is a too old OS release
11:24 RedShift you should really upgrade
11:25 csshankaravadive Thanks will try with a newer version
11:25 RedShift what's the full error message?
11:25 RedShift if it complains about missing dependencies, it will automatically search for those dependencies, so that yum message by itself is not enough
11:25 csshankaravadive can you point me to the latest documentation for replacing crashed server
11:25 dusmant joined #gluster
11:26 csshankaravadive http://gluster.org/community/documentation/index.php/Gluster_3.2:_Brick_Restoration_-_Replace_Crashed_Server
11:26 glusterbot <http://goo.gl/60uJV> (at gluster.org)
11:31 csshankaravadive I am using this, the gluster self-heal daemon crashes when I run gluster volume heal $VOLUME info
11:33 NuxRo csshankaravadive: are you on 64bit OS?
11:33 csshankaravadive yes
11:35 csshankaravadive and sometimes the info command just ends without showing any output
11:35 csshankaravadive http://gluster.org/community/documentation/index.php/Gluster_3.2:_Triggering_Self-Heal_on_Replicate
11:35 glusterbot <http://goo.gl/BCKDiO> (at gluster.org)
11:35 csshankaravadive this did not help too
11:55 jporterfield joined #gluster
12:01 shubhendu joined #gluster
12:06 RameshN joined #gluster
12:08 ninkotech joined #gluster
12:09 RedShift joined #gluster
12:18 eseyman joined #gluster
12:25 rcheleguini joined #gluster
12:28 jporterfield joined #gluster
12:34 jporterfield joined #gluster
12:35 ryan_t joined #gluster
12:43 piotrektt joined #gluster
12:45 ctria joined #gluster
12:58 vpshastry left #gluster
12:59 satheesh joined #gluster
13:00 jporterfield joined #gluster
13:06 jporterfield joined #gluster
13:14 saurabh joined #gluster
13:28 jporterfield joined #gluster
13:30 vpshastry1 joined #gluster
13:40 tziOm joined #gluster
13:44 hagarth joined #gluster
13:49 DV joined #gluster
13:50 vpshastry joined #gluster
13:52 vpshastry left #gluster
13:52 manik joined #gluster
13:56 shylesh joined #gluster
13:56 Elendrys joined #gluster
14:00 mohankumar__ joined #gluster
14:06 RobertLaptop joined #gluster
14:41 jporterfield joined #gluster
14:42 shylesh joined #gluster
14:44 eseyman joined #gluster
14:48 vpshastry1 joined #gluster
14:51 vpshastry joined #gluster
14:58 semiosis @later tell durzo on vacation so slow to respond.  upstart job moved to -client package, you should still have one.
14:58 glusterbot semiosis: The operation succeeded.
15:00 semiosis @later tell chirino DM me on twitter when you get this so we can sort out time for lunch tomorrow
15:00 glusterbot semiosis: The operation succeeded.
15:02 aravindavk joined #gluster
15:14 manik joined #gluster
15:14 mooperd joined #gluster
15:18 robo joined #gluster
15:21 nightwalk joined #gluster
15:29 hchiramm_ joined #gluster
15:45 nightwalk joined #gluster
15:48 mooperd joined #gluster
15:56 satheesh joined #gluster
15:56 zerick joined #gluster
16:05 jclift joined #gluster
16:14 davinder joined #gluster
16:35 jcsp joined #gluster
16:54 jclift_ joined #gluster
17:21 eightyeight joined #gluster
17:43 andreask joined #gluster
17:54 manik joined #gluster
17:55 lalatenduM joined #gluster
18:18 manik joined #gluster
18:23 RedShift ping
18:26 Guest53741 joined #gluster
18:34 MrNaviPacho joined #gluster
18:34 jclift_ joined #gluster
18:39 awheeler joined #gluster
18:40 Han left #gluster
18:41 RedShift has anyone succesfully used gluster with ESXi?
18:46 daMaestro joined #gluster
18:53 jporterfield joined #gluster
19:00 mooperd joined #gluster
19:09 lalatenduM RedShift, I don't think anybody has used it
19:09 lalatenduM RedShift, why not use KVM
19:09 RedShift because everything's vmware here
19:09 JoeJulian Yes, there have been many users reporting successful use with ESXi
19:10 RedShift I'm using it in a replicated setup
19:10 RedShift after some tuning I was able to get consistent failover without data loss
19:10 lalatenduM JoeJulian, cool
19:10 lalatenduM RedShift, good to know
19:11 RedShift this is just a test setup, so can't comment on real life
19:11 RedShift but I want to know about other people's experiences with it
19:17 RedShift JoeJulian you know of some people in here?
19:20 JuanBre I just rebooted one of the servers and now when I try to start gluster I get "E [xlator.c:408:xlator_init] 0-management: Initialization of volume 'management' failed, review your volfile again
19:20 JuanBre "
19:22 jporterfield joined #gluster
19:22 JuanBre there is no "management" volume anywhere...
19:26 RedShift did you check the logfile?
19:26 RedShift there is a management volume, it's hidden but it's configured
19:29 lalatenduM JuanBre, did your IP change after the reboot?
19:29 JuanBre nop
19:30 lalatenduM JuanBre, ok
19:30 lalatenduM JuanBre, which Linux distribution are you using?
19:30 JuanBre no change...I just rebooted it because there were some open logs that were filling the root partition
19:30 JuanBre ubuntu-server 12.04
19:30 JuanBre gluster 3.4.0
19:31 lalatenduM JuanBre, check /var/log/glusterfs/etc*
19:31 lalatenduM JuanBre, see if you are seeing some error messages at the end of the etc-glusterfs-glusterd.vol.log  log file
19:32 JuanBre its full of [2013-09-02 19:06:07.665663] E [socket.c:2767:socket_connect] 0-management: connection attempt failed (Connection refused)
19:32 JuanBre [2013-09-02 19:06:10.666626] W [common-utils.c:2300:gf_ports_reserved] 0-glusterfs-socket:  is not a valid port identifier
19:33 lalatenduM JuanBre, looks like a bug, restart your glusterd services on nodes
19:35 JuanBre mmm...its working fine in the other 3 nodes...
19:35 lalatenduM JuanBre, what exactly is failing for you
19:38 JuanBre I rebooted one node...after booting... no gluster services started ...so I tried to run them manually...and the error I get is the one from my first post
19:40 lalatenduM JuanBre, I've never seen this issue, you can send a mail to gluster-users describing your issue
19:40 lalatenduM JuanBre, somebody might be able to help you
19:43 JuanBre lalatenduM: thanks anyway ...I think my problem is that the nodes got a 100% usage on the root partition because of gluster logs...
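[ed: the log-growth problem described above is commonly handled with a logrotate policy; a config sketch, where the path is the usual default and the limits are assumptions to adjust per distro:]

```
/var/log/glusterfs/*.log {
    weekly
    rotate 4
    compress
    missingok
    copytruncate
}
```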
19:46 daMaestro joined #gluster
19:54 jporterfield joined #gluster
20:04 johnmorr joined #gluster
20:19 jporterfield joined #gluster
20:22 robo joined #gluster
20:29 jporterfield joined #gluster
20:34 dkorzhevin joined #gluster
20:34 dkorzhevin joined #gluster
20:41 tziOm joined #gluster
20:50 jporterfield joined #gluster
21:19 badone joined #gluster
21:19 JuanBre just in case someone has the same problem...I had an empty peer file...
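[ed: a full root partition can truncate glusterd's state files to zero bytes, which matches the "Initialization of volume 'management' failed" error above. A sketch of a helper to spot such files so they can be restored from a healthy node; /var/lib/glusterd is the usual state directory on most distros, an assumption here:]

```shell
# find_empty_state_files: print any zero-length files under a glusterd
# state directory (e.g. truncated peer files in its peers/ subdirectory).
find_empty_state_files() {
    find "$1" -type f -empty
}

# Typical use:
# find_empty_state_files /var/lib/glusterd
```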
21:24 johnmorr joined #gluster
21:41 jporterfield joined #gluster
21:43 dmojoryder is there a downside to using fopen-keep-cache when mounting glusterfs? Seems to really improve perf (and reduce network bandwidth) when repeatedly accessing files over gluster. Kinda almost seems it should be enabled by default
21:47 StarBeas_ joined #gluster
21:51 johnmorr joined #gluster
22:01 nueces joined #gluster
22:10 fidevo joined #gluster
22:13 StarBeast joined #gluster
22:32 _chjohnsthome joined #gluster
22:32 helloadam joined #gluster
22:34 _chjohnsthome left #gluster
22:35 _chjohnsthome joined #gluster
23:15 mika joined #gluster
23:15 mika joined #gluster
23:15 wirewater joined #gluster
23:15 jporterfield joined #gluster
23:26 asias joined #gluster
23:33 jporterfield joined #gluster
23:45 nueces joined #gluster
23:51 jporterfield joined #gluster
23:59 jporterfield joined #gluster
