
IRC log for #gluster, 2014-07-14


All times shown according to UTC.

Time Nick Message
00:46 Edddgy joined #gluster
01:37 gildub joined #gluster
01:38 koobs joined #gluster
01:47 Edddgy joined #gluster
02:11 cjanbanan joined #gluster
02:27 haomaiwang joined #gluster
02:42 haomai___ joined #gluster
02:42 bala joined #gluster
02:46 kshlm joined #gluster
02:48 Edddgy joined #gluster
03:00 sputnik1_ joined #gluster
03:19 vpshastry joined #gluster
03:20 nbalachandran joined #gluster
03:36 sputnik1_ joined #gluster
03:46 itisravi joined #gluster
03:49 Edddgy joined #gluster
03:51 kanagaraj joined #gluster
03:54 atinmu joined #gluster
04:04 bharata-rao joined #gluster
04:07 RameshN joined #gluster
04:11 cjanbanan joined #gluster
04:12 sputnik1_ joined #gluster
04:14 haomaiwang joined #gluster
04:15 shubhendu|lunch joined #gluster
04:17 sputnik1_ joined #gluster
04:21 deepakcs joined #gluster
04:27 pvh_sa joined #gluster
04:28 kdhananjay joined #gluster
04:29 kshlm joined #gluster
04:29 sputnik1_ joined #gluster
04:30 haomai___ joined #gluster
04:39 shylesh__ joined #gluster
04:40 nishanth joined #gluster
04:42 davinder16 joined #gluster
04:49 Edddgy joined #gluster
04:52 bala joined #gluster
04:54 ndarshan joined #gluster
04:57 ramteid joined #gluster
05:10 hagarth joined #gluster
05:12 spandit joined #gluster
05:18 sputnik1_ joined #gluster
05:26 lalatenduM joined #gluster
05:31 aravindavk joined #gluster
05:32 rastar joined #gluster
05:37 ppai joined #gluster
05:40 rjoseph joined #gluster
05:41 haomaiwa_ joined #gluster
05:47 haomai___ joined #gluster
05:50 Edddgy joined #gluster
05:53 karnan joined #gluster
05:58 prasanth joined #gluster
06:01 [HACKING-TWITTER joined #gluster
06:05 sahina_ joined #gluster
06:07 ekuric joined #gluster
06:13 paul_uk left #gluster
06:13 saurabh joined #gluster
06:18 hagarth joined #gluster
06:30 vpshastry joined #gluster
06:32 ricky-ti1 joined #gluster
06:33 atinmu joined #gluster
06:38 bala joined #gluster
06:51 Edddgy joined #gluster
06:52 saurabh joined #gluster
06:53 cjanbanan joined #gluster
07:01 ctria joined #gluster
07:04 haomaiwa_ joined #gluster
07:05 bala joined #gluster
07:12 sputnik1_ joined #gluster
07:15 sputnik1_ joined #gluster
07:15 LebedevRI joined #gluster
07:17 keytab joined #gluster
07:20 aravindavk joined #gluster
07:26 liquidat joined #gluster
07:33 andreask joined #gluster
07:40 fsimonce joined #gluster
07:40 cjanbanan joined #gluster
07:43 pvh_sa joined #gluster
07:48 meghanam joined #gluster
07:48 monotek joined #gluster
07:51 Edddgy joined #gluster
07:52 keytab joined #gluster
07:56 Philambdo joined #gluster
07:56 XpineX joined #gluster
08:12 cjanbanan joined #gluster
08:19 aravindavk joined #gluster
08:23 haomaiw__ joined #gluster
08:24 [HACKING-TWITTER joined #gluster
08:27 [HACKING-TWITTER joined #gluster
08:30 deepakcs joined #gluster
08:30 [HACKING-TWITTER joined #gluster
08:32 meghanam_ joined #gluster
08:35 hagarth joined #gluster
08:36 cjanbanan joined #gluster
08:36 andreask left #gluster
08:38 sage joined #gluster
08:42 vimal joined #gluster
08:52 Edddgy joined #gluster
08:53 nbalachandran joined #gluster
08:56 Philambdo joined #gluster
09:02 davinder16 joined #gluster
09:04 elico joined #gluster
09:05 bala joined #gluster
09:08 haomaiwa_ joined #gluster
09:09 haomai___ joined #gluster
09:12 cjanbanan joined #gluster
09:18 haomaiwang joined #gluster
09:19 haomaiwang joined #gluster
09:19 RameshN hchiramm
09:19 RameshN hchiramm: ping
09:19 glusterbot RameshN: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
09:22 hchiramm__ RameshN, pong
09:22 ekuric joined #gluster
09:22 calum_ joined #gluster
09:28 ccha2 joined #gluster
09:31 Rydekull joined #gluster
09:32 fim joined #gluster
09:32 msvbhat joined #gluster
09:32 troj joined #gluster
09:33 FooBar joined #gluster
09:36 aravindavk joined #gluster
09:37 glusterbot New news from newglusterbugs: [Bug 1119209] [RFE] cli command to display volume options <https://bugzilla.redhat.com/show_bug.cgi?id=1119209>
09:37 rastar joined #gluster
09:41 calum_ joined #gluster
09:46 qdk joined #gluster
09:53 Edddgy joined #gluster
10:02 calum_ joined #gluster
10:08 rjoseph joined #gluster
10:16 cjanbanan joined #gluster
10:21 calum_ joined #gluster
10:21 bala2 joined #gluster
10:23 haomai___ joined #gluster
10:26 sahina joined #gluster
10:30 calum_ joined #gluster
10:30 andreask joined #gluster
10:33 giannello joined #gluster
10:34 calum_ joined #gluster
10:36 lalatenduM joined #gluster
10:41 rjoseph joined #gluster
10:46 atinmu joined #gluster
10:49 glusterbot New news from resolvedglusterbugs: [Bug 1107649] glusterd fails to spawn brick , nfs and self-heald processes <https://bugzilla.redhat.com/show_bug.cgi?id=1107649>
10:52 nbalachandran joined #gluster
10:54 Edddgy joined #gluster
10:56 calum_ joined #gluster
10:57 ppai joined #gluster
10:58 gildub joined #gluster
10:59 mbukatov joined #gluster
11:00 cjanbanan joined #gluster
11:03 calum_ joined #gluster
11:11 calum_ joined #gluster
11:17 Lethalman joined #gluster
11:17 Lethalman hi
11:17 glusterbot Lethalman: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
11:17 Lethalman :S
11:18 Lethalman I have one scsi disk, is it possible to export this same scsi disk as a gluster brick from two nodes?
11:18 Lethalman or I need a clustered file system
11:19 Lethalman that is I'd like to ensure that if one server with the brick goes down, the other brick is still accessible
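(What Lethalman is after — the data staying reachable when one server goes down — is gluster's replicated volume type, where each node contributes its own brick on its own local disk rather than both nodes sharing one SCSI device. Sharing a single disk between two servers would indeed need a clustered filesystem underneath; gluster expects each brick to be an ordinary local filesystem. A CLI sketch, with hypothetical hostnames and brick paths:)

```shell
# Replicated volume: each server holds its own local copy of the data,
# and gluster keeps the copies in sync, so losing one brick is survivable.
# server1/server2 and /bricks/b1 are hypothetical names.
gluster peer probe server2
gluster volume create myvol replica 2 server1:/bricks/b1 server2:/bricks/b1
gluster volume start myvol
```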
11:20 aravindavk joined #gluster
11:28 cjanbanan joined #gluster
11:34 B21956 joined #gluster
11:36 rastar joined #gluster
11:36 calum_ joined #gluster
11:37 hagarth joined #gluster
11:39 glusterbot New news from newglusterbugs: [Bug 1119256] [glusterd] glusterd crashed when it failed to create geo-rep status file. <https://bugzilla.redhat.com/show_bug.cgi?id=1119256>
11:46 atinmu joined #gluster
11:53 calum_ joined #gluster
11:55 Edddgy joined #gluster
11:56 rastar joined #gluster
11:57 calum_ joined #gluster
12:11 itisravi_ joined #gluster
12:12 ws2k3 joined #gluster
12:13 gildub joined #gluster
12:14 diegows joined #gluster
12:27 hagarth joined #gluster
12:31 kanagaraj joined #gluster
12:34 chirino joined #gluster
12:36 edward1 joined #gluster
12:37 cjanbanan joined #gluster
12:39 ctria joined #gluster
12:39 chirino joined #gluster
12:45 dino82 left #gluster
12:45 B21956 joined #gluster
12:49 rwheeler joined #gluster
12:52 theron joined #gluster
12:52 japuzzo joined #gluster
12:55 julim joined #gluster
12:55 Edddgy joined #gluster
13:01 firemanxbr joined #gluster
13:20 theron joined #gluster
13:25 andreask joined #gluster
13:26 bene2 joined #gluster
13:31 theron joined #gluster
13:33 theron_ joined #gluster
13:38 andreask joined #gluster
13:38 overclk joined #gluster
13:38 sjm joined #gluster
13:40 nshaikh joined #gluster
13:40 overclk hagarth: ping, change http://review.gluster.org/#/c/8260/ has a dependency (http://review.gluster.org/#/c/8275/)
13:40 glusterbot Title: Gerrit Code Review (at review.gluster.org)
13:41 hagarth overclk: checking
13:43 hagarth overclk: merged, nice work!
13:43 overclk hagarth: thanks!
13:51 lmickh joined #gluster
13:56 Edddgy joined #gluster
13:59 cicero ooh ops
14:02 coredump joined #gluster
14:04 cjanbanan joined #gluster
14:05 calum_ joined #gluster
14:12 _Bryan_ joined #gluster
14:17 tdasilva joined #gluster
14:20 theron joined #gluster
14:20 wushudoin joined #gluster
14:27 mortuar joined #gluster
14:31 nishanth joined #gluster
14:34 glusterbot New news from newglusterbugs: [Bug 1119328] Remove libgfapi python example code from glusterfs source <https://bugzilla.redhat.com/show_bug.cgi?id=1119328>
14:36 al joined #gluster
14:46 shubhendu|lunch joined #gluster
14:48 cjanbanan joined #gluster
14:49 vpshastry joined #gluster
14:52 chirino joined #gluster
14:57 Edddgy joined #gluster
14:58 keytab joined #gluster
15:06 doo joined #gluster
15:10 nullck_ joined #gluster
15:11 doo_ joined #gluster
15:11 nullck joined #gluster
15:18 chirino joined #gluster
15:22 mdavidson I'm getting a problem with a gluster 3.3.1 installation (2 bricks replicated). Every few days a glusterfsd process hangs on both bricks with CPU on the process in the several hundreds, and self heal seems to be stuck on one directory. It can be fixed by stopping and starting the volume and waiting for the self heal to finish.
15:22 jobewan joined #gluster
15:23 mdavidson I have noticed "[server3_1-fops.c:529:server_mkdir_cbk] 0-gv0-server: 1806799: MKDIR (null) (--) ==> -1 (File exists)" in the brick log at about the same time
15:23 glusterbot mdavidson: ('s karma is now -4
15:23 sputnik1_ joined #gluster
15:26 mdavidson and the directory that is stuck self healing has a lot of subdirectories in it (~1000000). We are planning to change the directory structure, but is there anything else I should look at? Is it a problem that may be fixed by 3.4 or 3.5?
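(For a stuck self-heal like mdavidson describes, the gluster CLI can show what the self-heal daemon is still working on — a sketch; the volume name gv0 is taken from the brick-log line quoted above:)

```shell
# What is self-heal still chewing on, and what has it given up on?
gluster volume heal gv0 info               # entries still pending heal
gluster volume heal gv0 info heal-failed   # entries heal failed on (3.3/3.4-era subcommand)
```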
15:30 daMaestro joined #gluster
15:30 daMaestro joined #gluster
15:42 cjanbanan joined #gluster
15:45 ramteid joined #gluster
15:57 dtrainor joined #gluster
15:58 Edddgy joined #gluster
16:00 skulker joined #gluster
16:01 _Bryan_ joined #gluster
16:01 bala joined #gluster
16:05 hchiramm_ joined #gluster
16:11 jcsp joined #gluster
16:17 daMaestro joined #gluster
16:20 sputnik1_ joined #gluster
16:20 RameshN joined #gluster
16:21 nbalachandran joined #gluster
16:29 Mo_ joined #gluster
16:31 JoeJulian mdavidson: A million files in a directory is going to suck from the client perspective when reading a directory, but shouldn't make glusterfsd hang. There have been a number of race conditions fixed since the time of 3.3.1, some of which were actually in the kernel. My preference is 3.4.5 (has that been released yet?) and the latest kernel.
16:36 mdavidson Thanks, I'll push for an upgrade (in addition to the directory restructure)
16:37 sonicrose joined #gluster
16:37 sonicrose hi all!  another visit from yours truly, the guy that only ever comes to IRC if something is terribly wrong :p
16:38 sonicrose when using gluster for VHD files over NFS for virtual machine storage, where's a good place to start in troubleshooting VM hangs
16:39 sonicrose this morning, a handful of VMs were hung.  when i do service glusterd restart on the NFS server the VMs come back to life
16:39 sonicrose one was hung over an hour but sprang back like nothing was ever wrong when i restarted glusterd on that NFS server
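(A first-pass checklist for hangs like sonicrose's, where restarting glusterd on the NFS server unsticks the VMs — a sketch; the volume name myvol is hypothetical:)

```shell
# Is gluster's built-in NFS server alive and registered, and what is it logging?
gluster volume status myvol nfs          # NFS server PID/port on each node
tail -n 100 /var/log/glusterfs/nfs.log   # the gluster NFS translator's own log
rpcinfo -p localhost                     # are NFS/NLM services registered with the portmapper?
```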
16:48 rotbeard joined #gluster
16:54 _dist joined #gluster
16:57 mAd-1 joined #gluster
16:58 Edddgy joined #gluster
17:08 Peter4 joined #gluster
17:14 Peter4 good morning, i'm experiencing a node having an occasional issue rotating nfs.log, and it turns out when i restart glusterfs-server, i found these:
17:14 Peter4 http://pastie.org/9389494
17:14 glusterbot Title: #9389494 - Pastie (at pastie.org)
17:14 Peter4 nfs would come back after a couple of restarts but why does this happen?
17:16 vpshastry joined #gluster
17:19 chirino joined #gluster
17:24 hchiramm_ joined #gluster
17:29 zerick joined #gluster
17:30 LessSeen_ joined #gluster
17:31 pvh_sa joined #gluster
17:34 glusterbot New news from newglusterbugs: [Bug 1115748] Sparse file healing of VM image files is not healing holes properly <https://bugzilla.redhat.com/show_bug.cgi?id=1115748>
17:35 Edddgy joined #gluster
17:35 igorwidl joined #gluster
17:35 XpineX_ joined #gluster
17:37 daMaestro joined #gluster
17:42 cjanbanan joined #gluster
17:43 doo joined #gluster
17:52 sonicrose W [nfs.c:958:nfs_init_state] 0-nfs: /sbin/rpc.statd not found. Disabling NLM....  could this be related to my last message?
17:53 _Bryan_ joined #gluster
18:03 sonicrose resolved with yum install rpc.statd && chkconfig nfslock on && reboot
18:04 Peter4 mine was E [nlm4.c:2464:nlm4svc_init] 0-nfs-NLM: unable to start rpc.statd
18:05 Peter4 i already have rpc.statd installed
18:05 Peter4 and i m on ubuntu
18:05 Peter4 12.04
18:05 Peter4 gluster 3.5.1
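(sonicrose's fix a bit further down is the RHEL/CentOS flavour; the Ubuntu 12.04 equivalent for Peter4's setup would be along these lines — a sketch, assuming statd simply isn't running rather than misconfigured:)

```shell
# On Debian/Ubuntu, rpc.statd ships in nfs-common and runs as the 'statd' job
apt-get install nfs-common   # provides /sbin/rpc.statd
service statd start          # upstart job on 12.04
status statd                 # confirm it is running before restarting gluster's NFS
```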
18:06 JoeJulian sonicrose: Are you mounting nfs from localhost?
18:07 Peter4 what do u mean monitoring nfs?
18:07 Peter4 yes we watching localhost tcpdump
18:14 JoeJulian Peter4: What the error on the second line tells you is that the command "/sbin/rpc.statd" returned an error.
18:16 Peter4 hmmm what could that be?
18:16 Peter4 another rpc.statd is running??
18:17 JoeJulian Doesn't look that way. There are commands above that to kill and restart it if it's running and it cannot kill the process in /var/run/rpc.statd.pid
18:18 JoeJulian ... and you don't have any of the warnings from those lines.
18:18 Peter4 right i do not....
18:18 Peter4 this seems happen when the nfs.log rotate
18:20 JoeJulian Change your log rotate to use copytruncate and then don't bother with the postrotate?
18:20 edward1 joined #gluster
18:21 Peter4 this is already my current config
18:21 Peter4 http://pastie.org/9389705
18:21 glusterbot Title: #9389705 - Pastie (at pastie.org)
18:21 Peter4 i already have copytruncate
18:21 vpshastry joined #gluster
18:25 _disty4343 joined #gluster
18:26 JoeJulian I think maybe your screen must be cutting things off after around 40 characters.
18:27 Peter4 nope, that's all on the config
18:31 balacafalata joined #gluster
18:37 cfeller joined #gluster
18:45 JoeJulian Peter4: I mean on your irc client. You totally missed the second half of my suggestion.
18:45 Peter4 opps
18:45 Peter4 what's that? :)
18:45 JoeJulian Change your log rotate to use copytruncate and then don't bother with the postrotate?
18:46 Peter4 take out the postrotate?
18:46 Peter4 meaning no restart on the daemons ?
18:47 JoeJulian There was no restart anyway, but no HUP, right.
18:48 Peter4 would the old long file holding the proc??
18:48 Peter4 s/long/log/
18:48 glusterbot What Peter4 meant to say was: would the old log file holding the proc??
18:50 JoeJulian copytruncate does what it says. It copies the log file, then truncates it (the original, obviously, not the copy). So there's no need to reopen the log files since it's just appending on to the recently truncated one.
18:51 JoeJulian That doesn't work with programs that log stupidly (like Xorg) but for most other programs it works very reliably.
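(JoeJulian's description of copytruncate boils down to two plain shell steps — copy, then truncate in place — which is exactly what logrotate does for you:)

```shell
# Simulate logrotate's copytruncate: copy the log, then truncate the
# original in place, so a daemon holding it open keeps appending to the
# same inode and never needs a HUP/postrotate.
printf 'line1\nline2\n' > nfs.log
cp nfs.log nfs.log.1          # the rotated copy keeps the old contents
: > nfs.log                   # truncate the original to zero length
printf 'line3\n' >> nfs.log   # the daemon's next write lands in the truncated file
```

The usual caveat is the small window between the copy and the truncate, during which a line can be lost; for most daemons' log volumes that trade-off is acceptable.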
18:52 Peter4 ic
18:52 Peter4 then why the default glusterfs-common has the postrotate ?
18:57 JoeJulian That wasn't my decision.
19:01 Peter4 haha
19:02 Peter4 ok i will comment that postrotate for glusterfs-common
19:02 Peter4 that should include all the gluster logs?
19:02 JoeJulian sure
19:08 Peter4 thanks!!!
19:14 Matthaeus joined #gluster
19:17 daMaestro joined #gluster
19:26 cmtime joined #gluster
19:33 nueces joined #gluster
19:42 cjanbanan joined #gluster
19:48 nueces joined #gluster
20:04 daMaestro joined #gluster
20:06 _dist JoeJulian: you around?
20:08 julim_ joined #gluster
20:13 semiosis _dist: he will be, just leave your message
20:16 cjanbanan joined #gluster
20:21 Matthaeus joined #gluster
20:35 nueces joined #gluster
20:41 B21956 joined #gluster
20:41 m0zes_ joined #gluster
20:42 cjanbanan joined #gluster
20:54 awrbgh joined #gluster
20:54 awrbgh تحذير (Arabic for "warning")
20:54 awrbgh warning
20:54 awrbgh you may be  watched
20:54 awrbgh do usa&israel use the internet(facebook,youtube,twitter, chat rooms ..ect)to spy??
20:54 awrbgh do usa&israel use the internet 2 collect informations,,can we call that spying??
20:54 awrbgh do they record&analyse everything we do on the internet,,can they harm you using these informations??
20:54 awrbgh joined #gluster
20:54 awrbgh joined #gluster
20:54 JoeJulian @kickban awrbgh
20:55 awrbgh joined #gluster
20:55 JoeJulian @kban awrbgh
20:55 glusterbot JoeJulian: Error: I need to be at least halfopped to kick or ban someone.
20:55 awrbgh joined #gluster
20:55 glusterbot joined #gluster
20:55 awrbgh joined #gluster
20:56 awrbgh joined #gluster
20:57 semiosis @kban awrbgh
20:57 awrbgh was kicked by glusterbot: semiosis
21:10 Edddgy joined #gluster
21:11 Matthaeus joined #gluster
21:15 pvh_sa joined #gluster
21:37 theron joined #gluster
21:39 dtrainor joined #gluster
21:43 plarsen joined #gluster
21:44 coredump joined #gluster
21:58 chirino joined #gluster
22:33 jobewan joined #gluster
22:40 Peter4 anyone tried VMWare on GlusteR?
22:42 cjanbanan joined #gluster
22:46 Edddgy joined #gluster
22:50 Peter4 what does these error msg means?
22:50 Peter4 http://pastie.org/9390524
22:50 glusterbot Title: #9390524 - Pastie (at pastie.org)
22:54 sonicrose Peter4, could mean that you dont have extended attributes enabled on your filesystem
22:54 sonicrose if using ext4 formatting, you have to mount those disks with the extra mount option user_xattr
22:55 Peter4 i m using xfs
22:55 sonicrose if you're on XFS or ZFS then i dunno they should have xattr on by default
22:56 sonicrose not sure then, the error means that it wasn't able to write the extended attributes to the files on the bricks
22:58 Peter4 ic thanks
22:58 Peter4 i have the attr packages installed and also thought xfs has xattr enabled on ubuntu...
22:58 Peter4 how can i tell if xattr is enabled?
23:01 Matthaeus use lsattr and setattr to poke at an attribute on a file.
23:01 Matthaeus If it sticks, it's enabled.
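(Worth noting: lsattr/chattr poke ext2-style file *flags*, while what gluster depends on are *extended attributes*, which are probed with the setfattr/getfattr tools from the attr package Peter4 mentions having installed. A sketch against a hypothetical brick path:)

```shell
# Extended-attribute probe (what gluster actually relies on);
# /bricks/b1 is a hypothetical brick mount point
touch /bricks/b1/xattr-probe
setfattr -n user.test -v works /bricks/b1/xattr-probe
getfattr -n user.test /bricks/b1/xattr-probe   # prints the attribute back if xattrs work
rm /bricks/b1/xattr-probe
```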
23:05 Peter4 http://pastie.org/9390554
23:05 glusterbot Title: #9390554 - Pastie (at pastie.org)
23:05 Peter4 does it mean not enabled?
23:13 Matthaeus That would suggest not enabled.
23:14 Peter4 ok
23:14 Matthaeus http://serverfault.com/questions/324975/lsattr-inappropriate-ioctl-for-device-while-reading-flags
23:14 glusterbot Title: linux - lsattr: Inappropriate ioctl for device While reading flags - Server Fault (at serverfault.com)
23:16 Peter4 maybe i should ask the app owners why they try to do so...
23:24 gildub joined #gluster
23:29 doo joined #gluster
23:42 Matthaeus joined #gluster
23:42 cjanbanan joined #gluster
