
IRC log for #gluster, 2013-12-17


All times shown according to UTC.

Time Nick Message
00:00 _pol joined #gluster
00:04 theron joined #gluster
00:19 _pol_ joined #gluster
00:19 johnbot11 joined #gluster
00:40 mattappe_ joined #gluster
01:07 13WABKH4U joined #gluster
01:12 yinyin joined #gluster
01:13 _pol__ joined #gluster
01:20 vpshastry joined #gluster
01:22 _pol joined #gluster
01:23 _pol_ joined #gluster
01:24 badone joined #gluster
01:26 bala joined #gluster
01:37 _Bryan_ joined #gluster
01:37 harish joined #gluster
01:45 digimer left #gluster
01:47 B21956 joined #gluster
01:54 raghug joined #gluster
01:57 feixuetuba joined #gluster
01:58 mattapp__ joined #gluster
01:58 feixuetuba hi. I'm trying to create a volume with two nodes, using the command "gluster volume create gv0 replica 2 192.168.70.31:/mnt/brick 192.168.70.32:/mnt/brick"
01:59 feixuetuba but
02:00 feixuetuba an error occurs: "volume create: gv0: failed: Brick: 192.168.70.32:/mnt/brick not available. Brick may be containing or be contained by an existing brick", why?
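[Aside: this error usually means the requested brick path overlaps with (is a parent or child of) a path already registered as a brick. A common workaround, sketched here with illustrative paths, is to give each volume its own subdirectory under the mounted filesystem:]
    # on both 192.168.70.31 and 192.168.70.32
    mkdir -p /mnt/brick/gv0
    gluster volume create gv0 replica 2 \
        192.168.70.31:/mnt/brick/gv0 192.168.70.32:/mnt/brick/gv0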
02:03 mattapp__ joined #gluster
02:13 _pol joined #gluster
02:17 mattapp__ joined #gluster
02:20 _pol joined #gluster
02:22 harish joined #gluster
02:35 bharata-rao joined #gluster
02:36 mattappe_ joined #gluster
02:39 mattappe_ joined #gluster
02:40 mtanner_ joined #gluster
02:44 matta____ joined #gluster
02:46 gdubreui joined #gluster
02:53 _Bryan_ joined #gluster
02:56 diegows joined #gluster
02:57 kshlm joined #gluster
03:16 mattapp__ joined #gluster
03:23 vpshastry joined #gluster
03:30 mattapp__ joined #gluster
03:33 RameshN_ joined #gluster
03:33 glusterbot New news from newglusterbugs: [Bug 1043737] RHS NFS directory access is too slow, impacting customer workflows with ACL <https://bugzilla.redhat.com/show_bug.cgi?id=1043737>
03:36 bharata-rao joined #gluster
03:37 MacWinner joined #gluster
03:41 kanagaraj joined #gluster
03:42 psyl0n joined #gluster
03:43 itisravi joined #gluster
03:58 dusmant joined #gluster
04:31 vpshastry joined #gluster
04:33 DV_ joined #gluster
04:33 ppai joined #gluster
04:39 kdhananjay joined #gluster
04:42 Ramereth joined #gluster
04:44 MiteshShah joined #gluster
04:57 prasanth joined #gluster
04:57 bala joined #gluster
05:03 lalatenduM joined #gluster
05:04 glusterbot New news from newglusterbugs: [Bug 1043737] NFS directory access is too slow with ACL <https://bugzilla.redhat.com/show_bug.cgi?id=1043737>
05:05 saurabh joined #gluster
05:20 CheRi joined #gluster
05:23 aravindavk joined #gluster
05:24 theron joined #gluster
05:30 shubhendu joined #gluster
05:30 shylesh joined #gluster
05:34 vpshastry joined #gluster
05:37 psharma joined #gluster
05:38 bharata-rao joined #gluster
05:43 satheesh joined #gluster
05:44 ababu joined #gluster
05:46 mohankumar joined #gluster
05:50 bulde joined #gluster
05:50 Ramereth joined #gluster
06:00 spandit joined #gluster
06:05 zeittunnel joined #gluster
06:22 hagarth joined #gluster
06:22 kshlm joined #gluster
06:34 theron joined #gluster
06:36 yosafbridge joined #gluster
06:43 meghanam joined #gluster
06:43 meghanam_ joined #gluster
06:44 raghu` joined #gluster
06:44 hagarth joined #gluster
06:48 nshaikh joined #gluster
06:58 anands joined #gluster
07:04 hagarth joined #gluster
07:10 FarbrorLeon joined #gluster
07:11 rastar joined #gluster
07:18 dneary joined #gluster
07:19 jtux joined #gluster
07:23 ricky-ti1 joined #gluster
07:23 satheesh1 joined #gluster
07:32 ekuric joined #gluster
07:36 harish joined #gluster
07:39 ngoswami joined #gluster
07:54 verdurin joined #gluster
08:00 eseyman joined #gluster
08:12 andreask joined #gluster
08:14 ctria joined #gluster
08:18 sekon joined #gluster
08:18 sekon Hello glusterd is segfaulting on centos6.4
08:19 sekon I am using gluster 3.4.1
08:21 ndevos sekon: can you ,,(paste) the last 100 lines of /var/log/glusterfs/etc-glusterfs-glusterd*.log ?
08:21 glusterbot sekon: For RPM based distros you can yum install fpaste, for debian and ubuntu it's pastebinit. Then you can easily pipe command output to [f] paste [binit] and it'll give you a URL.
08:26 pk joined #gluster
08:30 sekon ndevos: https://dpaste.de/e2Tp
08:30 glusterbot Title: dpaste.de: Snippet #250842 (at dpaste.de)
08:31 sekon iptables is disabled, so is selinux
08:32 ndevos sekon: that looks really broken: [mem-pool.c:349:mem_get0] (-->/usr/lib64/libglusterfs.so.0(glusterfs_graph_construct+0x3af) [0x7fefc69bcb7f] (-->/usr/lib64/libglusterfs.so.0(yyparse+0xa38) [0x7fefc69bc558] (-->/usr/lib64/libglusterfs.so.0(get_new_dict_full+0x25) [0x7fefc6972085]))) 0-mem-pool: invalid argument
08:32 ndevos sekon: can you check if all the glusterfs* rpms are of the same version/release?
08:33 ndevos sekon: dpaste the outout of: rpm -qa 'glusterfs*'
08:34 ndevos sekon: and also 'rpm -qf /usr/lib64/libglusterfs.so.0' and 'rpm -Vf /usr/lib64/libglusterfs.so.0'
08:42 hybrid512 joined #gluster
08:45 sekon ndevos: please take a look at https://dpaste.de/WYx2
08:45 glusterbot Title: dpaste.de: Snippet #250843 (at dpaste.de)
08:45 sekon I will be right back need to go AFK for a while
08:47 dneary joined #gluster
08:50 ndevos sekon: you have mismatching versions installed: glusterfs-libs-3.4.0.36rhs-1.el6.x86_64 and glusterfs-3.4.1-3.el6.x86_64
08:50 ndevos sekon: you should replace glusterfs-libs-3.4.0.36rhs-1.el6.x86_64 with glusterfs-libs-3.4.1-3.el6.x86_64
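[Aside: a minimal sketch of the fix ndevos suggests, assuming the 3.4.1-3.el6 packages are available from an already-configured repository:]
    # bring glusterfs-libs up to the same version/release as the other glusterfs packages
    yum install glusterfs-libs-3.4.1-3.el6
    rpm -qa 'glusterfs*' | sort      # verify everything now reports 3.4.1-3.el6
    service glusterd restart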
08:59 muhh joined #gluster
08:59 edward2 joined #gluster
09:03 tziOm joined #gluster
09:03 mgebbe joined #gluster
09:04 mgebbe_ joined #gluster
09:08 pk joined #gluster
09:10 Technicool joined #gluster
09:10 itisravi joined #gluster
09:17 mohankumar joined #gluster
09:32 pk joined #gluster
09:35 sankey joined #gluster
09:37 atrius joined #gluster
09:40 anands joined #gluster
09:41 tryggvil joined #gluster
09:45 Rydekull joined #gluster
10:00 vpshastry1 joined #gluster
10:02 hagarth joined #gluster
10:03 verdurin_ joined #gluster
10:22 nshaikh joined #gluster
10:35 GabrieleV joined #gluster
10:40 vimal joined #gluster
10:41 diegows joined #gluster
10:45 psyl0n joined #gluster
10:57 calum_ joined #gluster
11:21 meghanam joined #gluster
11:22 meghanam_ joined #gluster
11:30 sekon left #gluster
11:30 atrius joined #gluster
11:55 hagarth joined #gluster
11:57 ProT-0-TypE joined #gluster
12:07 glusterbot New news from newglusterbugs: [Bug 1043886] [RFE] Add two more options in `gluster volume set` command storage.anonuid and storage.anongid <https://bugzilla.redhat.com/show_bug.cgi?id=1043886>
12:08 vpshastry1 joined #gluster
12:10 shyam joined #gluster
12:15 satheesh1 joined #gluster
12:23 stickyboy joined #gluster
12:30 mohankumar joined #gluster
12:31 ira joined #gluster
12:37 dneary joined #gluster
12:47 Norky joined #gluster
12:54 dneary joined #gluster
13:05 lpabon joined #gluster
13:09 Norky joined #gluster
13:09 zeittunnel joined #gluster
13:12 mattappe_ joined #gluster
13:17 kanagaraj joined #gluster
13:20 andreask joined #gluster
13:25 vpshastry joined #gluster
13:27 satheesh1 joined #gluster
13:29 RameshN joined #gluster
13:32 _Bryan_ joined #gluster
13:40 anands joined #gluster
13:41 davidbierce joined #gluster
13:42 davidbie_ joined #gluster
13:47 pk left #gluster
13:49 RameshN joined #gluster
13:54 17SADQCQ9 joined #gluster
13:57 GabrieleV joined #gluster
14:02 B21956 joined #gluster
14:03 achuz joined #gluster
14:05 bennyturns joined #gluster
14:10 zeittunnel joined #gluster
14:22 DV_ joined #gluster
14:28 kanagaraj joined #gluster
14:30 CheRi joined #gluster
14:32 theron joined #gluster
14:32 MiteshShah joined #gluster
14:33 hybrid512 joined #gluster
14:40 social Hmm, how does gluster handle fclose without fsync?
14:41 dbruhn joined #gluster
14:41 social If I have an application that closes a file and another node then grabs the file, on a local posix fs this works fine because the vfs already has the file, but posix doesn't address the issue of multiple nodes and network filesystems
14:42 social should gluster call fsync on close, so that it reproduces the posix workflow regardless of multiple nodes, or should the applications change to call fsync before close?
14:42 japuzzo joined #gluster
14:43 ndevos I think NFS calls fsync() on close(), and gluster should do that too, IMHO
14:43 bdperkin joined #gluster
14:45 mohankumar joined #gluster
14:45 social ndevos: I think gluster fuse doesn't; we just hit such a race where one node did fclose and another node didn't see the data.
14:47 ndevos social: hmm, I would think that FUSE itself would not do that, local fs's don't need it, so it should be something that glusterfs-fuse explicitly needs to do
14:47 ndevos I don't know if it does it right now, if not, you should file a bug
14:47 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
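[Aside: a minimal shell reproducer for the race social describes, assuming a hypothetical fuse mount at /mnt/gluster on two clients:]
    # node A: write and close without an explicit fsync (shell redirection does exactly that)
    echo payload > /mnt/gluster/handoff.txt
    # node B: read immediately afterwards; on a local POSIX fs the data is always visible,
    # but over the fuse mount it may not be if close() does not flush to the bricks
    cat /mnt/gluster/handoff.txt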
14:49 bala joined #gluster
14:49 ira joined #gluster
14:51 coxy82 joined #gluster
14:54 mtanner_ joined #gluster
14:56 dneary joined #gluster
15:03 Norky joined #gluster
15:07 zwu joined #gluster
15:07 mattapp__ joined #gluster
15:09 mattapp__ joined #gluster
15:11 johnmilton joined #gluster
15:12 bennyturns joined #gluster
15:13 bugs_ joined #gluster
15:13 social ndevos: before I open a bug I'd like to look into the code, can you just point me to something?
15:16 ndevos social: I'd start with fuse_release() in xlators/mount/fuse/src/fuse-bridge.c - the kernel calls release() on fclose() and friends
15:18 johnmilton joined #gluster
15:21 johnmilton joined #gluster
15:24 dneary joined #gluster
15:24 social ndevos: ok, I don't see fsync there. I guess the fix is just moving fuse_fsync/fuse_fsync_resume above fuse_release and calling fuse_fsync after GET_STATE in fuse_release?
15:27 ndevos social: I'm not very familiar with how the fuse operations work, but yes, I think you want to call fuse_fsync() from within fuse_release()
15:29 Technicool joined #gluster
15:33 wushudoin joined #gluster
15:34 flrichar joined #gluster
15:38 glusterbot New news from newglusterbugs: [Bug 1044008] fclose doesn't cause fsync on fuse mount <https://bugzilla.redhat.com/show_bug.cgi?id=1044008>
15:47 semiosis joined #gluster
15:47 semiosis joined #gluster
15:47 zerick joined #gluster
15:49 Norky joined #gluster
15:50 zerick joined #gluster
15:50 mkzero any idea why our debian 7 clients for gluster 3.4.1 have high memory usage? cache is set to 512M and on one of the clients the res size is over 2.5G..
15:53 social mkzero: leak?
15:56 smellis anyone have a problem with a glusterfs pool in libvirt going wonky after pool-refresh?
15:57 smellis I'm getting device busy (which is right because there are VMs running on it) and the pool gets marked as inactive, but it's still mounted
15:57 smellis I have to migrate vms to another host, manually unmount the glusterfs volume and then pool-start it again
15:59 mkzero social: not sure. i have 13 clients, all debian 7, all v3.4.1, but only a few seem to have such a high memory consumption
16:00 mkzero social: but it seems to have something to do with the usage pattern. those with the most memory consumption open the most files(largest one is a client copying the whole volume using rsync)
16:02 kaptk2 joined #gluster
16:03 plarsen joined #gluster
16:03 ndk joined #gluster
16:05 social mkzero: and it's fuse mount process? we have serious traffic on gluster yet not many memleaks in fuse mounts
16:05 japuzzo_ joined #gluster
16:07 mkzero social: yep, fuse mount.
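[Aside: one way to gather more data on a suspected client-side leak, not mentioned in the log, is a statedump of the fuse client process; sending SIGUSR1 makes glusterfs write its memory-pool and allocation statistics to a dump file, typically under /var/run/gluster:]
    kill -USR1 $(pidof glusterfs)      # assumes a single glusterfs client process on this host
    ls /var/run/gluster/               # look for the newly written dump file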
16:08 glusterbot New news from newglusterbugs: [Bug 977497] gluster spamming with E [socket.c:2788:socket_connect] 0-management: connection attempt failed (Connection refused) when nfs daemon is off <https://bugzilla.redhat.com/show_bug.cgi?id=977497>
16:08 * social goes to check
16:08 anands joined #gluster
16:09 jag3773 joined #gluster
16:09 social 3425  2.6 2101m    0 1.7g   64 2.0g 2640    4    0 S  20   0  1.0 glusterfs
16:10 social hmm
16:10 hagarth social: can you try setting nfs.drc to off?
16:10 social hagarth: to fix what issue?
16:12 social mkzero: that sounds like there's a memleak too, but it didn't get my attention as I was working on worse offenders :/ same usage here, a node with a lot of small/big files flying around
16:12 hagarth social: is that the nfs process?
16:12 social hagarth: no, that's fuse mount
16:13 hagarth social: my suggestion can be ignored then
16:20 social hmm, how does one start a glusterfs mount under valgrind?
16:20 psyl0n joined #gluster
16:25 LoudNoises joined #gluster
16:25 theron joined #gluster
16:28 thogue joined #gluster
16:28 hagarth social: you can use the same command line that mount -t glusterfs produces for running a glusterfs client
16:28 samsamm joined #gluster
16:29 hagarth social: you would need to run glusterfs in non-daemon mode too
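[Aside: a sketch of what hagarth describes; SERVER, VOLNAME and /mnt/test are placeholders, and -N keeps glusterfs in the foreground so valgrind can wrap it:]
    valgrind --leak-check=full --log-file=/tmp/glusterfs-valgrind.log \
        glusterfs -N --volfile-server=SERVER --volfile-id=VOLNAME /mnt/test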
16:38 vpshastry joined #gluster
16:44 Norky joined #gluster
16:45 bdperkin_gone joined #gluster
16:46 bdperkin joined #gluster
16:46 vpshastry left #gluster
16:47 mattapp__ joined #gluster
16:52 bdperkin left #gluster
16:52 semiosis <promotion type="self" level="shameless"> http://www.gluster.org/2013/12/how-picture-marketing-is-using-and-extending-glusterfs/ </promotion>
16:53 purpleidea semiosis: ouuu
16:53 purpleidea nice
16:53 sroy_ joined #gluster
16:54 purpleidea semiosis: article says "irc.gnu.org" btw not freenode
16:58 semiosis i noticed that too
16:58 semiosis johnmark: whats up with that?
16:59 mattapp__ joined #gluster
16:59 purpleidea JMWbot: @remind get semiosis article updated from irc.gnu.org to freenode
16:59 JMWbot purpleidea: Okay, I'll remind johnmark when I see him. [id: 6]
17:00 semiosis wat?!
17:00 purpleidea semiosis: don't you read the ml?
17:00 purpleidea semiosis: http://gluster.org/pipermail/gluster-users/2013-December/038329.html
17:00 glusterbot Title: [Gluster-users] Introducing... JMWBot (the alter-ego of johnmark) (at gluster.org)
17:01 * purpleidea evil laugh
17:01 semiosis apparently i do not :/
17:01 purpleidea feel free to use it. if you set your own @remind, JMWbot will /msg you when he clears the task
17:06 dblack joined #gluster
17:06 hagarth semiosis: that was a great read, thanks for your kind words as well!
17:07 semiosis of course
17:12 purpleidea does hagarth == Hiram Chirino ?
17:14 kmai007 joined #gluster
17:18 _Bryan_ joined #gluster
17:19 semiosis purpleidea: chirino == Hiram Chirino
17:19 semiosis @seen chirino
17:19 glusterbot semiosis: chirino was last seen in #gluster 15 weeks, 6 days, 23 hours, 12 minutes, and 30 seconds ago: <chirino> semiosis: pong
17:19 semiosis pretty sure he's been in here since then, but doesnt say much
17:20 purpleidea semiosis: okay, i thought hagarth was vijay, or maybe i'm confused
17:20 semiosis yes vijay
17:20 purpleidea cool
17:23 flakrat joined #gluster
17:28 SFLimey_ joined #gluster
17:30 Mo__ joined #gluster
17:36 diegows joined #gluster
17:38 jbd1 joined #gluster
17:47 badone joined #gluster
17:56 vpshastry joined #gluster
17:56 vpshastry left #gluster
18:06 rotbeard joined #gluster
18:16 tair joined #gluster
18:16 rwheeler joined #gluster
18:21 andreask joined #gluster
18:23 raghu joined #gluster
18:24 aravindavk joined #gluster
18:44 B21956 joined #gluster
19:05 dbruhn So close...
19:06 dbruhn 50GB to go, and I can shut off the last of my net app and windows boxes
19:11 zaitcev joined #gluster
19:13 davidbierce joined #gluster
19:21 tryggvil joined #gluster
19:21 psyl0n joined #gluster
19:39 sroy_ joined #gluster
19:40 aliguori joined #gluster
19:56 gdubreui joined #gluster
19:56 gdubreui joined #gluster
19:57 flakrat joined #gluster
20:01 lalatenduM joined #gluster
20:08 gdubreui joined #gluster
20:10 plarsen joined #gluster
20:10 tryggvil joined #gluster
20:12 jag3773 joined #gluster
20:20 mafrac joined #gluster
20:22 aliguori joined #gluster
20:23 mafrac Hello all
20:23 mafrac I have problems adding a new brick due to "Peer rejected" status
20:24 mafrac After trying this, the problem is still there: http://www.gluster.org/community/documentation/index.php/Resolving_Peer_Rejected
20:24 glusterbot Title: Resolving Peer Rejected - GlusterDocumentation (at www.gluster.org)
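[Aside: the wiki page linked above describes, roughly, wiping the rejected peer's local state and re-probing; paraphrased from memory, so verify against the page before running anything like this:]
    # on the rejected peer only
    service glusterd stop
    # keep glusterd.info, remove the rest of the local state
    find /var/lib/glusterd -mindepth 1 -not -name glusterd.info -delete
    service glusterd start
    gluster peer probe <a-good-peer>
    service glusterd restart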
20:24 mafrac Do you have any advice?
20:25 mafrac Restarting the new brick after "peer probe" fails
20:28 mafrac I appreciate your attention
20:36 LessSeen_ joined #gluster
20:36 LessSeen_ hey
20:38 LessSeen_ i can not seem to create a gluster volume on 2 ec2 ebs backed instances..  it always says that the local machine is not in 'Peer in Cluster' state
20:41 LessSeen_ i am totally new to gluster, so it is possible i am doing something wrong - using elastic IPs
20:48 badone joined #gluster
20:50 semiosis LessSeen_: i recommend creating new CNAME records just for gluster, like gluster1.mydomain.net & pointing those at the public-hostname of the EIP or instance
20:51 semiosis then in the instance, alias that same hostname to 127.0.0.1 in /etc/hosts
20:56 mafrac semiosis: Could you help me with the "peer rejected" problem, please?
20:56 psyl0n joined #gluster
20:56 semiosis i wrote that article on the wiki, thats all the help i have to give on the subject
20:56 semiosis bbl
20:56 sashko joined #gluster
20:57 LessSeen_ sweet thanks for the reply
20:57 sashko hey guys what's the current stable version for gluster, 3.3 or 3.4?
20:58 mafrac semiosis: thank you
20:58 smellis sashko: 3.4.1 I believe
20:58 LessSeen_ i have tried aliasing 127.0.0.1 in the /etc/hosts file to no avail - just trying to get a proof-of-concept replicated volume
20:58 kmai007 GA is glusterfs3.4.1
20:58 sashko thanks smellis
20:59 sashko kmai007: the gluster.org website is a bit confusing; it says the latest is 3.4 but also offers 3.3 as the latest GA
21:00 sashko JoeJulian: you around?
21:02 jag3773 joined #gluster
21:23 rwheeler joined #gluster
21:23 JoeJulian sashko: on and off
21:23 sashko JoeJulian: you probably know this best, is 3.3 current stable or 3.4, for production use?
21:24 JoeJulian I would use 3.4
21:33 olebra joined #gluster
21:36 sashko ok
21:36 sashko i'm confused since gluster.org says 3.3 is for production, not sure why
21:36 sashko maybe someone forgot to update?
21:38 kmai007 sashko: i think 3.3 for production is because it's the same level Red Hat Storage is running at
21:39 sashko oh ok
21:39 sashko thanks
21:39 kmai007 for their enterprise supported version
21:39 sashko makes sense
21:39 JoeJulian I disagree. RHS uses 3.3 with patches hand-picked from 3.4.
21:39 sashko was non-blocking self-heal introduced in 3.3 or 3.4?
21:39 JoeJulian 3.3
21:42 sashko thanks
21:42 kmai007 mafrac:
21:42 kmai007 mafrac: i had something like that happen over the weekend
21:42 kmai007 it was DEV so I didn't care if it blew up
21:42 kmai007 but I suppose i got lucky
21:43 kmai007 does your peer status output show "rejected connected <hostname>
21:43 kmai007 ?
21:51 kmai007 mine was already connected, but got out of whack
21:51 kmai007 "gluster peer detach <hostname> force"
21:52 kmai007 "gluster peer proble <hostname>"
21:52 kmai007 "gluster volume sync <vol> <peername> "
21:52 kmai007 maybe you'll get lucky
21:53 psyl0n joined #gluster
21:54 davidbierce joined #gluster
22:14 neofob joined #gluster
22:22 [o__o] joined #gluster
22:35 _Bryan_ joined #gluster
22:48 mafrac kmai007: Yes, it's: "State: Peer Rejected (Connected)"
22:49 mafrac I've just tested your advice but no luck
22:51 psyl0n joined #gluster
22:58 RicardoSSP joined #gluster
22:58 RicardoSSP joined #gluster
22:58 psyl0n joined #gluster
23:08 psyl0n joined #gluster
23:10 mafrac I finally solved the “State: Peer Rejected (Connected)” problem this way: http://pastebin.com/Q8aPWuDq
23:10 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
23:11 mafrac @paste
23:11 glusterbot mafrac: For RPM based distros you can yum install fpaste, for debian and ubuntu it's pastebinit. Then you can easily pipe command output to [f] paste [binit] and it'll give you a URL.
23:12 mafrac Here is better: http://fpaste.org/62688/
23:12 glusterbot Title: #62688 Fedora Project Pastebin (at fpaste.org)
23:34 psyl0n joined #gluster
23:46 mkzero quick question: is there any mechanism to time out stale requests on gluster's fuse mount? we are running a web app and it seems that when a request for a particular file is made, that webserver process goes into uninterruptible sleep (D state), which is kind of annoying because those processes lurk around like zombies and never return until the machine is rebooted
23:49 JoeJulian mkzero: That shouldn't be happening. Worst-case those should timeout after half an hour.
23:49 JoeJulian What version?
23:51 semiosis mkzero: i ran into that when mixing incompatible glusterfs versions.  had to stop & start the volume to free up access to the file.
23:52 Gilbs1 joined #gluster
23:53 mkzero JoeJulian: currently have processes from ~1h ago. running the whole cluster on 3.4.1(clients and servers)
