
IRC log for #gluster, 2014-09-24


All times shown according to UTC.

Time Nick Message
00:20 sprachgenerator joined #gluster
00:34 scuttlemonkey_ joined #gluster
00:35 scuttlemonkey joined #gluster
00:42 Antonio87 joined #gluster
01:13 jmarley joined #gluster
01:32 glusterbot` joined #gluster
01:41 fubada joined #gluster
01:41 harish_ joined #gluster
01:41 Alene_Kiehn79 joined #gluster
01:45 cristov_mac joined #gluster
01:45 plarsen joined #gluster
01:47 cristov_mac joined #gluster
01:49 cristov_mac joined #gluster
01:50 cristov_mac joined #gluster
01:51 cristov_mac hi gluster~. i'm just wondering how to determine which clients have established NFS connections to the gluster server? how can i check this out?
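The question above goes unanswered in the log, so here is a minimal sketch for listing NFS clients connected to a Gluster server, assuming Gluster's built-in NFS server and placeholder names (server node "server1", volume "myvol"); "gluster volume status ... clients" needs 3.3 or newer, and port numbers can vary by version:

    # run on a gluster server node (volume name "myvol" is a placeholder)
    gluster volume status myvol clients      # clients connected to each brick (3.3+)
    netstat -tn | grep ':2049 '              # established TCP connections on the NFS port
    netstat -tn | egrep ':3846[5-7] '        # gluster NFS mountd/nlm helper ports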
01:52 mrcuongnv joined #gluster
01:53 mrcuongnv hi there
01:53 mrcuongnv I got the following error
01:53 mrcuongnv [2014-09-22 09:37:02.873688] E [client3_1-fops.c:2228:client3_1_lookup_cbk] 0-dr7-client-9: remote operation failed: Stale file handle
01:53 mrcuongnv [2014-09-22 09:37:02.873781] E [client3_1-fops.c:2228:client3_1_lookup_cbk] 0-dr7-client-8: remote operation failed: Stale file handle
01:53 mrcuongnv ...and the mount point is not available after that
01:54 mrcuongnv it is a distributed-replicated volume over 14 servers (6 bricks/server), replica 2
01:55 mrcuongnv it might be some updating while reading
01:55 mrcuongnv but I wonder why it causes the crash on the mount point
01:55 mrcuongnv do you guys have any idea?
01:56 mrcuongnv Gluster 3.2.7
01:58 haomaiwa_ joined #gluster
02:07 RicardoSSP joined #gluster
02:07 RicardoSSP joined #gluster
02:12 haomai___ joined #gluster
02:13 haomaiw__ joined #gluster
02:17 haomai___ joined #gluster
02:29 haomaiw__ joined #gluster
02:49 Anita_Farrell94 joined #gluster
03:50 itisravi joined #gluster
03:53 Alford_Cummings joined #gluster
03:55 nbalachandran joined #gluster
03:57 haomaiwa_ joined #gluster
04:02 haomaiw__ joined #gluster
04:03 rjoseph joined #gluster
04:03 rjoseph joined #gluster
04:06 shubhendu joined #gluster
04:08 Intensity joined #gluster
04:09 coredump joined #gluster
04:17 ndarshan joined #gluster
04:28 kanagaraj joined #gluster
04:31 nishanth joined #gluster
04:38 RameshN joined #gluster
04:39 anoopcs joined #gluster
04:45 Rafi_kc joined #gluster
04:45 rafi1 joined #gluster
04:45 rafi1 joined #gluster
04:49 deepakcs joined #gluster
04:50 ramteid joined #gluster
04:53 R0ok_ R0ok_++
04:53 glusterbot R0ok_: Error: You're not allowed to adjust your own karma.
05:00 spandit joined #gluster
05:02 Rafi_kc joined #gluster
05:06 dusmant joined #gluster
05:12 ryan_clough joined #gluster
05:17 hagarth joined #gluster
05:20 aravindavk joined #gluster
05:22 R0ok_ mrcuongnv: stale file handle is probably caused by connection timeout, u mounting on nfs ?
05:27 kshlm joined #gluster
05:28 jiffin joined #gluster
05:28 harish_ joined #gluster
05:29 meghanam joined #gluster
05:29 meghanam_ joined #gluster
05:30 zerick joined #gluster
05:35 Philambdo joined #gluster
05:38 side_control joined #gluster
05:46 rejy joined #gluster
06:00 bala joined #gluster
06:01 pkoro joined #gluster
06:01 soumya__ joined #gluster
06:02 kumar joined #gluster
06:03 fubada joined #gluster
06:05 atalur joined #gluster
06:07 RaSTar joined #gluster
06:11 saurabh joined #gluster
06:19 mrcuongnv @R0ok: hi, I don't use NFS
06:20 mrcuongnv Gluster's NFS error message should be "Stale NFS file handle"
06:20 mrcuongnv I traced the source code and saw that this is a standard error message from the system
06:21 mrcuongnv I mean ESTALE
06:23 ppai joined #gluster
06:27 dusmant joined #gluster
06:31 fubada joined #gluster
06:32 overclk joined #gluster
06:39 glusterbot New news from newglusterbugs: [Bug 1145911] [SNAPSHOT]: Deletion of a snapshot in a volume or system fails if some operation which acquires the volume lock comes in between. <https://bugzilla.redhat.com/show_bug.cgi?id=1145911>
06:39 fubada joined #gluster
06:43 ekuric joined #gluster
06:43 fsimonce joined #gluster
06:44 R0ok_ mrcuongnv: so,
06:44 R0ok_ mrcuongnv: i think ESTALE is nfs related. anyway, what mount options are you using on the client side ?
06:49 ThatGraemeGuy joined #gluster
06:52 hagarth joined #gluster
06:53 ricky-ti1 joined #gluster
06:53 gildub joined #gluster
06:55 elico joined #gluster
06:59 mrcuongnv I use default mount options, no "-o"
07:04 ndevos mrcuongnv: a "stale file handle" can indeed happen on any network filesystem, it merely informs you that the file on the serverside has been changed/moved/deleted/... while the client that got the ESTALE was still assuming it could use it
07:05 saurabh joined #gluster
07:05 _pol joined #gluster
07:06 Fen1 joined #gluster
07:11 dusmant joined #gluster
07:13 raghu joined #gluster
07:18 aravindavk joined #gluster
07:21 R0ok_ mrcuongnv: i normally mount on client side using type as glusterfs & netdev options: -t glusterfs -o='defaults,acl,_netdev'
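For readers following along, a sketch of what that looks like as a full mount command and fstab entry; the server name "server1", volume "myvol" and mount point are placeholders, and plain "-o defaults,acl,_netdev" (no "=") is the usual mount syntax:

    # one-off mount (placeholder names)
    mount -t glusterfs -o defaults,acl,_netdev server1:/myvol /mnt/myvol
    # equivalent /etc/fstab line so the mount waits for the network at boot:
    # server1:/myvol  /mnt/myvol  glusterfs  defaults,acl,_netdev  0 0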
07:24 dmachi joined #gluster
07:43 lalatenduM joined #gluster
07:58 mrcuongnv ndevos: hi, thanks! I have checked the log on the client side and saw that the disconnected mount point is not related to the "stale file handle" error.
07:59 mrcuongnv R0ok_: Ahh, I use _netdev in fstab, obviously. But no acl.
08:00 aravindavk joined #gluster
08:00 Slydder joined #gluster
08:00 Slydder hey all
08:01 hagarth joined #gluster
08:02 R0ok_ mrcuongnv: what about any volume options that you've added/removed recently ? can you provide us with the output of 'gluster volume info <VOLNAME>'
08:03 R0ok_ mrcuongnv: you should also be able to reach the server from the client via glusterd management port(24007), & also on the server you should open the bricks port
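A quick sketch for verifying that reachability from the client; the hostname and volume name are placeholders, and brick ports differ per volume and gluster version:

    nc -zv server1 24007          # can the client reach the glusterd management port?
    # on a server node, the "Port" column here lists the brick ports that must be open:
    gluster volume status myvol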
08:04 deepakcs joined #gluster
08:15 liquidat joined #gluster
08:16 fattaneh joined #gluster
08:17 jiku joined #gluster
08:29 RameshN joined #gluster
08:32 dusmant joined #gluster
08:37 jiffin1 joined #gluster
08:38 dusmantkp_ joined #gluster
08:45 nshaikh joined #gluster
08:52 hagarth joined #gluster
09:10 glusterbot New news from newglusterbugs: [Bug 1145989] package POSTIN scriptlet failure <https://bugzilla.redhat.com/show_bug.cgi?id=1145989> || [Bug 1145992] package POSTIN scriptlet failure <https://bugzilla.redhat.com/show_bug.cgi?id=1145992>
09:10 milka joined #gluster
09:11 harish_ joined #gluster
09:13 jmarley joined #gluster
09:19 vikumar joined #gluster
09:25 gildub joined #gluster
09:39 nshaikh joined #gluster
09:50 ndarshan joined #gluster
10:15 diegows joined #gluster
10:18 RameshN joined #gluster
10:19 spandit joined #gluster
10:24 Pupeno joined #gluster
10:26 Pupeno joined #gluster
10:28 doekia joined #gluster
10:29 hagarth joined #gluster
10:31 karnan joined #gluster
10:44 dusmantkp_ joined #gluster
10:48 kdhananjay joined #gluster
10:49 ricky-ticky1 joined #gluster
10:57 RameshN joined #gluster
11:03 kanagaraj joined #gluster
11:04 LebedevRI joined #gluster
11:18 ppai joined #gluster
11:31 B21956 joined #gluster
11:36 rjoseph joined #gluster
11:41 tty00 joined #gluster
11:43 spandit joined #gluster
11:49 Chr1s1an joined #gluster
12:01 itisravi joined #gluster
12:01 JustinClift *** Weekly GlusterFS Community Meeting starting in #gluster-meeting on irc.freenode.net ***
12:05 rajesh joined #gluster
12:09 hagarth joined #gluster
12:11 julim joined #gluster
12:15 Fen1 joined #gluster
12:16 rwheeler joined #gluster
12:17 kkeithley1 joined #gluster
12:21 chirino joined #gluster
12:35 ppai joined #gluster
12:36 chirino joined #gluster
12:37 chirino_m joined #gluster
12:38 partner joined #gluster
12:38 chirino joined #gluster
12:38 Fen1 Hi, can i mount the same storage on two different place ? And do some synchro between ?
12:43 partner Fen1: you might want to rephrase your question? if you mount the glusterfs volume in two places, both will see the same content, you don't need to do anything
12:44 virusuy joined #gluster
12:44 virusuy joined #gluster
12:48 Fen1 Ok, so i can mount the same volume twice ?
12:49 blubberdi joined #gluster
12:54 partner sure
12:55 blubberdi Hello, I have a replicated gluster volume over 4 bricks (on 4 peers) with gluster 3.5.2. Every peer also has this volume mounted with the glusterfs fuse client. A web application running on these servers saves uploads into the glusterfs and after that a delayed job should import the uploaded files. But for some of the uploads I get a "No such file or directory" even if the delayed job starts 10 seconds after the upload is finished.
12:55 blubberdi If I later try to access the file manually, I don't have any problem. The file is there and is correct. This is everything I found in the gluster logs for it http://pastie.org/private/bljn1x0berl29mkxvttydw
12:55 glusterbot Title: Private Paste - Pastie (at pastie.org)
12:56 partner i've mounted my main volume to dozens of places, though mostly read-only, as writes from several locations to the same files can result in issues (the same can happen on a single machine with several procs altering the same content..)
12:56 msmith_ joined #gluster
13:00 tdasilva joined #gluster
13:13 jiku joined #gluster
13:15 chirino joined #gluster
13:21 theron joined #gluster
13:22 klaxa|work joined #gluster
13:25 klaxa|work does anyone know of gstatus? how do i contact the maintainer? i wanted to run it on our cluster and it didn't work, so i fixed it and wanted to tell him
13:26 XpineX joined #gluster
13:30 hagarth klaxa|work: https://forge.gluster.org/~paulc is the maintainer. I can share his contact details in pm.
13:30 klaxa|work if you think he would be okay with it that would be great
13:30 coredump joined #gluster
13:34 theron joined #gluster
13:41 jmarley joined #gluster
13:48 deepakcs joined #gluster
13:52 Slydder I have been looking for a copy of the docs that explain all the configuration options available for glusterd for the "volume management" block in /etc/glusterfs/glusterd.vol
13:52 Slydder anyone have a link where all the options are listed and explained?
13:57 lmickh joined #gluster
13:58 R0ok_ Slydder: there's an official administrator's guide doc for gluster 3.3.0, but i haven't found any other official docs for any later versions of gluster
13:59 deepakcs joined #gluster
14:00 Slashman joined #gluster
14:02 calum_ joined #gluster
14:15 blubberdi R0ok_: http://gluster.org/documentation/ there is a link to the administrator guide (https://github.com/gluster/glusterfs/tree/master/doc/admin-guide/en-US/markdown)
14:15 glusterbot Title: Get Your Gluster-Fu! Gluster (at gluster.org)
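For orientation, here is roughly what a stock /etc/glusterfs/glusterd.vol of the 3.4/3.5 era contains; treat it as an illustrative sketch, since the option set varies between releases:

    volume management
        type mgmt/glusterd
        option working-directory /var/lib/glusterd
        option transport-type socket,rdma
        option transport.socket.keepalive-time 10
        option transport.socket.keepalive-interval 2
        option transport.socket.read-fail-log off
    end-volume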
14:19 wushudoin| joined #gluster
14:26 Fen1 joined #gluster
14:28 fattaneh1 joined #gluster
14:33 dblack joined #gluster
14:37 R0ok_ @glusterbot: y u give me low karma ?
14:40 fattaneh1 left #gluster
14:42 ndevos R0ok_++ :)
14:42 glusterbot ndevos: R0ok_'s karma is now 1
14:46 R0ok_ ndevos++ thanx man
14:46 glusterbot R0ok_: ndevos's karma is now 4
14:48 R0ok_ I keep getting these messages in the logs on client side mount:[2014-09-24 13:01:42.998945] I [dict.c:370:dict_get] (-->/usr/lib64/glusterfs/3.5.2/xlator/performance/md-cache.so(mdc_lookup+0x318) [0x7f8be33c9518] (-->/usr/lib64/glusterfs/3.5.2/xlator/debug/io-stats.so(io_stats_lookup_cbk+0x113) [0x7f8be31aec63] (-->/usr/lib64/glusterfs/3.5.2/xlator/system/posix-acl.so(posix_acl_lookup_cbk+0x1e1) [0x7f8be2fa0381]))) 0-dict: !this || key=system.posix_acl_access
14:48 glusterbot R0ok_: ('s karma is now -42
14:48 glusterbot R0ok_: ('s karma is now -43
14:48 glusterbot R0ok_: ('s karma is now -44
14:49 R0ok_ ndevos: ^^^^
14:50 R0ok_ currently the log file is at 3.5GB
14:54 cmtime joined #gluster
14:57 ndevos R0ok_: have you mounted the brick filesystem with acl support?
14:57 _Bryan_ joined #gluster
14:58 ndevos and that poor (!
15:05 R0ok_ ndevos: yea, mounted the volume using acl, -o='defaults,acl,_netdev'
15:07 shubhendu joined #gluster
15:07 ndevos R0ok_: can you check if the filesystem on the brick can use acls?
15:08 coredump joined #gluster
15:08 Guest42780 joined #gluster
15:09 R0ok_ ndevos: yea, the underlying FS on the brick is xfs
15:09 daMaestro joined #gluster
15:10 R0ok_ ndevos: xfs supports acls
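One direct way to confirm that is to set and read back a test ACL on the brick filesystem itself; a sketch, with the brick path /export/brick1 purely hypothetical:

    # run on the brick server; /export/brick1 is a placeholder brick path
    touch /export/brick1/.acl-test
    setfacl -m u:nobody:r /export/brick1/.acl-test && echo "brick filesystem accepts ACLs"
    getfacl /export/brick1/.acl-test
    rm /export/brick1/.acl-test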
15:11 R0ok_ ndevos: we've mounted the volume on a client server as a backend storage for our local centos & fedora mirror
15:11 ndevos R0ok_: yeah, that shouldnt be the issue then :-/
15:11 R0ok_ ndevos: ^^^
15:13 ndevos R0ok_: and on the client, how are you mounting?
15:13 ndevos is that nfs or fuse?
15:14 R0ok_ ndevos: its mounted on fuse
15:15 ndevos R0ok_: sorry, I have no idea atm, maybe you should file a bug for this and include steps how to reproduce it
15:15 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
15:16 sprachgenerator joined #gluster
15:16 R0ok_ ndevos: volume details: http://paste.fedoraproject.org/136118/11571768
15:16 glusterbot Title: #136118 Fedora Project Pastebin (at paste.fedoraproject.org)
15:17 ndevos R0ok_: looks good to me...
15:19 jbrooks joined #gluster
15:22 R0ok_ ndevos: am starting to suspect that it might be caused by rsync's --acls option, because we use the volume on our local centos & fedora mirror server, which basically rsyncs from another upstream server every 6hrs
15:23 ndevos R0ok_: hmm, maybe... it could be that rsync tries to set the system.posix_acl_access xattr, or something like that
15:23 ndevos R0ok_: I'll be afk now, do file a bug for this so that we can follow up later
15:23 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
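To test that suspicion, one option is to inspect the ACL xattr through the mount and re-run the mirror job without ACL propagation; a sketch with placeholder paths, and the upstream rsync URL is purely illustrative:

    getfattr -d -m . /mnt/myvol/some/file    # lists xattrs, incl. system.posix_acl_access if an ACL is set
    getfacl /mnt/myvol/some/file             # human-readable view of the same ACL
    # re-run the same mirror job but without -A/--acls, e.g.:
    rsync -av --delete rsync://mirror.example.org/centos/ /mnt/myvol/centos/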
15:28 kshlm joined #gluster
15:30 raghug joined #gluster
15:30 R0ok_ ndevos: thanx man, lemmie document this issue & write a bug report
15:31 raghug JustinClift: ping
15:31 glusterbot raghug: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
15:35 failshell joined #gluster
15:45 B21956 joined #gluster
15:46 lmickh joined #gluster
15:48 msmith_ joined #gluster
15:53 dtrainor_ joined #gluster
15:59 jobewan joined #gluster
16:17 ryan_clough joined #gluster
16:23 fattaneh1 joined #gluster
16:32 arosen joined #gluster
16:33 arosen Hi, i have a quick question. We have a gluster cluster deployed with openstack and are noticing an interesting problem.
16:33 oxidane joined #gluster
16:34 arosen I believe the issue is that when we deploy a snapshot to multiple servers at once, each server pulls down the image and writes it to the same place on the gluster mount point. Would this be causing a data consistency issue?
16:34 arosen Does gluster do anything to ensure consistency if multiple people are writing the same thing to the same place?
16:38 semiosis pretty sure gluster supports posix locks on fuse &  since 3.3 also nfs clients
16:38 semiosis but i'm not sure i understand what you mean by deploy a snapshot, pull down the image, etc
16:39 arosen semiosis: yup so if i deploy several vms at the same time from a snapshot i just took
16:40 arosen all the compute nodes pull down the snapshot to the shared mount at the same time.
16:40 semiosis ok so i really dont know anything about the vm management system you're using
16:40 arosen I think this is what is causing things not to work
16:40 arosen semiosis:  sorry yea i shouldn't have talked about those details.
16:40 arosen semiosis:  let me find out which version of gluster we have deployed.
16:40 semiosis are you using fuse, nfs, or libgfapi clients?
16:42 arosen semiosis:  fuse
16:42 semiosis hmm, perhaps theres some option to improve consistency for your use case. let me find something...
16:43 semiosis have you done this? https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.0/html/Quick_Start_Guide/chap-Quick_Start_Guide-Virtual_Preparation.html
16:43 glusterbot Title: Chapter 3. Managing Virtual Machine Images on Red Hat Storage Servers (at access.redhat.com)
16:43 semiosis adding the virt option group?
16:43 arosen semiosis:  it's mounted as glusterfs
16:43 arosen glusterfs 3.3.2 built on Jul 21 2013 16:38:55
16:44 semiosis that's kinda old
16:44 semiosis might want to upgrade to 3.4
16:44 arosen semiosis:  i don't manage it :( just a user :/
16:44 semiosis but check that virt option group
16:44 arosen let me get them to check
16:44 semiosis i suspect that might help, although this isn't really my area of expertise
16:44 semiosis at least it shouldn't hurt (i hope)
16:47 arosen semiosis:  do you know what: /var/lib/glusterd/groups/virt would be on ubuntu?
16:49 _pol joined #gluster
16:54 semiosis same
16:55 semiosis in any case, if you cant do the group thing, just set those six options listed in the doc individually
16:55 semiosis maybe the option group is only in RHS?  i dont know
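For anyone wanting to apply them by hand, the six options in that RHS 2.0 virt group are roughly the ones below; "myvol" is a placeholder, and some of these (e.g. network.remote-dio) may not exist on 3.3, which is another reason to upgrade first as suggested:

    gluster volume set myvol performance.quick-read off
    gluster volume set myvol performance.read-ahead off
    gluster volume set myvol performance.io-cache off
    gluster volume set myvol performance.stat-prefetch off
    gluster volume set myvol cluster.eager-lock enable
    gluster volume set myvol network.remote-dio enable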
16:56 klaxa joined #gluster
16:58 _pol joined #gluster
17:00 theron joined #gluster
17:02 _pol joined #gluster
17:03 _pol_ joined #gluster
17:04 PeterA joined #gluster
17:08 JoeJulian arosen: It's in the source. Same directory regardless of distro
17:09 JoeJulian https://forge.gluster.org/glusterfs-core/glusterfs/blobs/master/cli/src/cli-cmd-parser.c#line1020
17:09 glusterbot Title: cli/src/cli-cmd-parser.c - glusterfs in GlusterFS Core - Gluster Community Forge (at forge.gluster.org)
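If the groups file is missing from a distro package, it can be created by hand with the same key=value pairs and then applied in one go; a sketch, with "myvol" again a placeholder:

    # /var/lib/glusterd/groups/virt  -- one key=value per line, e.g.:
    #   performance.quick-read=off
    #   cluster.eager-lock=enable
    # then, on a server node, apply the whole group at once:
    gluster volume set myvol group virt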
17:14 sputnik13 joined #gluster
17:14 _pol joined #gluster
17:25 if-kenn_ joined #gluster
17:33 _pol joined #gluster
17:37 Pupeno joined #gluster
17:46 ricky-ti1 joined #gluster
17:48 dtrainor__ joined #gluster
17:50 dtrainor__ joined #gluster
17:52 ekuric joined #gluster
17:55 dtrainor__ joined #gluster
17:57 Pupeno joined #gluster
18:02 Pupeno joined #gluster
18:03 Pupeno joined #gluster
18:17 fattaneh1 joined #gluster
18:27 dblack joined #gluster
18:28 RaSTar joined #gluster
18:29 RaSTar joined #gluster
18:36 _pol joined #gluster
18:36 ThatGraemeGuy joined #gluster
18:42 glusterbot New news from newglusterbugs: [Bug 1136221] The memories are exhausted quickly when handle the message which has multi fragments in a single record <https://bugzilla.redhat.com/show_bug.cgi?id=1136221> || [Bug 1146200] The memories are exhausted quickly when handle the message which has multi fragments in a single record <https://bugzilla.redhat.com/show_bug.cgi?id=1146200>
18:46 _pol joined #gluster
18:46 zerick joined #gluster
18:56 rwheeler joined #gluster
19:05 failshel_ joined #gluster
19:11 theron joined #gluster
19:12 if-kenn_ left #gluster
19:24 dmachi1 joined #gluster
19:26 chirino joined #gluster
19:42 dberry joined #gluster
19:43 longshot902 joined #gluster
19:54 ThatGraemeGuy joined #gluster
20:09 _pol_ joined #gluster
20:14 _pol joined #gluster
20:19 dtrainor joined #gluster
20:36 georgeh joined #gluster
20:39 pkoro joined #gluster
20:39 ryan_clough left #gluster
21:08 dblack joined #gluster
21:20 _pol joined #gluster
21:23 _pol_ joined #gluster
21:27 fubada joined #gluster
21:38 jbrooks joined #gluster
21:40 fubada joined #gluster
21:42 glusterbot New news from newglusterbugs: [Bug 1146263] Initial Georeplication fails to use correct GID on folders ONLY <https://bugzilla.redhat.com/show_bug.cgi?id=1146263>
21:43 dtrainor joined #gluster
21:44 dtrainor joined #gluster
21:49 theron_ joined #gluster
22:26 sprachgenerator joined #gluster
22:36 _pol_ joined #gluster
22:43 glusterbot New news from newglusterbugs: [Bug 1146279] Compilation on OSX is broken with upstream git master and release-3.6 branches <https://bugzilla.redhat.com/show_bug.cgi?id=1146279> || [Bug 1146281] Compilation on OSX is broken with upstream git master and release-3.6 branches <https://bugzilla.redhat.com/show_bug.cgi?id=1146281>
22:48 _pol joined #gluster
22:59 _pol joined #gluster
23:14 _pol joined #gluster
23:33 plarsen joined #gluster
23:39 MacWinner joined #gluster
23:50 fubada joined #gluster
