
IRC log for #gluster, 2014-05-21


All times shown according to UTC.

Time Nick Message
00:15 chirino_m joined #gluster
00:17 jag3773 joined #gluster
00:30 yinyin_ joined #gluster
00:46 chirino joined #gluster
00:53 MrAbaddon joined #gluster
00:57 bala joined #gluster
00:58 sjm joined #gluster
01:14 doekia joined #gluster
01:19 vpshastry joined #gluster
01:34 glusterbot New news from newglusterbugs: [Bug 1099683] Silent error from call to realpath in features/changelog/lib/src/gf-history-changelog.c <https://bugzilla.redhat.com/show_bug.cgi?id=1099683>
01:38 gdubreui joined #gluster
01:46 bala joined #gluster
02:02 DV_ joined #gluster
02:04 glusterbot New news from newglusterbugs: [Bug 1099690] unnecessary code in gf_history_changelog_done() <https://bugzilla.redhat.com/show_bug.cgi?id=1099690>
02:05 yinyin- joined #gluster
02:06 plarsen joined #gluster
02:06 harish joined #gluster
02:11 mattappe_ joined #gluster
02:16 DV_ joined #gluster
02:21 vimal joined #gluster
02:50 badone joined #gluster
02:55 baojg joined #gluster
02:59 DV_ joined #gluster
03:01 gmcwhistler joined #gluster
03:02 baojg joined #gluster
03:07 baojg_ joined #gluster
03:10 baojg joined #gluster
03:13 dusmant joined #gluster
03:17 baojg_ joined #gluster
03:18 yinyin_ joined #gluster
03:19 chirino_m joined #gluster
03:21 bharata-rao joined #gluster
03:21 plarsen joined #gluster
03:30 vpshastry joined #gluster
03:38 jmarley joined #gluster
03:39 jmarley joined #gluster
03:41 kshlm joined #gluster
03:42 RameshN joined #gluster
03:44 kanagaraj joined #gluster
03:45 badone joined #gluster
03:52 itisravi joined #gluster
03:57 DV_ joined #gluster
04:06 kumar joined #gluster
04:06 shubhendu joined #gluster
04:08 kanagaraj joined #gluster
04:11 RameshN joined #gluster
04:12 jmarley joined #gluster
04:12 jmarley joined #gluster
04:14 ppai joined #gluster
04:23 haomaiwa_ joined #gluster
04:28 yinyin_ joined #gluster
04:28 kanagaraj joined #gluster
04:28 vpshastry joined #gluster
04:32 vpshastry left #gluster
04:38 badone joined #gluster
04:39 rastar joined #gluster
04:40 jmarley joined #gluster
04:41 Ark joined #gluster
04:44 RameshN joined #gluster
04:45 social joined #gluster
04:45 d-fence joined #gluster
04:52 psharma joined #gluster
04:52 bala joined #gluster
04:56 DV_ joined #gluster
04:59 baojg joined #gluster
04:59 ravindran1 joined #gluster
05:02 hagarth joined #gluster
05:02 spandit joined #gluster
05:05 kdhananjay joined #gluster
05:05 ramteid joined #gluster
05:08 ndarshan joined #gluster
05:10 lalatenduM joined #gluster
05:13 gmcwhistler joined #gluster
05:16 prasanthp joined #gluster
05:16 davinder2 joined #gluster
05:19 saurabh joined #gluster
05:20 marmalodak joined #gluster
05:23 nishanth joined #gluster
05:25 social joined #gluster
05:31 aravinda_ joined #gluster
05:35 kanagaraj joined #gluster
05:35 vpshastry joined #gluster
05:35 yinyin_ joined #gluster
05:43 jmarley joined #gluster
05:43 jmarley joined #gluster
05:50 raghu joined #gluster
05:53 vimal joined #gluster
06:01 calum_ joined #gluster
06:04 yinyin_ joined #gluster
06:04 yinyin_ joined #gluster
06:04 social joined #gluster
06:06 marmalodak I've been following this guide: http://www.gluster.org/community/documentation/index.php/QuickStart
06:06 glusterbot Title: QuickStart - GlusterDocumentation (at www.gluster.org)
06:06 d-fence joined #gluster
06:06 marmalodak and I can mount my gluster volume on a different machine well enough with mount -t glusterfs
06:07 marmalodak but not with mount -t nfs ...
06:07 marmalodak nfs-server is off on the gluster host
06:07 marmalodak rpcbind is on on the gluster host
06:07 marmalodak showmount -e reports
06:08 marmalodak showmount -e reports clnt_create: RPC: Program not registered
06:08 marmalodak iptables is off
06:08 marmalodak selinux is disabled
06:09 marmalodak what else do I need to do?
06:10 marmalodak this is with fedora 20 fwiw
06:15 ktosiek joined #gluster
06:17 lalatenduM marmalodak, are you using nfs version 3 for nfs mount
06:17 lalatenduM I mean you should
06:19 aravindavk joined #gluster
06:26 tjikkun joined #gluster
06:27 hagarth marmalodak:  you can check the gluster nfs log file to see if it is starting up fine
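A minimal sketch of the NFS mount being attempted in this exchange. The server and volume names (gfs-server, gv0) and the mount points are hypothetical; Gluster's built-in NFS server speaks NFSv3 over TCP only, so the vers=3 and tcp options are the usual sticking point:

    # native FUSE mount (the part that already works)
    mount -t glusterfs gfs-server:/gv0 /mnt/gv0

    # the same volume over NFS -- Gluster NFS is v3/TCP only
    mount -t nfs -o vers=3,tcp gfs-server:/gv0 /mnt/gv0-nfs

    # confirm the export is visible before mounting
    showmount -e gfs-server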
06:30 rahulcs joined #gluster
06:33 jag3773 joined #gluster
06:35 glusterbot New news from newglusterbugs: [Bug 1086759] Add documentation for the Feature: Improved block device translator <https://bugzilla.redhat.com/show_bug.cgi?id=1086759> || [Bug 1077452] Unable to setup/use non-root Geo-replication <https://bugzilla.redhat.com/show_bug.cgi?id=1077452>
06:41 ppai joined #gluster
06:42 Pupeno joined #gluster
06:43 gmcwhistler joined #gluster
06:44 Philambdo joined #gluster
06:46 ctria joined #gluster
06:51 edward1 joined #gluster
06:52 Honghui joined #gluster
06:54 nshaikh joined #gluster
06:55 karimb joined #gluster
07:05 glusterbot New news from newglusterbugs: [Bug 1086758] Add documentation for the Feature: Changelog based parallel geo-replication <https://bugzilla.redhat.com/show_bug.cgi?id=1086758>
07:07 rahulcs_ joined #gluster
07:15 rahulcs joined #gluster
07:21 edward3 joined #gluster
07:25 ProT-0-TypE joined #gluster
07:29 rahulcs joined #gluster
07:30 rahulcs joined #gluster
07:30 gmcwhistler joined #gluster
07:33 ricky-ti1 joined #gluster
07:38 rahulcs joined #gluster
07:41 fsimonce joined #gluster
07:42 keytab joined #gluster
07:47 andreask joined #gluster
07:51 chirino joined #gluster
07:52 wgao joined #gluster
07:54 rahulcs joined #gluster
07:54 xavih joined #gluster
08:18 akay joined #gluster
08:20 akay hello hello
08:22 chirino_m joined #gluster
08:23 bharata-rao joined #gluster
08:23 hagarth hello
08:23 glusterbot hagarth: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
08:25 akay has anyone come across the problem of a rebalance causing a large amount of failures with the errors: "failed to do a stat on gv0-replicate-0 (No such file or directory)" " 0-gv0-client-0: remote operation failed: No such file or directory" "0-gv0-replicate-2: setxattr dict is null" ?
08:27 liquidat joined #gluster
08:30 haomaiwa_ joined #gluster
08:33 haomai___ joined #gluster
08:44 bharata-rao joined #gluster
08:46 xavih joined #gluster
08:47 kdhananjay joined #gluster
08:50 kdhananjay1 joined #gluster
08:59 xavih joined #gluster
09:01 rgustafs joined #gluster
09:02 nishanth joined #gluster
09:08 glusterbot New news from newglusterbugs: [Bug 1093594] Glfs_fini() not freeing the resources <https://bugzilla.redhat.com/show_bug.cgi?id=1093594>
09:10 bharata-rao joined #gluster
09:16 liquidat joined #gluster
09:24 chirino joined #gluster
09:24 rahulcs joined #gluster
09:28 _Bryan_ joined #gluster
09:36 fyxim_ joined #gluster
09:45 aravindavk joined #gluster
09:54 bharata-rao joined #gluster
10:02 ira joined #gluster
10:02 gmcwhistler joined #gluster
10:03 aravindavk joined #gluster
10:05 ctria joined #gluster
10:08 karimb joined #gluster
10:12 sjm joined #gluster
10:14 spandit joined #gluster
10:16 haomaiwa_ joined #gluster
10:17 bharata-rao joined #gluster
10:19 edward1 joined #gluster
10:22 gdubreui joined #gluster
10:23 haomai___ joined #gluster
10:24 Honghui joined #gluster
10:28 ramteid joined #gluster
10:28 kanagaraj joined #gluster
10:38 glusterbot New news from newglusterbugs: [Bug 1086766] Add documentation for the Feature: Libgfapi <https://bugzilla.redhat.com/show_bug.cgi?id=1086766>
10:39 giannello joined #gluster
10:41 vpshastry joined #gluster
10:52 RameshN joined #gluster
10:53 shubhendu joined #gluster
11:00 ctria joined #gluster
11:08 vpshastry2 joined #gluster
11:08 glusterbot New news from newglusterbugs: [Bug 1091677] Issues reported by Cppcheck static analysis tool <https://bugzilla.redhat.com/show_bug.cgi?id=1091677>
11:10 andreask joined #gluster
11:10 gmcwhistler joined #gluster
11:14 tryggvil joined #gluster
11:17 DV_ joined #gluster
11:18 ppai joined #gluster
11:21 lkoranda joined #gluster
11:23 mattappe_ joined #gluster
11:40 scuttle_ joined #gluster
11:44 Honghui joined #gluster
11:51 vpshastry joined #gluster
11:52 ceiphas joined #gluster
11:52 ceiphas hi folks! i get this error recently a lot when working on a gluster share: /bin/rm: WARNING: Circular directory structure.
11:55 chirino_m joined #gluster
11:56 ceiphas i think my "rootfs on gluster" experiment has died today, i get too many really strange errors to get this to work really
12:04 ppai joined #gluster
12:08 glusterbot New news from newglusterbugs: [Bug 1099878] Need support for handle based Ops to fetch/modify extended attributes of a file <https://bugzilla.redhat.com/show_bug.cgi?id=1099878>
12:11 tdasilva joined #gluster
12:12 Pupeno joined #gluster
12:13 itisravi joined #gluster
12:14 mattapperson joined #gluster
12:17 harish joined #gluster
12:20 Pupeno joined #gluster
12:20 pk1 joined #gluster
12:20 pk1 edward1: ping
12:20 glusterbot pk1: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
12:20 pk1 edward1: Are you edward shishkin?
12:20 ndevos some people never learn!
12:21 pk1 ndevos: :-)
12:21 ndevos :)
12:21 ndarshan joined #gluster
12:22 pk1 ndevos: I sent the patch already for some spurious failure issue. Seems like edward is also looking into it. I want to inform him
12:22 pk1 ndevos: I sent a mail but he didn't respond
12:22 pk1 ndevos: So trying desperately to catch hold of the guy
12:23 ndevos pk1: edward works from the brno office, I think, maybe you can poke someone in that office?
12:25 ndevos pk1: I've asked lczerner about edward1, iirc they sit close together
12:26 ndevos no response yet...
12:26 diegows joined #gluster
12:27 pk1 ndevos: Thanks :-)
12:27 edward1 pk1: yes, I am
12:28 pk1 edward1: hey I just sent the patch for that crypt.t issue. Would love your review.
12:29 pk1 edward1: http://review.gluster.org/7824
12:29 glusterbot Title: Gerrit Code Review (at review.gluster.org)
12:32 edward1 pk1: great, I'll take a look at this, thank you!
12:32 pk1 edward1: Thanks!
12:35 RameshN joined #gluster
12:36 shubhendu joined #gluster
12:36 Pupeno_ joined #gluster
12:38 sroy_ joined #gluster
12:41 Pupeno joined #gluster
12:42 prasanthp joined #gluster
12:50 Ark joined #gluster
12:54 japuzzo joined #gluster
12:55 prasanthp joined #gluster
12:57 primechuck joined #gluster
12:58 Pupeno Can gluster help if I want to have a file system distributed in EU and US servers? Most of the read-writes will happen locally, but most, not all.
13:00 ndevos Pupeno: it might be better to split the filesystem in two, and only write on one volume depending on the location, and mirror that volume to the other location with geo-replication
13:01 ndevos Pupeno: multi-master geo-replication is the feature that you would like, but that is not available yet...
13:03 Pupeno I see.
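A rough sketch of the split ndevos describes, assuming GlusterFS 3.5-style distributed geo-replication and hypothetical volume/host names (vol-eu, vol-us, us-slave, eu-slave). Each site writes only to its local master volume and receives the other site's data as a read-only mirror:

    # one-time key distribution for passwordless geo-rep (run on the EU master cluster)
    gluster system:: execute gsec_create

    # mirror the EU-written volume to a slave volume hosted in the US
    gluster volume geo-replication vol-eu us-slave::vol-eu-copy create push-pem
    gluster volume geo-replication vol-eu us-slave::vol-eu-copy start

    # the US cluster does the same in the opposite direction for vol-us
    gluster volume geo-replication vol-us eu-slave::vol-us-copy create push-pem
    gluster volume geo-replication vol-us eu-slave::vol-us-copy start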
13:16 vpshastry joined #gluster
13:16 vpshastry left #gluster
13:19 calum_ joined #gluster
13:20 rwheeler joined #gluster
13:25 mattapperson joined #gluster
13:30 jobewan joined #gluster
13:32 ndk joined #gluster
13:35 davinder2 joined #gluster
13:36 harish joined #gluster
13:46 plarsen joined #gluster
13:46 mattapperson joined #gluster
13:56 kaptk2 joined #gluster
14:00 zaitcev joined #gluster
14:00 warci joined #gluster
14:03 sahina joined #gluster
14:09 glusterbot New news from newglusterbugs: [Bug 1099922] Unchecked buffer fill by gf_readline in gf_history_changelog_next_change <https://bugzilla.redhat.com/show_bug.cgi?id=1099922>
14:17 \malex\ joined #gluster
14:18 wushudoin joined #gluster
14:20 sage____ joined #gluster
14:21 mattappe_ joined #gluster
14:22 borreman_dk Hi, quick question. just upgraded servers + clients from 3.4 to 3.5. Now, running the post-upgrade quota script fails on all volumes unless i umount/mount the volumes on the clients. Is there a way to refresh the clients so I can avoid remounting?
14:31 borreman_dk also, on 3.4 on the clients the volume size = the quota size. Now on 3.5 I see the total brick size again. is this an optional setting?
14:32 sijis i'm seeing this error in my logs for a file giving i/o error. http://paste.fedoraproject.org/103812/14006827/
14:32 glusterbot Title: #103812 Fedora Project Pastebin (at paste.fedoraproject.org)
14:33 sijis based on the message, i should remove the file from all volumes except from the good one. if i have 2 nodes, gfs11,gfs12 and the copy on gfs11 looks OK.
14:33 sijis do i just go into the volume on gfs12 and remove the file?
14:33 sijis is it as simple as that?
14:34 sijis i've already tried gluster volume gbp3 heal ;/
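One commonly used manual procedure for this situation on a replica-2 volume of this vintage, sketched with hypothetical brick and file paths (verify the gfid and compare both copies before deleting anything). The detail that is easy to miss is that the bad copy also has a hard link under .glusterfs which must be removed, otherwise the heal keeps finding it:

    # on gfs12, the node holding the bad copy
    BRICK=/export/brick1            # hypothetical brick path
    FILE=path/to/broken.file        # path relative to the brick root

    # note the file's gfid; the .glusterfs hard link is named after it
    getfattr -n trusted.gfid -e hex $BRICK/$FILE

    rm $BRICK/$FILE
    # if getfattr reported 0x0123456789abcdef0123456789abcdef, the link would be:
    rm $BRICK/.glusterfs/01/23/01234567-89ab-cdef-0123-456789abcdef

    # then ask for the good copy on gfs11 to be replicated back
    gluster volume heal gbp3 full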
14:37 haomaiwang joined #gluster
14:37 marmalodak I can mount my gluster volume with mount -t glusterfs but not with mount -t nfs
14:37 marmalodak using vers=3
14:40 haomai___ joined #gluster
14:47 bit4man joined #gluster
14:52 gmcwhistler joined #gluster
14:58 dbruhn joined #gluster
14:58 sjoeboo joined #gluster
14:59 hagarth joined #gluster
15:00 JustinClift *** Gluster Community Meeting time ***
15:00 JustinClift (in #gluster-meeting)
15:00 vlakshmanan joined #gluster
15:01 vlakshmanan Getting input/output error while reading file on gfs client
15:01 vlakshmanan anyone have a solution for this?
15:02 lalatenduM joined #gluster
15:03 kdhananjay joined #gluster
15:07 sjm joined #gluster
15:15 lpabon joined #gluster
15:19 micu joined #gluster
15:21 Guest80755 Can anyone tell me how to mount glusterfs as NFS?
15:23 sijis Guest80755: we just do mount -t glusterfs server:/volumenmame /mnt
15:23 marmalodak Guest80755: I'm trying to figure out the same thing
15:23 marmalodak sijis: that's not NFS
15:23 glusterbot New news from resolvedglusterbugs: [Bug 1090298] Addition of new server after upgrade from 3.3 results in peer rejected <https://bugzilla.redhat.com/show_bug.cgi?id=1090298>
15:24 sijis as far as i know, it is its own protocol.
15:24 marmalodak Guest80755: the things that I understand are to turn off other nfs servers, and mount with -o vers=3
15:24 hagarth Guest80755: s/glusterfs/nfs/ in the mount command
15:25 sijis a quick search shows http://gluster.org/community/documentation/index.php/Gluster_3.1:_Manually_Mounting_Volumes_Using_NFS
15:25 glusterbot Title: Gluster 3.1: Manually Mounting Volumes Using NFS - GlusterDocumentation (at gluster.org)
15:25 sijis looks like you can
15:26 marmalodak with showmount -e I get clnt_create: RPC: Program not registered
15:28 KennethWilke joined #gluster
15:30 hagarth marmalodak: you might want to check nfs.log in /var/log/glusterfs to see if gluster nfs started up fine
15:30 sprachgenerator joined #gluster
15:32 Guest80755 thanks guys I will try that
15:32 liquidat joined #gluster
15:32 bit4man joined #gluster
15:38 lmickh joined #gluster
15:38 marmalodak thanks hagarth, anything specific I should look for?
15:39 glusterbot New news from newglusterbugs: [Bug 1099955] self-heal process can sometimes create directories instead of symlinks for the root gfid file in .glusterfs <https://bugzilla.redhat.com/show_bug.cgi?id=1099955>
15:39 hagarth marmalodak: you might want to check for rpc registration failure messages or something similar.
15:42 marmalodak the last rpc-related item I see is [rpc-clnt.c:1685:rpc_clnt_reconfig] 0-gv0-client-1: changing port to 49152 (from 0)
15:45 marmalodak once I've created my gluster volume, are there any additional steps needed to start the nfs server for gluster?
15:46 ndevos marmalodak: no, it should all be pretty straight forward
15:46 marmalodak I turned off nfs-server
15:46 ndevos marmalodak: some things to check, 1) do you have the rpcbind service running, 2) do not have a different nfs-server installed/running
15:46 marmalodak I checked that rpcbind was indeed running
15:47 marmalodak iptables is off
15:47 ndevos marmalodak: oh, and ig the server has an nfs-export mounted, that can cause conflicts too
15:47 marmalodak I did all this on the machine that is the first brick
15:47 ndevos s/ig/if/
15:47 glusterbot What ndevos meant to say was: marmalodak: oh, and if the server has an nfs-export mounted, that can cause conflicts too
15:48 jbd1 joined #gluster
15:48 marmalodak I don't see any other volumes mounted via nfs
15:49 marmalodak rpcbind is the service that I can start with systemctl, right?
15:49 ndevos marmalodak: you can restart rpcbind (to clear any registrations) and restart glusterd (that starts the nfs process and it tries to register new)
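What ndevos suggests, as a short sketch for marmalodak's Fedora 20 box (systemd unit names rpcbind and glusterd assumed). The order matters: rpcbind first, so the NFS process that glusterd spawns can register afresh:

    systemctl restart rpcbind
    systemctl restart glusterd

    # glusterd restarts the gluster NFS process, which should now register
    rpcinfo -p localhost | grep -E 'nfs|mountd'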
15:53 glusterbot New news from resolvedglusterbugs: [Bug 1095596] Stick to IANA standard while allocating brick ports <https://bugzilla.redhat.com/show_bug.cgi?id=1095596>
15:56 vpshastry joined #gluster
15:57 daMaestro joined #gluster
15:59 chirino joined #gluster
16:09 glusterbot New news from newglusterbugs: [Bug 1095596] Stick to IANA standard while allocating brick ports <https://bugzilla.redhat.com/show_bug.cgi?id=1095596>
16:20 cvdyoung we have 4 files that seem to be locked, and the heal is never able to heal those files.  It keeps trying to heal those 4 files, but never completes it.  I am seeing a "stale file handle" in the brick log.  I also checked the 4 files it's trying to heal, and the brick that its telling me that its trying to heal them on is not the brick where the files reside.
16:26 Mo___ joined #gluster
16:30 chirino_m joined #gluster
16:36 hybrid512 joined #gluster
16:39 glusterbot New news from newglusterbugs: [Bug 1086783] Add documentation for the Feature: qemu 1.3 - libgfapi integration <https://bugzilla.redhat.com/show_bug.cgi?id=1086783>
16:45 bit4man joined #gluster
16:47 vpshastry left #gluster
16:50 kshlm joined #gluster
16:53 jag3773 joined #gluster
16:53 mjsmith2 joined #gluster
16:56 marmalodak gluster volume status shows NFS Server on localhost                                 N/A     N       N/A
16:56 vpshastry joined #gluster
16:56 marmalodak how do I turn that on?
16:58 semiosis should be on by default.  maybe you need rpcbind/portmapper running?
16:59 marmalodak I don't know about portmapper, but rpcbind is indeed running
16:59 semiosis portmapper is what it's called on debians
16:59 marmalodak this is fedora 20
16:59 marmalodak thanks, semiosis
17:00 semiosis yw
17:00 semiosis also the kernel nfs server needs to be disabled
17:00 semiosis in order for the gluster nfs server to work
17:02 marmalodak I've turned off nfs-server
17:02 marmalodak is there a way to show volume options?
17:03 marmalodak e.g. gluster volume get nfs.server
17:03 semiosis gluster volume info $volname
17:03 semiosis may need to stop/start the volume
17:03 marmalodak nfs.disable: false
17:04 marmalodak nfs.register-with-portmap: on
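A small sketch of what semiosis means, using gv0 as the volume name (the client log earlier mentions gv0, but treat the name as an assumption; the "gluster volume get" subcommand marmalodak guesses at is not available in this release, as far as I recall). Options changed from their defaults appear at the bottom of volume info:

    gluster volume info gv0

    # nfs.disable must be 'off'/'false' for the Gluster NFS server to export the volume
    gluster volume set gv0 nfs.disable off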
17:05 vpshastry left #gluster
17:05 marmalodak stumped here, do not know what to try next
17:07 semiosis can you mount with an ,,(nfs) client
17:07 glusterbot To mount via nfs, most distros require the options, tcp,vers=3 -- Also an rpc port mapper (like rpcbind in EL distributions) should be running on the server, and the kernel nfs server (nfsd) should be disabled
17:08 semiosis could also check the nfs server log, /var/log/glusterfs/nfs.log
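For this failure mode, the interesting things are the tail of nfs.log and whether the mount/nfs programs ever made it into the portmapper; a small sketch to run on the gluster server:

    tail -n 30 /var/log/glusterfs/nfs.log
    # a message about failing to register with portmap points back at rpcbind

    rpcinfo -p localhost        # should list mountd and nfs if registration worked
    showmount -e localhost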
17:08 semiosis afk
17:09 sjusthome joined #gluster
17:11 ndk joined #gluster
17:11 mjsmith2 joined #gluster
17:12 marmalodak http://pastebin.com/A7F8ier7
17:12 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
17:13 marmalodak http://fpaste.org/103871/
17:13 glusterbot Title: #103871 Fedora Project Pastebin (at fpaste.org)
17:16 kkeithley rpcbind.service is enabled and started?
17:16 vpshastry1 joined #gluster
17:16 kkeithley `systemctl enable rpcbind && systemctl start rpcbind`
17:16 marmalodak systemctl reports it as active and running
17:17 kkeithley and /var/run/rpc.statd.pid exists and has a valid PID?
17:18 marmalodak kkeithley: yes, pid 2202
17:19 marmalodak # ps -o command -p 2202
17:19 marmalodak /sbin/rpc.statd
17:20 marmalodak rpcuser   2202  0.0  0.0  50760  7908 ?        Ss   09:21   0:00 /sbin/rpc.statd
17:20 kkeithley okay, I'm puzzled by the log says it can't open the pid file. What are the permissions on the pid file. And the glusterfs and glusterfsd processes are running as root, right?
17:21 marmalodak -rw-r--r--   1 rpcuser rpcuser    5 May 21 09:21 rpc.statd.pid
17:22 marmalodak glusterfs glusterfsd glusterd are running as root
17:23 scuttle_ joined #gluster
17:23 marmalodak http://fpaste.org/103876/
17:23 glusterbot Title: #103876 Fedora Project Pastebin (at fpaste.org)
17:24 kkeithley yep, it all looks normal
17:24 kkeithley oh, I bet you have selinux turned on, right?
17:25 kkeithley try turning that off.
17:25 marmalodak getenforce returns  Disabled
17:25 kkeithley hmmm
17:26 kkeithley did you reboot after disabling it?
17:26 marmalodak yes
17:27 kkeithley I'm running out of ideas. ;-)
17:27 kkeithley all the obvious ones anyway
17:27 marmalodak thanks for trying
17:27 rotbeard joined #gluster
17:27 marmalodak not sure what to try next
17:28 marmalodak should the gluster volume be mounted globally?
17:29 semiosis nfs.log says it couldn't register with portmap.  why?
17:29 marmalodak this is the last part of my fstab http://fpaste.org/103880/69337314/
17:29 glusterbot Title: #103880 Fedora Project Pastebin (at fpaste.org)
17:30 marmalodak where /dev/sdb1 is the disk on which this brick lives
17:30 hagarth joined #gluster
17:31 semiosis marmalodak: that second fstab line looks malformed, a comma on the type
17:31 semiosis but none of this should affect registering with portmap, which is the error in the log
17:32 marmalodak semiosis: thanks
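semiosis's point is that the filesystem-type field in fstab must be a single word with no trailing comma. A hedged illustration of how the two lines in question might look when well-formed; the device, filesystem type, mount points and volume name are all made up for the example:

    # the disk the brick lives on
    /dev/sdb1        /export/brick1    xfs         defaults          0 0
    # the gluster volume mounted back on the same host over FUSE
    localhost:/gv0   /mnt/gv0          glusterfs   defaults,_netdev  0 0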
17:34 semiosis marmalodak: maybe try restarting rpcbind
17:36 semiosis restart rpcbind, then restart glusterd
17:40 zerick joined #gluster
17:41 marmalodak semiosis: it looks like gluster failed to come back up
17:41 marmalodak http://fpaste.org/103882/06940751/
17:41 glusterbot Title: #103882 Fedora Project Pastebin (at fpaste.org)
17:42 mjsmith2 joined #gluster
17:49 semiosis glustershd.log is the wrong log file, that's for the self heal daemon.  the glusterd log file is etc-glusterfs-glusterd.log
17:51 marmalodak http://fpaste.org/103887/
17:51 glusterbot Title: #103887 Fedora Project Pastebin (at fpaste.org)
17:52 semiosis hmm, that doesnt look too bad.  does gluster volume info work?
17:52 semiosis that will try to connect to glusterd on localhost
17:53 semiosis how about volume status?  is the nfs server running now?
17:54 semiosis what version of glusterfs is this?
17:54 semiosis and why does your glusterd.vol file have option transport-type rdma?
17:55 marmalodak http://fpaste.org/103890/
17:55 glusterbot Title: #103890 Fedora Project Pastebin (at fpaste.org)
17:55 semiosis my glusterd.vol file for 3.4.2 looks like this: http://pastie.org/9196660
17:55 marmalodak glusterfs-server-3.5.0-3.fc20.x86_64
17:55 marmalodak I don't know about rdma,
17:55 semiosis hmm ok, still unsure about that rdma option
17:56 semiosis somehow systemd is out of sync with the actual process
17:56 semiosis it's clearly running but looks like systemd thinks it's dead
17:56 semiosis idk what to do about that, but i would try killing glusterd with kill & attempting to start it with systemctl
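A minimal sketch of that suggestion for a systemd host like this one. Note that stopping glusterd does not stop the brick (glusterfsd) or NFS processes, so the volume keeps serving data while the management daemon is bounced:

    pgrep -x glusterd           # confirm the stray process is there
    pkill -x glusterd
    systemctl start glusterd
    systemctl status glusterd   # systemd should now agree that it is running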
17:57 theron joined #gluster
17:57 semiosis would love to stay & chat but have to go to the dentist :/
17:57 semiosis good luck
17:58 marmalodak thanks for the help semiosis
18:03 jbd1 joined #gluster
18:07 davinder joined #gluster
18:10 glusterbot New news from newglusterbugs: [Bug 1099986] Bad memcpy and buffer modification gf_history_changelog_next_change <https://bugzilla.redhat.com/show_bug.cgi?id=1099986>
18:10 vpshastry1 left #gluster
18:24 Pupeno joined #gluster
18:32 chirino joined #gluster
18:34 edward1 joined #gluster
18:36 wushudoin joined #gluster
18:45 tryggvil joined #gluster
18:48 B21956 joined #gluster
18:48 cvdyoung Anyone know how to fix a heal that's been trying to heal the same 4 files all day?  the files reside on a different brick, and the heal is trying to heal them on the wrong one.  Thanks!
18:49 rahulcs joined #gluster
18:50 JoeJulian Interesting. I can think of a couple possibilities. Are those 4 files opened and locked by some other application?
18:51 cvdyoung No, I thought the same thing.  The heal is saying they are on brick1a, but they are on 2a.... strange
18:53 JoeJulian The other idea that comes to mind is that there's a gfid link in $brick/.glusterfs/indices/xattrop on 1a...
18:53 JoeJulian Did you check glustershd.log for errors?
18:55 cvdyoung let me look
18:58 _dist joined #gluster
19:02 rahulcs joined #gluster
19:07 Pupeno joined #gluster
19:07 ira joined #gluster
19:08 rahulcs joined #gluster
19:29 davinder2 joined #gluster
19:34 abyss_ joined #gluster
19:39 rahulcs_ joined #gluster
19:41 Pupeno joined #gluster
19:48 kmai007 joined #gluster
19:55 KennethWilke howdy guys, i have a newbie question about gluster: i just setup a pair and peered one to another; my primary server lists its peer by hostname and the peer lists the primary by IP address. If i want consistency across servers should i peer by hostname or do i need reverse dns records?
19:55 KennethWilke should i peer by ip for consistency* rather
19:55 semiosis ,,(hostnames)
19:55 glusterbot Hostnames can be used instead of IPs for server (peer) addresses. To update an existing peer's address from IP to hostname, just probe it by name from any other peer. When creating a new pool, probe all other servers by name from the first, then probe the first by name from just one of the others.
19:56 semiosis we usually recommend using hostnames rather than IPs for peer addresses
19:56 n0de ^^
19:56 KennethWilke yeah i wanted to go with hostnames, but wanted to be consistent throughout and not have a mix of ips and hostnames
19:57 JoeJulian note the last sentence in that factoid
19:57 KennethWilke does gluster do reverse dns lookups on the ip?
19:57 semiosis so just probe back to the first host from the second, by hostname, and it will update
19:58 KennethWilke alrighty, i'll assume it does not and probe from everywhere
19:58 JoeJulian Not everywhere, just one other.
19:59 KennethWilke while i only have two peers at the moment i'm building automation that does not enforce that expectation
19:59 JoeJulian The first server does not use reverse dns, wouldn't know if you wanted to use shortnames or fqdn if it did, so you have to set the hostname for that first server by probing from one single other server.
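The factoid above, spelled out as commands. The hostnames are placeholders; the key detail is the probe back to the first server, which is what replaces its IP address with a name in everyone's peer list:

    # from server1, probe every other peer by name
    gluster peer probe server2.example.com

    # from exactly one other peer, probe server1 by name so its entry becomes a hostname
    gluster peer probe server1.example.com

    # both sides should now list hostnames
    gluster peer status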
19:59 JoeJulian @puppet
19:59 glusterbot JoeJulian: https://github.com/purpleidea/puppet-gluster
19:59 KennethWilke @salt
19:59 glusterbot KennethWilke: I do not know about 'salt', but I do know about these similar topics: 's3', 'samba', 'swift'
19:59 JoeJulian Oooh, salt.
20:00 KennethWilke yar! it makes me happy inside
20:00 JoeJulian I would still recommend examining purpleidea's puppet module. He handles a lot of corner cases.
20:00 semiosis @pepper
20:00 glusterbot semiosis: go away
20:00 JoeJulian lol
20:00 KennethWilke lol
20:00 semiosis @forget pepper
20:00 glusterbot semiosis: The operation succeeded.
20:02 KennethWilke i'm looking to improve salt's gluster integration as part of this too, so that's a good resource to have at hand
20:02 KennethWilke http://docs.saltstack.com/en/latest/ref/states/all/salt.states.glusterfs.html#module-salt.states.glusterfs this is what they have in there so far
20:02 glusterbot Title: 21.25.24. salt.states.glusterfs (at docs.saltstack.com)
20:03 tdasilva left #gluster
20:07 awayne_ joined #gluster
20:07 awayne_ hello all
20:08 awayne_ we've deleted some files in a volume (we're using swift over gluster, so deleted through swift), but for whatever reason the space isn't being reclaimed. is there something we're missing?
20:10 rahulcs_ joined #gluster
20:10 theron joined #gluster
20:13 sroy joined #gluster
20:22 ira joined #gluster
20:24 hflai joined #gluster
20:28 sjm left #gluster
20:30 badone joined #gluster
20:39 theron_ joined #gluster
20:45 jmarley joined #gluster
20:45 jmarley joined #gluster
20:49 theron joined #gluster
20:51 rahulcs joined #gluster
20:53 badone joined #gluster
21:11 gdubreui joined #gluster
21:12 JoeJulian perseverance?
21:22 purpleidea KennethWilke: (cc:JoeJulian) feel free to ping me with puppet-gluster questions... it's got a whole lot of new features now too.
21:24 KennethWilke thanks purpleidea
21:26 swebb joined #gluster
21:27 purpleidea KennethWilke: np
21:28 jmarley joined #gluster
21:28 jmarley joined #gluster
21:30 sroy__ joined #gluster
21:38 edward1 joined #gluster
21:40 glusterbot New news from newglusterbugs: [Bug 1100050] Can't write to quota enable folder <https://bugzilla.redhat.com/show_bug.cgi?id=1100050>
22:03 theron joined #gluster
22:26 kkeithley1 joined #gluster
22:27 edong23_ joined #gluster
22:30 partner_ joined #gluster
22:30 k3rmat joined #gluster
22:31 RobertLaptop joined #gluster
22:34 social__ joined #gluster
22:34 tjikkun_work_ joined #gluster
22:35 ueberall joined #gluster
22:35 ueberall joined #gluster
22:35 portante joined #gluster
22:35 foobar joined #gluster
22:36 m0zes joined #gluster
22:36 [o__o] joined #gluster
22:36 Slasheri joined #gluster
22:36 Slasheri joined #gluster
22:36 basso_ joined #gluster
22:38 qdk_ joined #gluster
22:39 harish joined #gluster
22:40 asku joined #gluster
22:43 saltsa_ joined #gluster
22:47 primusinterpares joined #gluster
22:47 xymox_ joined #gluster
22:47 nixpanic_ joined #gluster
22:47 nixpanic_ joined #gluster
22:48 foster_ joined #gluster
22:49 hchiramm_ joined #gluster
22:54 decimoe joined #gluster
22:56 lmickh joined #gluster
22:56 edong23 joined #gluster
22:56 sadbox joined #gluster
22:56 dblack_ joined #gluster
22:57 Intensity joined #gluster
22:58 hflai joined #gluster
23:05 DV joined #gluster
23:09 crashmag joined #gluster
23:10 B21956 joined #gluster
23:10 marcoceppi joined #gluster
23:10 marcoceppi joined #gluster
23:11 kkeithley joined #gluster
23:11 qdk_ joined #gluster
23:12 jcsp joined #gluster
23:19 XpineX_ joined #gluster
23:21 Ark joined #gluster
23:21 harish joined #gluster
23:27 XpineX__ joined #gluster
23:38 chirino_m joined #gluster
23:41 MugginsM joined #gluster
23:42 MugginsM so, now our gluster is seeing awful performance under relatively light load :-/
23:43 JoeJulian what changed?
23:44 MugginsM I don't know, possibly our usage patterns
23:44 MugginsM getting READ latencies of 22,000us
23:44 MugginsM disks and cpu all show light usage
23:44 JoeJulian 22 and 0 are pretty good.
23:44 MugginsM no, 22000
23:45 JoeJulian Oh you people and your oxford commas instead of decimals.
23:45 MugginsM we have a lot of subdirectories recently, wonder if that's related
23:45 MugginsM 6000 subdirs in a commonly used place
23:45 semiosis 22,000us is like 22ms?
23:46 MugginsM nuts, I get us and ms the wrong way around a lot, maybe that's not causing the slow
23:46 semiosis idk what normal latency is for your disks
23:46 MugginsM doing an "ls" on the servers own mount takes 30 seconds or so
23:46 semiosis i think rotating media is ~5ms, iirc
23:46 MugginsM clients are getting timeouts
23:47 MugginsM dropped off quite suddenly about an hour ago
23:47 JoeJulian I would start with memory usage
23:47 MugginsM I've just restarted one of the (two, replicated) server daemons
23:47 MugginsM heh, it's using 6G of it's 128G
23:48 JoeJulian Oh good.
23:48 JoeJulian So you're not in swap.
23:48 MugginsM it's a 32 core, with load avg of 0.1  (so that's 1/10th of a core)
23:48 MugginsM iowait is effectively 0 on net and disk
23:49 MugginsM it froze a couple of days ago. had to restart both server daemons to get it back
23:49 MugginsM worried it's about to happen again
23:49 MugginsM ver 3.4.2
23:49 JoeJulian Network hardware?
23:50 MugginsM broadcom BCM5719 with latest firmware. running 2 1GB ports bonded
23:50 MugginsM Ubuntu Precise OS (kernel 3.11 to get the best broadcom driver we could)
23:51 MugginsM self-heal daemon is seeing a dozen or so gfid's it can't find files for
23:51 MugginsM but nothing else suspicious
23:53 MugginsM we've got 10 or so clients, trying to hit it as hard as they can, and each seeing less than 1/10th of the perf they usually get
23:53 MugginsM but no obvious bottleneck :-/
23:53 JoeJulian I would probably look at "gluster volume profile" to start figuring that out. Right now I have to go catch a train.
23:53 MugginsM is 6000 directories in one parent likely to cause issues?
23:53 JoeJulian ... and maybe wireshark
23:54 MugginsM yeah, am at the moment
23:54 MugginsM thanks
23:54 JoeJulian Not with that kernel version.
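A short sketch of the profiling JoeJulian points at, with a hypothetical volume name. Profile info reports per-brick call counts and average/min/max latency for each file operation, which is usually enough to see whether the time is going to LOOKUPs, READs, or something else:

    gluster volume profile myvol start
    # ... let the slow workload run for a minute or two ...
    gluster volume profile myvol info
    gluster volume profile myvol stop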
23:59 MugginsO joined #gluster
