
IRC log for #gluster, 2014-08-01


All times shown according to UTC.

Time Nick Message
00:02 sputnik13 joined #gluster
00:06 Pupeno joined #gluster
00:14 overclk joined #gluster
00:18 sputnik13 joined #gluster
00:24 overclk joined #gluster
00:25 sputnik13 joined #gluster
00:44 gildub joined #gluster
00:56 xleo joined #gluster
01:04 cjhanks joined #gluster
01:19 DV joined #gluster
01:39 meghanam_ joined #gluster
01:40 meghanam joined #gluster
02:01 stickyboy_ joined #gluster
02:01 plarsen_ joined #gluster
02:01 JoeJulian_ joined #gluster
02:01 stickyboy joined #gluster
02:01 meghanam_ joined #gluster
02:18 ryant joined #gluster
02:21 luckyinva joined #gluster
02:22 ryant Has anyone had any experience running gluster on an s3backer or s3ql based storage backend?
02:23 ryant I ask because I'm running gluster in EC2 in a distributed, replicated mode and am finding it to be too unreliable, as there are too many times we're running into split-brain type problems.  I'm hoping to drop replication and rely on a more robust storage backend to get a more reliable gluster.
02:24 ryant any experience along those lines here?
02:41 stickyboy joined #gluster
02:43 cjhanks joined #gluster
02:44 cjhanks !balance
02:44 cjhanks left #gluster
03:00 gEEbusT joined #gluster
03:00 harish_ joined #gluster
03:02 dtrainor joined #gluster
03:02 nullck joined #gluster
03:02 anoopcs joined #gluster
03:02 rjoseph joined #gluster
03:03 anoopcs joined #gluster
03:07 atrius joined #gluster
03:52 ilbot3 joined #gluster
03:52 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
03:52 nishanth joined #gluster
03:55 shubhendu joined #gluster
04:01 nbalachandran joined #gluster
04:19 Rafi_kc joined #gluster
04:24 anoopcs joined #gluster
04:36 kanagaraj joined #gluster
04:39 ppai joined #gluster
04:47 ndarshan joined #gluster
04:47 meghanam joined #gluster
04:51 bharata-rao joined #gluster
04:52 kanagaraj joined #gluster
04:53 wushudoin joined #gluster
04:57 kdhananjay joined #gluster
05:04 cjhanks joined #gluster
05:04 cjhanks left #gluster
05:05 karnan joined #gluster
05:09 kdhananjay joined #gluster
05:11 jiffin joined #gluster
05:22 prasanth_ joined #gluster
05:29 ricky-ticky joined #gluster
05:37 hagarth joined #gluster
05:38 dusmant joined #gluster
05:41 deepakcs joined #gluster
05:41 overclk joined #gluster
05:42 raghu joined #gluster
05:45 overclk joined #gluster
05:56 ramteid joined #gluster
05:58 vpshastry joined #gluster
05:59 dusmant joined #gluster
06:04 sahina joined #gluster
06:07 kumar joined #gluster
06:14 overclk joined #gluster
06:21 lalatenduM joined #gluster
06:36 cjhanks_ joined #gluster
06:37 deepakcs_ joined #gluster
06:39 deepakcs joined #gluster
06:43 mikitochi_ joined #gluster
06:44 aravindavk joined #gluster
06:45 glusterbot New news from newglusterbugs: [Bug 1125814] Fix error code to return of posix_fsync <https://bugzilla.redhat.com/show_bug.cgi?id=1125814>
06:54 kanagaraj joined #gluster
06:54 ramteid joined #gluster
06:54 cjhanks_ left #gluster
06:58 getup- joined #gluster
06:59 psharma joined #gluster
07:05 psharma joined #gluster
07:06 LebedevRI joined #gluster
07:06 dtrainor_ joined #gluster
07:11 ctria joined #gluster
07:15 rastar joined #gluster
07:17 mikitochi__ joined #gluster
07:19 nbalachandran joined #gluster
07:20 aravindavk joined #gluster
07:20 rtalur_ joined #gluster
07:21 dusmant joined #gluster
07:21 haomaiwang joined #gluster
07:29 ekuric joined #gluster
07:30 rjoseph joined #gluster
07:36 ppai joined #gluster
07:49 Pupeno joined #gluster
07:55 ppai joined #gluster
07:57 aravindavk joined #gluster
07:57 doekia joined #gluster
07:57 doekia_ joined #gluster
08:00 nbalachandran joined #gluster
08:03 rjoseph joined #gluster
08:06 liquidat joined #gluster
08:07 dusmant joined #gluster
08:23 nishanth joined #gluster
08:35 fengkun02 joined #gluster
08:35 fengkun02 i have a problem, who can help me
08:36 fengkun02 # gluster v replace-brick test 10.96.35.33:/home/brick 10.96.45.43:/home/brick status
08:36 fengkun02 volume replace-brick: failed: Another transaction could be in progress. Please try again after sometime.
08:36 fengkun02 # gluster pool list
08:36 fengkun02 UUID                                    Hostname        State
08:36 fengkun02 ec129821-66f6-429d-a247-394da3c77a5e    10.96.45.43     Connected
08:36 fengkun02 d78d2718-c4a8-40be-9871-5cab6f5d53fc    10.96.33.18     Connected
08:36 fengkun02 7e9dbe48-1377-4e9b-bd69-2c4fae458363    10.96.35.33     Connected
08:36 fengkun02 8dea3b10-f785-4f67-9248-92962a0000b5    10.96.33.20     Connected
08:36 fengkun02 8dea3b10-f785-4f67-9248-92962a0000b5    10.96.33.20     Connected
08:37 fengkun02 7e9dbe48-1377-4e9b-bd69-2c4fae458363    10.96.35.33     Connected
08:37 fengkun02 ff965fc1-5bfc-48c2-bcc0-cfeecf9229b0    10.96.33.18     Connected
08:37 fengkun02 ec129821-66f6-429d-a247-394da3c77a5e    10.96.45.43     Connected
08:37 fengkun02 76688117-cf9d-4699-808a-c7852feb9f50    localhost       Connected
08:37 fengkun02 many entries repeat
08:37 fengkun02 why?
08:41 kanagaraj_ joined #gluster
08:45 glusterbot New news from newglusterbugs: [Bug 1125843] geo-rep: changelog_register fails when geo-rep started after session creation. <https://bugzilla.redhat.com/show_bug.cgi?id=1125843>
08:47 fengkun02_ joined #gluster
08:48 vpshastry1 joined #gluster
08:50 fsimonce joined #gluster
09:02 vimal joined #gluster
09:07 wgao joined #gluster
09:21 LebedevRI joined #gluster
09:22 shubhendu_ joined #gluster
09:31 Slashman joined #gluster
09:39 vpshastry joined #gluster
09:40 nbalachandran joined #gluster
09:46 glusterbot New news from newglusterbugs: [Bug 1123294] [FEAT] : provide an option to set glusterd log levels other than command line flag <https://bugzilla.redhat.com/show_bug.cgi?id=1123294>
09:48 edward1 joined #gluster
09:51 dusmant joined #gluster
09:51 kanagaraj joined #gluster
09:55 ndarshan joined #gluster
09:57 ira joined #gluster
09:57 meghanam joined #gluster
09:58 Philambdo joined #gluster
09:59 rtalur_ joined #gluster
10:05 haomai___ joined #gluster
10:05 nthomas joined #gluster
10:05 ira_ joined #gluster
10:06 Slashman_ joined #gluster
10:11 Humble joined #gluster
10:15 LebedevRI joined #gluster
10:19 vpshastry1 joined #gluster
10:19 ppai joined #gluster
10:21 qdk joined #gluster
10:28 aravindavk joined #gluster
10:34 violuke joined #gluster
10:35 violuke Hi, this might be a silly question, but how can I fix a split-brain on /
10:35 violuke Unable to self-heal contents of '/' (possible split-brain). Please delete the file from all but the preferred subvolume
10:37 ndarshan joined #gluster
10:50 nbalachandran joined #gluster
10:52 dusmant joined #gluster
10:58 diegows joined #gluster
11:00 spandit joined #gluster
11:16 glusterbot New news from newglusterbugs: [Bug 1101111] [RFE] Add regression tests for the component geo-replication <https://bugzilla.redhat.com/show_bug.cgi?id=1101111>
11:20 meghanam joined #gluster
11:26 kkeithley joined #gluster
11:39 ndarshan joined #gluster
11:49 calum_ joined #gluster
11:51 the-me joined #gluster
12:01 nthomas joined #gluster
12:02 sijis so i can't replace a brick if the volume isn't started?
12:03 haomaiwa_ joined #gluster
12:12 ricky-ti1 joined #gluster
12:16 harish_ joined #gluster
12:19 violuke joined #gluster
12:24 luckyinva joined #gluster
12:25 bala joined #gluster
12:38 rejy joined #gluster
12:44 ppai joined #gluster
12:45 Norky joined #gluster
13:07 hagarth joined #gluster
13:10 ctria joined #gluster
13:16 nbalachandran joined #gluster
13:27 bennyturns joined #gluster
13:28 chirino joined #gluster
13:39 mkzero joined #gluster
13:44 bala joined #gluster
13:46 tdasilva joined #gluster
13:46 Maya__ joined #gluster
13:48 sputnik13 joined #gluster
13:49 hagarth joined #gluster
14:01 rwheeler joined #gluster
14:06 ctria joined #gluster
14:16 DV joined #gluster
14:25 vpshastry joined #gluster
14:25 coredump joined #gluster
14:30 wushudoin joined #gluster
14:30 plarsen_ joined #gluster
14:30 plarsen joined #gluster
14:31 sjm joined #gluster
14:42 pasqd hi
14:42 glusterbot pasqd: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
14:43 pasqd where is the gluster cache stored for volumes?
14:46 ndevos pasqd: what do you mean with "gluster cache"?
14:53 cultav1x joined #gluster
14:53 sputnik13 joined #gluster
15:00 recidive joined #gluster
15:02 chirino joined #gluster
15:04 nueces joined #gluster
15:05 pasqd i mean performance.cache-size 1GB
15:05 cjhanks joined #gluster
15:05 pasqd i have replicated volume on 1gbps network
15:05 cjhanks left #gluster
15:06 pasqd and i really need to speed up reads a little bit
15:06 pasqd so i want to have a cache on the machine where the mount points are
15:06 pasqd my replicated volume is a backend for openstack
15:06 pasqd i have /var/lib/nova/instances mounted
15:07 pasqd and now i want to keep some files in my openstack box for read
15:07 sputnik13 joined #gluster
15:11 daMaestro joined #gluster
15:13 violuke joined #gluster
15:20 jbd123 joined #gluster
15:30 ndevos pasqd: such caching is all done in-memory, if you want caches on a local disk, you could look into using NFS+fscache+cachefilesd
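A minimal sketch of the NFS+fscache+cachefilesd approach ndevos mentions, assuming an Ubuntu client, a hypothetical server "gluster1" and volume "myvol", and gluster's built-in NFSv3 server:

    # install and enable the local-disk cache daemon (package name assumed)
    apt-get install cachefilesd
    echo 'RUN=yes' >> /etc/default/cachefilesd    # Debian/Ubuntu gate the daemon here
    service cachefilesd start

    # mount over NFSv3 with the fsc option so reads are cached on local disk
    mount -t nfs -o vers=3,fsc gluster1:/myvol /var/lib/nova/instances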
15:30 sputnik13 joined #gluster
15:37 hybrid512 joined #gluster
15:42 xleo joined #gluster
15:42 ndk joined #gluster
15:46 plarsen joined #gluster
15:46 uebera|| joined #gluster
15:46 uebera|| joined #gluster
15:59 jbd123 joined #gluster
16:00 plarsen joined #gluster
16:06 dtrainor_ joined #gluster
16:19 dtrainor_ joined #gluster
16:21 chirino joined #gluster
16:22 chirino_m joined #gluster
16:39 sputnik13 joined #gluster
16:43 Peter1 joined #gluster
16:44 Peter1 how should i upgrade 3.5.1 to 3.5.2 with min user disruption?
16:44 Peter1 I have replica 2 and distribute volumes
16:44 Peter1 quota and NFS
16:45 Peter1 i am on ubuntu
16:45 JoeJulian violuke: reset the ,,(extended attributes) for trusted.afr.* to all zeros on the brick roots.
16:45 glusterbot violuke: (#1) To read the extended attributes on the server: getfattr -m .  -d -e hex {filename}, or (#2) For more information on how GlusterFS uses extended attributes, see this article: http://hekafs.org/index.php/2011/04/glusterfs-extended-attributes/
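A minimal sketch of the reset JoeJulian describes, assuming a hypothetical volume "myvol" replicated across two bricks and a brick root of /export/brick1; this is run against the brick path on each server, never the client mount:

    # inspect the pending-change counters on the brick root
    getfattr -m . -d -e hex /export/brick1

    # zero every trusted.afr.* attribute that getfattr listed, for example:
    setfattr -n trusted.afr.myvol-client-0 -v 0x000000000000000000000000 /export/brick1
    setfattr -n trusted.afr.myvol-client-1 -v 0x000000000000000000000000 /export/brick1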
16:45 Peter1 should i do a apt-get upgrade then restart glusterfs-server ?
16:45 Peter1 or shutdown glusterfs-server and process and then upgrade?
16:46 JoeJulian Historically, we upgrade the clients first, then the servers.
16:46 Peter1 right
16:46 Peter1 then for the server?
16:47 violuke JoeJulian: thanks, I’ll give that a try
16:47 JoeJulian I would shutdown glusterfs-server, pkill -f glusterfs (unless you have client mounts on the server) upgrade the package, then start glusterfs-server
16:47 JoeJulian The reason for stopping any software before upgrading is to prevent segfaults with mmapped libraries.
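A minimal sketch of that server-side sequence, assuming Ubuntu packages from the gluster PPA (package names assumed) and upgrading one replica server at a time:

    service glusterfs-server stop
    pkill -f glusterfs        # skip this if the server also has client mounts you need
    apt-get update
    apt-get install glusterfs-server glusterfs-client glusterfs-common
    service glusterfs-server start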
16:48 vpshastry joined #gluster
16:48 Peter1 ic
16:48 Peter1 i will take that
16:48 Peter1 so for the distribute volume, some files will disappear during upgrade
16:48 JoeJulian Right
16:48 Peter1 ok
16:48 Peter1 thanks JoeJulian!
16:49 Peter1 is 3.5.2 out yet for ubuntu?
16:49 JoeJulian semiosis: ^
16:49 semiosis i know
16:49 semiosis i'm sorry
16:50 Peter1 no sorry, you have been great! :)
16:50 Peter1 please remember to package the heal info  :)
16:50 semiosis ok
16:52 quaziland009 joined #gluster
16:54 JoeJulian heal info isn't something that needs packaged!
16:55 * semiosis one step closer already
16:57 Peter1 3.5 ubuntu debs don't have /usr/bin/glfsheal
16:57 Peter1 https://bugzilla.redhat.com/show_bug.cgi?id=1113778
16:57 glusterbot Bug 1113778: medium, unspecified, ---, pkarampu, ASSIGNED , gluster volume heal info keep reports "Volume heal failed"
16:58 JoeJulian lol...
16:58 JoeJulian CRS disease...
16:59 plarsen joined #gluster
17:00 JoeJulian It's hilarious that I'm all wondering what the heck you're talking about and *I* am the one that added the note that explained it.
17:00 JoeJulian <sigh>
17:00 JoeJulian I need a vacation.
17:00 plarsen joined #gluster
17:00 JoeJulian semiosis: When's a good time of year to come to Miami?
17:01 semiosis february
17:02 JoeJulian I don't suppose anyone has any thoughts on diagnosing this:
17:02 JoeJulian qemu-syst 14096 libvirt-qemu   14u   REG               0,29   2686844928 10842811032539920428 /var/lib/nova/instances/7b02610c-cd2c-4dca-b340-c7e51b584199/disk (deleted)
17:03 JoeJulian Notice it says the image is deleted.... it's not.
17:03 Peter1 inode still exist?
17:04 JoeJulian ls -li on that file shows the same inode
17:05 Peter1 maybe the file was deleted once then recreated with the same name?
17:05 luckyinva joined #gluster
17:06 JoeJulian I /think/ it may have been migrated as part of a rebalance. I just wish I knew for sure what this means.
17:06 JoeJulian I guess I'll have to look at lsof to see where that flag comes from.
17:08 Maya_ joined #gluster
17:13 zerick joined #gluster
17:15 bala joined #gluster
17:24 sputnik13 joined #gluster
17:34 tg2 joined #gluster
17:35 bala joined #gluster
17:36 sputnik13 joined #gluster
17:38 lmickh joined #gluster
17:41 gehaxelt_ joined #gluster
17:47 wushudoin joined #gluster
17:50 gehaxelt_ joined #gluster
17:58 kkeithley ,,(ports)
17:58 glusterbot glusterd's management port is 24007/tcp and 24008/tcp if you use rdma. Bricks (glusterfsd) use 24009 & up for <3.4 and 49152 & up for 3.4. (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111 and 2049 since 3.4.
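A minimal iptables sketch of those ports, assuming gluster 3.4+ and room for up to 50 bricks per server (hypothetical range; widen it to match your layout):

    iptables -A INPUT -p tcp --dport 24007:24008 -j ACCEPT   # glusterd management (+ rdma)
    iptables -A INPUT -p tcp --dport 49152:49201 -j ACCEPT   # brick ports, 49152 and up
    iptables -A INPUT -p tcp --dport 38465:38468 -j ACCEPT   # gluster NFS + NLM
    iptables -A INPUT -p tcp --dport 111 -j ACCEPT           # rpcbind/portmap
    iptables -A INPUT -p udp --dport 111 -j ACCEPT
    iptables -A INPUT -p tcp --dport 2049 -j ACCEPT          # NFS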
18:03 _dist joined #gluster
18:05 vpshastry joined #gluster
18:05 sputnik13 joined #gluster
18:05 _dist JoeJulian: When you're free, I'm still getting "SETXATTR" operation not permitted on several files, these files do not have the "third" afr there. I'm wondering if you know a way I can tell what it's trying to set? Either way I'll add it to my bug report
18:08 luckyinva Unable to resolve the following issue - gluster will not start /
18:08 luckyinva [2014-08-01 17:03:23.761552] I [glusterfsd.c:1959:main] 0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version 3.5.1 (/usr/sbin/glusterd --pid-file=/var/run/glusterd.pid)
18:08 luckyinva [2014-08-01 17:03:23.767626] I [glusterd.c:1122:init] 0-management: Using /var/lib/glusterd as working directory
18:08 luckyinva [2014-08-01 17:03:23.772217] I [socket.c:3561:socket_init] 0-socket.management: SSL support is NOT enabled
18:08 kumar joined #gluster
18:08 luckyinva [2014-08-01 17:03:23.772284] I [socket.c:3576:socket_init] 0-socket.management: using system polling thread
18:08 luckyinva [2014-08-01 17:03:23.773476] W [rdma.c:4194:__gf_rdma_ctx_create] 0-rpc-transport/rdma: rdma_cm event channel creation failed (No such device)
18:08 luckyinva [2014-08-01 17:03:23.773494] E [rdma.c:4482:init] 0-rdma.management: Failed to initialize IB Device
18:08 luckyinva [2014-08-01 17:03:23.773503] E [rpc-transport.c:333:rpc_transport_load] 0-rpc-transport: 'rdma' initialization failed
18:08 luckyinva [2014-08-01 17:03:23.773557] W [rpcsvc.c:1521:rpcsvc_transport_create] 0-rpc-service: cannot create listener, initing the transport failed
18:08 luckyinva [2014-08-01 17:03:23.773681] I [socket.c:3561:socket_init] 0-socket.management: SSL support is NOT enabled
18:08 luckyinva [2014-08-01 17:03:23.773723] I [socket.c:3576:socket_init] 0-socket.management: using system polling thread
18:08 luckyinva [2014-08-01 17:03:28.876263] I [glusterd-store.c:1421:glusterd_restore_op_version] 0-glusterd: retrieved op-version: 30501
18:08 semiosis @kick luckyinva
18:08 luckyinva was kicked by glusterbot: semiosis
18:09 * semiosis messaged luckyinva
18:10 luckyinva joined #gluster
18:10 semiosis sorry again about the kick
18:11 semiosis please paste your logs on pastie.org or similar and drop the link here
18:11 luckyinva my apologies to the room
18:11 luckyinva will do
18:12 luckyinva testing http://pastie.org/9437157
18:12 glusterbot Title: #9437157 - Pastie (at pastie.org)
18:13 luckyinva I have researched this and looked into or read recommendations/solutions but nothing Im seeing or reading solves this for me
18:13 luckyinva Im not lazy but I am in a bind
18:13 JoeJulian Yeah, there's nothing useful in the default logging level. Someone should file a bug report...
18:13 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
18:13 ron-slc joined #gluster
18:13 JoeJulian Try glusterd -d and see what it says.
18:15 luckyinva @joe.. Im sorry Im not doing something correct?
18:16 JoeJulian You're fine. The program isn't telling us what failed.
18:16 luckyinva tailing log while using service glusterd start
18:16 JoeJulian Try running glusterd in debug mode, ie. "glusterd -d" and see if that actually tells you what's wrong.
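A minimal sketch of running glusterd in the foreground with debug logging (exact flags can vary by version):

    service glusterd stop
    glusterd --debug                  # stays in the foreground, logs at DEBUG to the console
    # or keep the log in the usual file but raise the level:
    glusterd -N --log-level=DEBUG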
18:17 sputnik13 joined #gluster
18:18 luckyinva doesn’t appear i can do that with service glusterd
18:18 JoeJulian Not via service.
18:18 JoeJulian Just run it as root.
18:20 systemonkey joined #gluster
18:21 _dist JoeJulian: I updated the bug to show current status if you're curious, https://bugzilla.redhat.com/show_bug.cgi?id=1125418
18:21 glusterbot Bug 1125418: high, unspecified, ---, gluster-bugs, NEW , Remove of replicate brick causes client errors
18:27 Slashman joined #gluster
18:29 luckyinva http://pastie.org/9437812
18:30 JoeJulian bad link
18:30 luckyinva oddly it says it exists on one machine
18:30 luckyinva and not on another
18:30 luckyinva working on it
18:31 gehaxelt_ joined #gluster
18:31 JoeJulian @paste
18:31 glusterbot JoeJulian: For RPM based distros you can yum install fpaste, for debian and ubuntu it's pastebinit. Then you can easily pipe command output to [f] paste [binit] and it'll give you a URL.
18:32 JoeJulian If you want to do it the easy way
18:34 luckyinva http://pastie.org/9437187
18:34 glusterbot Title: #9437187 - Pastie (at pastie.org)
18:35 luckyinva that one appears to be working as expected
18:35 JoeJulian Unable to find friend: cilospfd0003.silver.com
18:35 JoeJulian hostname resolution problem
18:38 luckyinva peer file seems fine and /etc/hosts is good / host pings good
18:38 JoeJulian what's the ip for that host?
18:39 luckyinva 10.25.166.227
18:39 JoeJulian Huh, ok.. maybe it does resolve it then...
18:40 luckyinva its up and running and will peer probe successfully back to this host also
18:40 luckyinva im at a loss
18:40 luckyinva there are 2 files on each peer that have the other hosts info
18:41 luckyinva they appear correct
18:43 JoeJulian Can you paste "gluster volume info" from a working server?
18:43 luckyinva sure
18:46 luckyinva http://pastie.org/9437208
18:46 glusterbot Title: #9437208 - Pastie (at pastie.org)
18:49 JoeJulian And you can ping, by fqdn, each of cilospfd000{2,3,4}.silver.com?
18:49 JoeJulian ... from the server that's failing of course
18:51 luckyinva will try one moment
18:53 luckyinva i can reach each host
18:54 JoeJulian How about a paste of "peer info" from two servers.
18:54 JoeJulian It looks like it's failing to find a brick host, but I can't see any reason for it.
18:54 luckyinva 0002 = .223  0003 =.227 0004=.232
18:54 luckyinva sure
18:56 _Bryan_ joined #gluster
18:57 deeville joined #gluster
19:00 luckyinva ok I see something here i didnt notice before
19:01 luckyinva one of the hostnames in the peer files comes back as an IP
19:01 luckyinva vs hostname
19:01 JoeJulian Yeah, that's a commonly missed configuration step.
19:01 luckyinva not sure why or how that happened though
19:01 JoeJulian @hostnames
19:01 glusterbot JoeJulian: Hostnames can be used instead of IPs for server (peer) addresses. To update an existing peer's address from IP to hostname, just probe it by name from any other peer. When creating a new pool, probe all other servers by name from the first, then probe the first by name from just one of the others.
19:01 JoeJulian ^ the last phrase.
19:01 luckyinva ok
19:02 JoeJulian It's in the documentation, but who reads that stuff..
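A minimal sketch of the probe sequence glusterbot describes above, assuming three hypothetical servers where server1 was the one used to build the pool:

    # from server1: probe every other peer by name (re-probing updates IP entries to hostnames)
    gluster peer probe server2.example.com
    gluster peer probe server3.example.com

    # from any one of the others: probe server1 by name so it is also stored as a hostname
    gluster peer probe server1.example.com

    gluster peer status               # every peer should now show a hostname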
19:02 deeville I'm getting quite a bit of "Stale file handle" errors in /var/log/glusterfs/glustershd.log. It's actually filling up the log. The Path is <gfid:….>. Has anyone experienced this?
19:02 luckyinva i have read a great deal of it.. especially before coming here
19:03 JoeJulian +1
19:03 ekuric joined #gluster
19:03 JoeJulian deeville: last time I saw it, I had a file system that needed fixed. Check dmesg on your servers.
19:03 luckyinva i thought is was because I found out Im going to be dad this morning.. and lost my mind
19:04 JoeJulian Oh wow! Congratulations?
19:04 luckyinva fortunately its my second one so Im not a freaked out
19:04 luckyinva as*
19:04 luckyinva pretty stoked actually
19:05 JoeJulian I found out I was going to be a Dad for the second time 9 years after a vasectomy so I'm probably much more familiar with losing my mind at that news than most.
19:06 luckyinva damn dude
19:06 luckyinva thats some serious OMG
19:06 JoeJulian yeah
19:07 jbrooks left #gluster
19:13 shubhendu_ joined #gluster
19:15 deeville JoeJulian, hmmm..can't see anything obvious in dmesg. Nothing that says fsck or xfs_check or something to that effect.
19:15 deeville the file system is mounted
19:15 JoeJulian next I would check the brick logs at the same timestamps
19:19 deeville JoeJulian, by the way, in addition to Stale file handle.."Permission denied" is mentioned. I'm thinking it's running into a permissions problem during sync, which has never happened before
19:22 JoeJulian This is like the 4th permission denied thing in as many days. I'm stumped. I have no idea how root can have permissions denied without it being a selinux/apparmor problem.
19:23 deeville I did enable root-squashing on the gluster volume..not sure if that matters
19:24 JoeJulian Try disabling that and see if it changes anything?
19:24 jbrooks joined #gluster
19:25 deeville JoeJulian, it's been running like that for a while though..and this permission denied thing came up the past week. i will try it and see
19:30 chirino joined #gluster
19:30 deeville JoeJulian, yah it's not root-squash
19:31 semiosis iptables
19:31 deeville semiosis, are you suggesting to check iptables?
19:31 deeville let me disable to see
19:31 semiosis more of a joke than a serious suggestion
19:32 semiosis i'd be shocked if that really helped
19:32 deeville by the way, I think it's related to all the entries I see in the heal-failed
19:32 deeville it's all gfid entries though which is worrisome
19:32 semiosis you might be able to use the ,,(gfid resolver) to find out what those are
19:32 glusterbot https://gist.github.com/4392640
19:33 deeville thanks semiosis
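Roughly what the gfid resolver in that gist does, assuming a hypothetical brick root /export/brick1 and a hypothetical gfid: regular files are hardlinked under .glusterfs by their first two pairs of hex digits, so matching on the inode reveals the real path (directory gfids are symlinks there, so this finds regular files only):

    GFID=aabbccdd-1234-5678-9abc-def012345678     # hypothetical gfid from the heal-failed output
    BRICK=/export/brick1
    find "$BRICK" -path "$BRICK/.glusterfs" -prune -o \
         -samefile "$BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID" -print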
19:33 luckyinva @joe thanks for the help today / I was able to append .silver.com to each file and the service started
19:34 JoeJulian interesting
19:36 deeville semiosis, do you know if heal-failed entries will cause the heal/sync to stop between bricks, similar to the way split-brain does?
19:39 Peter1 is that possible to have glusterfs client use fsc?
19:39 Peter1 cachefilesd on ubuntu
19:45 semiosis deeville: dont know
19:45 semiosis afk
19:49 bala joined #gluster
19:53 skippy joined #gluster
19:53 skippy I'm having trouble getting a RHEL 6.5 client to successfully mount a Gluster volume hosted on a RHEL7 server: https://gist.github.com/skpy/510d24a2acb177572262
19:53 glusterbot Title: gist:510d24a2acb177572262 (at gist.github.com)
19:54 skippy using the latest Gluster RPMs from the gluster repo for both systems.
19:54 skippy this seems to be the relevant line from the client mount log: "0-glusterfs: readv on 192.168.30.107:24007 failed (No data available)"
19:55 skippy but I dont really know what that's trying to tell me.
19:55 skippy the RHEL7 server can successfully mount the volume it's hosting, for what that's worth.
20:19 deeville semiosis, JoeJulian server.root-squash DOES cause issues, when I disable it, I get all those metadata self heal going….when I enable, lots of "Stale file handle" and "Permission denied"….so far that's what I'm seeing. Don't really understand why it's like this all of a sudden
20:19 JoeJulian Interesting. Please file a bug report
20:19 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
20:20 dtrainor_ joined #gluster
20:21 deeville JoeJulian, I'll do that
20:21 chirino joined #gluster
20:23 deeville JoeJulian, my glustershd.log is 23Gb which I don't think is normal
20:24 deeville this is only one one node, on the other node, it's the brick log that's 16Gb. it's a 2-node replicated setup
20:25 JoeJulian have you tried restarting it?
20:25 deeville yes :(
20:25 JoeJulian pkill -f glustershd ; service glusterd (or glusterfs-server if you're on ubuntu) restart
20:26 deeville I also rebooted the servers lol
20:28 skippy anyone have any ideas re: RHEL6.5 client unable to talk to RHEL7 server? https://gist.github.com/skpy/510d24a2acb177572262
20:28 glusterbot Title: gist:510d24a2acb177572262 (at gist.github.com)
20:28 JoeJulian iptables, selinux
20:28 skippy sorry, forgot that.  both are disabled.
20:29 deeville speaking of selinux, is permissive ok…or is disabled recommended?
20:30 JoeJulian deeville: recommended is always setenforce 1. Figure out what you need in order to do that and make it so.
20:31 JoeJulian skippy: check /var/log/glusterfs/etc-glusterfs-glusterd.vol.log on gluster1 to see what it says about the same timestamp.
20:32 skippy [2014-08-01 19:43:34.817673] E [rpcsvc.c:620:rpcsvc_handle_rpc_call] 0-glusterd: Request received from non-privileged port. Failing request
20:32 skippy even though I have allow-insecure on ?
20:32 JoeJulian You're mounting as non-root?
20:33 skippy no, I'm trying as root
20:33 JoeJulian You're mounting through nat?
20:34 skippy I am, actually.  The client is a VirtualBox VM.  Server is physical host.
20:34 JoeJulian booyah. nailed it.
20:34 JoeJulian to allow insecure to glusterd, you need to set that in /etc/glusterfs/glusterd.vol
20:34 skippy i didnt see anything re: NAT in the docs.  How's this the problem?
20:37 JoeJulian As a legacy thing, a rudimentary form of security gluster implemented early on was only allowing connections from port <=1024. NAT will take your connection and present it from whatever the next available port is on the masquerading ip.
20:38 daMaestro joined #gluster
20:38 skippy do I need to set that option on each server in the storage pool?  Or is setting it once sufficient for the whole pool?
20:42 luckyinva joined #gluster
20:42 skippy need it on all servers.
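A minimal sketch of both halves of the fix, assuming a hypothetical volume "myvol"; the volume option is set once, while the glusterd.vol edit is repeated on every server in the pool and followed by a glusterd restart:

    # per volume: let the bricks accept clients connecting from unprivileged ports
    gluster volume set myvol server.allow-insecure on

    # per server: in /etc/glusterfs/glusterd.vol, add the option to the management translator
    #   volume management
    #       type mgmt/glusterd
    #       ...
    #       option rpc-auth-allow-insecure on
    #   end-volume
    service glusterd restart          # glusterfs-server on Ubuntu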
20:46 andreask joined #gluster
20:46 skippy thanks JoeJulian!
20:47 dtrainor_ joined #gluster
20:50 Maya_ joined #gluster
20:57 skippy left #gluster
21:14 AaronGr joined #gluster
21:40 dtrainor_ joined #gluster
22:11 Eco_ joined #gluster
22:20 MacWinner joined #gluster
22:20 xavih joined #gluster
22:21 MacWinner I have 2 datacenters which each have a replica2 volume across 4 nodes.. (each datacenter has 4 nodes).. I want to use Geo-Replication between them to achieve master-master replication.. Using 3.5.2.  Any gotchas?  Do I run the risk of strange race conditions or circular replication loops?
22:22 JoeJulian Unless I missed a surprise announcement in 3.5.2 (possible though) it still doesn't do multi-master geo-replication.
22:27 MacWinner JoeJulian, oh..  I guess I mean If I setup georeplication on each side pointing to the same folder on the other side..  or should I stick with some sort of rsync solution?
22:29 Ramereth JoeJulian: random question: is it possible to mount a glusterfs volume over a NAT?
22:30 JoeJulian 42
22:30 Ramereth ha, i just read above
22:30 JoeJulian Sorry, random answer didn't match random question. ;)
22:30 JoeJulian MacWinner: nope, that wouldn't work.
22:31 JoeJulian Ramereth: http://paste.openstack.org/show/LuCfJM5RXe2AAFxBSMtj/
22:31 glusterbot Title: Paste #LuCfJM5RXe2AAFxBSMtj | LodgeIt! (at paste.openstack.org)
22:31 JoeJulian Though, clearly, the allow-insecure for glusterd needs to be added to that post.
22:31 MacWinner JoeJulian, any tips on what to do? getting near real-time synchronization between gluster volumes in 2 datacenters..  was thinking of rsync or unison
22:32 JoeJulian MacWinner: I prefer to study the arcane arts for that purpose. A little wizardry to overcome the speed of light problem...
22:33 JoeJulian But seriously, no. There's no magic bullet.
22:33 Ramereth JoeJulian: do I literally just add that option to the file or can I just add it via a glusterfs command? what's the exact thing I should add?
22:33 JoeJulian You have to add it to the correct translator in the file...
22:34 JoeJulian finding it again...
22:34 Ramereth got an example I can look at?
22:34 Ramereth and what are the implications of enabling this?
22:37 JoeJulian Oh, right.. I remember how easy it is again. :D Just add "option rpc-auth-allow-insecure on" to the management translator.
22:37 JoeJulian It then allows ports outside of 1-1024 to connect to 24007 and interface with the rpc.
22:37 Ramereth any other security implications?
22:37 JoeJulian nope
22:37 Ramereth after I edit the file, do I restart anything?
22:37 JoeJulian glusterd
22:38 JoeJulian theoretically a HUP would do it, but I'd just restart it.
22:38 Ramereth ok, if it breaks i'm blaming you ;)
22:38 JoeJulian hehe, ok
22:39 Ramereth living life on the edge... making the change on prod!
22:39 JoeJulian You can come up here and find me...
22:39 Ramereth its a bit of a drive...
22:39 JoeJulian The weather's nice.
22:39 Ramereth but a weekend in Seattle sounds nice. Probably cooler
22:39 Ramereth you buying beers?
22:39 JoeJulian I'll be up on Whidbey Island on Saturday.
22:40 Ramereth nice
22:41 Ramereth \o/ works
22:41 Ramereth i owe you a beer
22:41 * JoeJulian adds that to his tally and wonders if he could open a bar...
22:42 JoeJulian If I ever take a trip around the world collecting all the beers I've been promised, I won't remember it.
22:42 Ramereth you'd probably get alcohol poisoning
22:45 MacWinner JoeJulian, quick question.. I just did a chown operation on a bunch of files in a gluster volume that's mounted via fuse.. now the file counts in the mount are different between 2 nodes that are part of the volume..
22:46 MacWinner JoeJulian, is there some simple gluster command to see the differential?
22:54 qdk joined #gluster
22:54 julim joined #gluster
23:04 DV joined #gluster
23:06 JoeJulian Probably touched a file that hashes to a different dht subvolume than it's actually on. That would create a dht.linkto file which would throw the count off.
23:07 JoeJulian @lucky dht hash misses are expensive
23:07 glusterbot JoeJulian: http://joejulian.name/blog/dht-misses-are-expensive/
23:07 JoeJulian MacWinner: ^ That link explains dht.
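A rough way to spot the dht linkto files that explanation refers to, assuming a hypothetical brick root /export/brick1; linkto pointers are zero-length, mode 1000 (---------T) files carrying a trusted.glusterfs.dht.linkto xattr:

    find /export/brick1 -type f -perm 1000 -size 0 ! -path "*/.glusterfs/*" \
         -exec getfattr -n trusted.glusterfs.dht.linkto -e text {} \;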
23:08 MacWinner JoeJulian, just realized it was my error.. I misconfigured one of the mounts to point to a remote volume
23:29 mkzero joined #gluster
23:33 hagarth joined #gluster
