
IRC log for #gluster, 2016-01-26


All times shown according to UTC.

Time Nick Message
00:01 haomaiwa_ joined #gluster
00:08 illogik_ joined #gluster
00:16 ira joined #gluster
00:16 Logos01 JoeJulian: ... at the very least this means I now get to rebuild several of my servers onto el7.x
00:17 Logos01 And with my spiffy server hardening profile, as opposed to what mess my predecessor left me with...
00:17 JoeJulian Oh, nice. That sounds like a silver lining.
00:25 Rapture joined #gluster
00:25 baojg joined #gluster
00:30 calavera joined #gluster
00:30 monotek1 joined #gluster
00:35 Logos01 JoeJulian: Earlier, you said that /var/log/glusterfs is client only.
00:35 Logos01 But I'm seeing log messages populated there even when there are no clients mounted at all.
00:35 JoeJulian No, /var/log/glusterfs/glustershd.log is the self-heal daemon. It *is* a client as far as the way it's implemented.
00:36 Logos01 Ahh. Okay.
00:36 JoeJulian Same with nfs.log
00:36 Logos01 Yeah ... I'm *hoping* that the brick disconnect thing is what's causing the self-heal stuff to die as well.
00:36 Logos01 I'm basically stuck having to destroy and rebuild all of my gluster volumes ... again.
00:37 Logos01 (Thankfully this is only transitory data so it's not the end of the world)
00:46 baojg joined #gluster
00:48 jbrooks joined #gluster
01:01 haomaiwang joined #gluster
01:09 plarsen joined #gluster
01:23 gildub joined #gluster
01:24 Lee1092 joined #gluster
01:33 EinstCrazy joined #gluster
01:36 cyberbootje joined #gluster
02:01 liewegas joined #gluster
02:06 baojg joined #gluster
02:10 EinstCrazy joined #gluster
02:24 haomaiwa_ joined #gluster
02:28 chirino joined #gluster
02:31 EinstCrazy joined #gluster
02:32 nangthang joined #gluster
02:39 illogik joined #gluster
02:40 Rapture joined #gluster
03:00 EinstCrazy joined #gluster
03:01 haomaiwa_ joined #gluster
03:04 calavera joined #gluster
03:09 gildub joined #gluster
03:29 nehar joined #gluster
03:35 ovaistariq joined #gluster
03:36 ovaistar_ joined #gluster
03:41 gem joined #gluster
03:42 Humble joined #gluster
03:44 hchiramm joined #gluster
03:45 baojg joined #gluster
03:47 nbalacha joined #gluster
04:00 spalai joined #gluster
04:01 haomaiwang joined #gluster
04:05 enoch joined #gluster
04:16 ramteid joined #gluster
04:35 JesperA joined #gluster
04:40 monotek joined #gluster
04:56 arcolife joined #gluster
05:01 haomaiwa_ joined #gluster
05:10 overclk joined #gluster
05:23 nishanth joined #gluster
05:26 zhangjn joined #gluster
05:28 enoch joined #gluster
05:30 nbalacha joined #gluster
05:37 Manikandan joined #gluster
05:41 EinstCrazy joined #gluster
05:45 spalai joined #gluster
05:46 vimal joined #gluster
05:47 monotek joined #gluster
05:47 zhangjn joined #gluster
06:01 haomaiwang joined #gluster
06:21 spalai joined #gluster
06:24 zhangjn joined #gluster
06:25 zhangjn joined #gluster
06:26 EinstCrazy joined #gluster
06:40 zhangjn joined #gluster
06:41 EinstCrazy joined #gluster
06:55 enoch joined #gluster
06:59 unlaudable joined #gluster
07:01 haomaiwa_ joined #gluster
07:01 aravindavk joined #gluster
07:02 ovaistar_ joined #gluster
07:02 calavera joined #gluster
07:05 gem joined #gluster
07:09 SOLDIERz joined #gluster
07:12 ekuric joined #gluster
07:25 jtux joined #gluster
07:37 aravindavk joined #gluster
07:39 nangthang joined #gluster
07:41 [Enrico] joined #gluster
07:47 mhulsman joined #gluster
07:49 ccha joined #gluster
07:59 bluenemo joined #gluster
08:00 EinstCrazy joined #gluster
08:01 haomaiwa_ joined #gluster
08:09 zhangjn joined #gluster
08:13 mobaer joined #gluster
08:13 EinstCra_ joined #gluster
08:16 badone joined #gluster
08:23 ivan_rossi joined #gluster
08:23 Manikandan joined #gluster
08:29 fsimonce joined #gluster
08:40 deniszh joined #gluster
08:48 natarej_ joined #gluster
08:49 spalai joined #gluster
08:57 b0p joined #gluster
09:01 harish joined #gluster
09:01 haomaiwang joined #gluster
09:19 spalai joined #gluster
09:32 Slashman joined #gluster
09:33 Apeksha joined #gluster
09:59 zhangjn joined #gluster
10:01 haomaiwa_ joined #gluster
10:16 csaba joined #gluster
10:20 ahino joined #gluster
10:33 NuxRo joined #gluster
10:38 EinstCrazy joined #gluster
10:55 mhulsman joined #gluster
11:01 haomaiwa_ joined #gluster
11:03 skoduri joined #gluster
11:08 sc0 joined #gluster
11:15 mhulsman joined #gluster
11:28 nangthang joined #gluster
11:41 zhangjn joined #gluster
11:51 nottc joined #gluster
11:56 [Enrico] joined #gluster
12:01 haomaiwang joined #gluster
12:01 skoduri joined #gluster
12:06 nbalacha joined #gluster
12:07 MessedUpHare joined #gluster
12:26 unlaudable joined #gluster
12:27 ira joined #gluster
12:31 zhangjn joined #gluster
12:32 zhangjn joined #gluster
12:33 zhangjn joined #gluster
12:34 arcolife joined #gluster
12:34 zhangjn joined #gluster
12:35 Thomasx joined #gluster
12:35 zhangjn joined #gluster
12:38 Thomasx Hello everyone, I have a question about glusterfs: I would like to build a striped cluster with 2 servers using glusterfs. On top of that striped cluster I would like to build geo-redundancy with another glusterfs setup that also contains a striped cluster with 2 servers. Is this possible? If yes, can someone tell me how to set this up?
12:40 zhangjn joined #gluster
12:43 skoduri joined #gluster
12:58 DV joined #gluster
13:01 haomaiwa_ joined #gluster
13:01 shubhendu joined #gluster
13:03 doekia joined #gluster
13:07 chirino_m joined #gluster
13:11 spalai left #gluster
13:17 ekuric joined #gluster
13:30 doekia joined #gluster
13:47 unclemarc joined #gluster
13:50 julim joined #gluster
13:59 ahino1 joined #gluster
14:00 atinm joined #gluster
14:10 haomaiwa_ joined #gluster
14:15 doekia joined #gluster
14:21 raghu joined #gluster
14:22 anti[Enrico] joined #gluster
14:30 zhangjn joined #gluster
14:31 zhangjn joined #gluster
14:33 mobaer joined #gluster
14:35 doekia joined #gluster
14:35 sc0 joined #gluster
14:35 NTmatter joined #gluster
14:47 unclemarc joined #gluster
14:47 ahino joined #gluster
14:52 shyam joined #gluster
15:01 spalai joined #gluster
15:01 haomaiwa_ joined #gluster
15:02 drankis joined #gluster
15:02 hamiller joined #gluster
15:04 doekia joined #gluster
15:08 zhangjn joined #gluster
15:10 skylar joined #gluster
15:19 farhorizon joined #gluster
15:19 aravindavk joined #gluster
15:25 bowhunter joined #gluster
15:36 spalai left #gluster
15:40 plarsen joined #gluster
15:40 aravindavk joined #gluster
15:40 farhorizon joined #gluster
15:43 unclemarc joined #gluster
15:49 emptydir joined #gluster
15:50 emptydir Hi, I am getting this: E [MSGID: 100007] [glusterfsd.c:544:create_fuse_mount] 0-glusterfsd: Not a client process, not performing mount operation
15:50 emptydir What is the issue?
15:57 cholcombe joined #gluster
16:01 haomaiwang joined #gluster
16:16 Peppard joined #gluster
16:20 spalai joined #gluster
16:29 wushudoin joined #gluster
16:43 luis_silva joined #gluster
16:43 spalai left #gluster
16:46 spalai joined #gluster
16:57 bfm joined #gluster
17:01 haomaiwa_ joined #gluster
17:01 rideh joined #gluster
17:02 julim_ joined #gluster
17:03 atinm joined #gluster
17:04 EinstCra_ joined #gluster
17:05 wushudoin| joined #gluster
17:06 Nuxr0 joined #gluster
17:08 hchiramm joined #gluster
17:08 DV joined #gluster
17:09 unclemarc joined #gluster
17:09 plarsen joined #gluster
17:16 vimal joined #gluster
17:26 bennyturns joined #gluster
17:31 PaulePanter joined #gluster
17:37 calavera joined #gluster
18:00 julim joined #gluster
18:01 haomaiwa_ joined #gluster
18:06 shaunm joined #gluster
18:09 Manikandan joined #gluster
18:13 ahino joined #gluster
18:16 cpetersen_ hey @JoeJulian
18:16 cpetersen_ or anyone for that matter
18:16 ovaistariq joined #gluster
18:17 cpetersen_ I have a question about my gluster and ganesha nfs setup
18:21 F2Knight joined #gluster
18:22 doekia joined #gluster
18:30 nickage_ joined #gluster
18:44 arcolife joined #gluster
18:46 mhulsman joined #gluster
18:50 F2Knight joined #gluster
18:53 luis_silva joined #gluster
18:53 F2Knight joined #gluster
18:59 chirino joined #gluster
19:01 JoeJulian Man, I wish emptydir hadn't gone away. I bet he created a client vol file and replaced /etc/glusterfs/glusterd.vol.
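If that guess is right, the usual fix is to leave the management daemon's volfile alone and mount the volume as a client instead. A minimal sketch, with hypothetical server, volume and mountpoint names:
    # /etc/glusterfs/glusterd.vol belongs to glusterd; don't replace it with a client volfile
    mount -t glusterfs server1:/myvol /mnt/myvol
    # or persistently in /etc/fstab:
    # server1:/myvol  /mnt/myvol  glusterfs  defaults,_netdev  0 0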
19:01 haomaiwa_ joined #gluster
19:01 JoeJulian cpetersen_: Ask it! :D
19:07 cpetersen_ I am attempting to follow the instructions on the RedHat documentation here:
19:07 cpetersen_ https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.1/html/Administration_Guide/sect-NFS.html
19:07 glusterbot Title: 7.2. NFS (at access.redhat.com)
19:07 cpetersen_ There is a part that notes, "Create multiple virtual IPs (VIPs) on the network for each of the servers configured in the ganesha-ha.conf file and assign them to any unused NIC."
19:07 cpetersen_ I have a single NIC on each node so I added aliases for each.
19:07 cpetersen_ ie, ifcfg-eth0:0
19:08 cpetersen_ That is a pre-requisite for using nfs-ganesha.
19:08 cpetersen_ A. is this proper?
19:09 cpetersen_ B. does the corosync/pcsd/pacemaker cluster get created automatically with the ganesha-ha.sh script?
19:11 cpetersen_ Sorry, I don't mean to ask questions about the cluster software, I know that's not the focus of this forum!
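For reference, such an alias on EL7 is just a second ifcfg file; a minimal sketch, with a placeholder documentation address rather than cpetersen_'s real one:
    # /etc/sysconfig/network-scripts/ifcfg-eth0:0 -- secondary address used as a ganesha VIP
    DEVICE=eth0:0
    ONPARENT=yes
    BOOTPROTO=static
    IPADDR=192.0.2.10
    NETMASK=255.255.255.0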
19:11 JoeJulian I'm not entirely sure. I use ganesha and ucarp which I configured manually.
19:12 JoeJulian And the question is perfectly germane, I just haven't done that yet. It's on my to-do list for this week though.
19:13 cpetersen_ On a scale of 1-10, how difficult and reliable is ucarp compared to the alternative?
19:13 cpetersen_ Is that something I should look at?
19:14 JoeJulian difficulty, 1. reliability, I'm leaning toward < 5 right now. I don't think I'm doing anything wrong, but the floating IP never seems to migrate back to the primary.
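For comparison, a ucarp setup is roughly one long-running command per node; the addresses, vhid and password below are placeholders rather than JoeJulian's actual configuration, and fail-back behaviour is governed by --preempt and --advskew:
    # primary: --preempt plus the lowest --advskew should let it reclaim the VIP when it comes back
    ucarp --interface=eth0 --srcip=192.0.2.11 --vhid=1 --pass=secret \
          --addr=192.0.2.100 --advskew=0 --preempt \
          --upscript=/etc/ucarp/vip-up.sh --downscript=/etc/ucarp/vip-down.sh
    # backup nodes: same command with their own --srcip and a higher --advskew (e.g. 50)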
19:16 cpetersen_ I find it odd that the instructions say to set up virtual IPs for every node...
19:16 cpetersen_ Why isn't there just a floating IP?
19:16 JoeJulian It does seem odd.
19:16 cpetersen_ Maybe I'm not understanding correctly.
19:16 JoeJulian Perhaps it's for load balancing?
19:17 cpetersen_ interesting
19:17 JoeJulian I know Facebook does multiple floating IPs per head. This allows failover to balance load to prevent hot spots.
19:23 cpetersen_ Can you tell me which log I would need to look at if my gluster volume fails to start?
19:24 JoeJulian probably one of the /var/log/glusterfs/bricks/* or else one of the /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
19:25 mhulsman joined #gluster
19:28 cpetersen_ Aha, thank you sir.
19:32 nickage_ joined #gluster
19:40 cpetersen_ Ugh.
19:40 cpetersen_ [2016-01-26 19:38:30.796314] E [socket.c:769:__socket_server_bind] 0-socket.glusterfsd: binding to  failed: Permission denied
19:41 nickage_ getting: mount.nfs: access denied by server while mounting, while trying to mount the gluster volume as nfs, any ideas why?
19:41 cpetersen_ I'm not sure why it's not working.  The only thing I did differently was the interface virtual ip.
19:41 JoeJulian nickage_: usually it's because the kernel nfs was running when the volume was started.
19:44 nickage_ JoeJulian: what would be the fix for it? Should I stop it ?
19:46 JoeJulian Yes, the kernel NFS cannot be running or registered with rpcbind. Then you can restart the gluster nfs daemon with "gluster volume start $volname force"
19:47 cpetersen_ force worked for me as well
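Spelled out as commands, assuming an EL7 box and a volume called myvol:
    systemctl stop nfs-server            # make sure the kernel NFS server is down
    rpcinfo -p | grep -E 'nfs|mountd'    # verify nothing is still registered with rpcbind
    gluster volume start myvol force     # restarts any missing daemons, including gluster's NFS server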
19:48 nickage_ is there a way to check if cluster nfs daemon is running after restart ?
19:48 nickage_ *gluster
19:48 nickage_ is it separate process
19:49 JoeJulian It is. Best is to check the logs in /var/log/glusterfs/nfs.log
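Two quick checks, assuming a volume called myvol:
    gluster volume status myvol nfs      # shows whether the NFS server is online and its port/PID per node
    ps ax | grep '[g]lusterfs.*nfs'      # the gluster NFS server runs as its own glusterfs process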
19:50 nickage_ yeh, I think I found it
19:51 cpetersen_ nope I have a port blocked somehow
19:51 cpetersen_ [2016-01-26 19:47:32.712301] E [socket.c:769:__socket_server_bind] 0-socket.glusterfsd: binding to  failed: Permission denied
19:51 cpetersen_ [2016-01-26 19:47:32.712345] W [rpcsvc.c:1604:rpcsvc_transport_create] 0-rpc-service: listening on transport failed
19:51 cpetersen_ ugh
19:51 nickage_ oh, that worked!!!!
19:52 nickage_ you guys are awesome
19:52 JoeJulian Thanks. :)
19:52 neofob joined #gluster
19:53 klaxa joined #gluster
19:55 farhorizon joined #gluster
20:01 haomaiwa_ joined #gluster
20:07 spalai left #gluster
20:09 JoeJulian That's f'ing useless: gf_log (this->name, GF_LOG_ERROR, "binding to %s failed: %s", this->myinfo.identifier, strerror (errno));
20:09 JoeJulian Since this is what's failing: bind (priv->sock, (struct sockaddr *)&this->myinfo.sockaddr, this->myinfo.sockaddr_len)
20:11 JoeJulian Hrm, where did cpetersen go?
20:12 JoeJulian I hate working on finding an answer just to have the recipient of that answer dissapear.
20:12 JoeJulian *disappear.
20:18 mhulsman joined #gluster
20:22 gildub joined #gluster
20:30 mowntan joined #gluster
20:39 om joined #gluster
20:39 calavera joined #gluster
20:41 mhulsman joined #gluster
20:43 cpetersen joined #gluster
20:46 cpetersen_ joined #gluster
20:46 MessedUpHare joined #gluster
20:47 MessedUpHare 'ello everyone
20:47 MessedUpHare I've been trying to look around for this, but google has shown me nothing..
20:47 JoeJulian Google usually shows me way more than I wanted to see.
20:48 MessedUpHare but i'm seeing if anyone has discussed backing glusterfs with lto as a tier 2 storage for the tiering feature
20:49 MessedUpHare If not, where do I start if I want to try to hack this functionality in? either as geo replication or a slower tier?
20:49 dthrvr joined #gluster
20:50 JoeJulian linear tape? Interesting.
20:50 JoeJulian I haven't heard of anything.
20:51 MessedUpHare Yeah, so currently, I have a customer who has large single asset files
20:51 MessedUpHare video files
20:51 MessedUpHare and quite often, they aren't touched for months at a time
20:51 MessedUpHare I was wondering if I can put gluster on the case to drop cold files off to cold storage..
20:52 JoeJulian Can LTO be mounted as a posix filesystem?
20:53 MessedUpHare LTFS can
20:53 MessedUpHare (LTFS 5+)
20:53 JoeJulian If it's posix and supports extended attributes, the recently added tiering capability should work beautifully for that.
20:54 MessedUpHare I think this could be a good feature for this use case. Is there a way I could control the backend of it? (I need to use scsi commands to mount the tape, then the filesystem)
20:55 MessedUpHare or a way I can place "stub" files (not sure what they are really called) that result in a call to my middleware?
21:01 haomaiwa_ joined #gluster
21:02 JoeJulian MessedUpHare: I'm not sure if it's what you're looking for, but look at the hooks directories under /var/lib/glusterd.
21:03 MessedUpHare JoeJulian: I guess i'm not 100% sure yet, I'd like single assets which appear and stabilise in a given volume to be copied onto lto... then
21:04 JoeJulian Look at how tiered storage works (I haven't even really looked at that yet, myself) http://gluster.readthedocs.org/en/release-3.7.0beta1/Features/tier/
21:04 glusterbot Title: Tier - Gluster (at gluster.readthedocs.org)
21:05 MessedUpHare after a period of disuse (or perhaps another API call) they are removed from spinning disk - but a dummy file is left in the volume that causes a copy back from tape when file operations occur on it?
21:05 JoeJulian http://www.slideshare.net/JosephFernandes9/gluster-data-tiering
21:05 glusterbot Title: Gluster Data Tiering (at www.slideshare.net)
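For orientation, the 3.7 tiering commands look roughly like this; the brick paths and volume name are hypothetical, and note that the attached bricks become the hot tier while the pre-existing volume becomes the cold tier, so for this idea the LTFS-backed bricks would have to be the original volume:
    gluster volume attach-tier myvol replica 2 node1:/fastdisk/hotbrick node2:/fastdisk/hotbrick
    # and to undo it later:
    gluster volume detach-tier myvol start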
21:06 MessedUpHare JoeJulian: Thanks for the tips, I need to figure out where on earth to start with all this
21:06 JoeJulian Yep, that's always the fun part.
21:09 farhoriz_ joined #gluster
21:12 mhulsman joined #gluster
21:32 EinstCrazy joined #gluster
21:39 JoeJulian @ppa
21:39 glusterbot JoeJulian: The official glusterfs packages for Ubuntu are available here: 3.5: http://goo.gl/6HBwKh 3.6: http://goo.gl/XyYImN 3.7: https://goo.gl/aAJEN5 -- See more PPAs for QEMU with GlusterFS support, and GlusterFS QA releases at https://launchpad.net/~gluster -- contact semiosis with feedback
21:42 deniszh joined #gluster
21:52 post-factum joined #gluster
21:55 MessedUpHare joined #gluster
21:56 MessedUpHare joined #gluster
22:00 calavera joined #gluster
22:01 17WABNN7F joined #gluster
22:05 farhorizon joined #gluster
22:06 MessedUpHare joined #gluster
22:10 nickage_ joined #gluster
22:11 ira joined #gluster
22:14 JoeJulian hagarth: Upgrading production from 3.4.4 to 3.6.8. So far so good except, and this probably makes sense but I'll ask anyway, the 3.6 shd is in a loop: [2016-01-26 22:10:09.344777] W [rpc-clnt-ping.c:145:rpc_clnt_ping_cbk] 0-gv-nova-client-2: socket or ib related error
22:14 JoeJulian I've only upgraded one server.
22:17 F2Knight joined #gluster
22:18 Liquid-- joined #gluster
22:18 post-factum JoeJulian: mixing different glusterfs branches is basically a bad idea leading to unexpected fun with a broken cluster
22:21 JoeJulian post-factum: Yeah, I've done this once or twice.
22:22 JoeJulian But from time-to-time I like to run things by my friend to make sure things are going the way he also expects them to go.
22:24 nickage_ so if a brick is not starting anymore for some reason or other, can I just replace it with an empty one? will it then be resynced with the others that have data?
22:24 nickage_ in a replicated brick setup, I mean
22:25 JoeJulian nickage_: Yes, but the volume-id will be missing, of course, from the new brick. You can ensure the new brick is mounted and the brick root directory exists, then start it with "gluster volume start $volname force" to have that recreated.
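As a sketch, with placeholder device, path and volume names:
    mount /dev/sdb /bricks/brick1        # mount the replacement disk where the old brick lived
    mkdir -p /bricks/brick1/data         # recreate the brick root directory
    gluster volume start myvol force     # recreates the volume-id xattr and starts the brick
    gluster volume heal myvol full       # trigger a full self-heal from the surviving replica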
22:29 JoeJulian post-factum: Oh, that's you!
22:30 nickage_ ok, gonna try it on my test cluster setup
22:30 JoeJulian post-factum: Thanks for finding those leaks. That's going to save me so much headache.
22:31 post-factum JoeJulian: leaks are not gone yet :(
22:31 JoeJulian I know, but you're close.
22:32 post-factum JoeJulian: it seems there are plenty of them, unrelated to one another
22:32 post-factum JoeJulian: I'm trying hard to make things work for us. we have a damned mail storage volume with several million files, and that leaks a lot
22:32 JoeJulian I'm pretty sure almost all of the guys you've been dealing with are in Bangalore so they're not going to be online for at least 6 hours.
22:33 neofob left #gluster
22:34 JoeJulian I think some of the leaks you're finding are ones I was trying to get quashed back in 3.4 (before my employer lost patience and got me busy doing other things).
22:34 post-factum not a problem, will chat with them later if necessary. rsync test under valgrind is much slower than w/o valgrind, so the final results are going to emerge in a week or so
22:34 JoeJulian The memory allocation has changed significantly since then, making these much easier to find now.
22:35 post-factum i guess i face those leaks as i have a corner-case volume with lots of files
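For reference, that kind of leak hunt on the client side looks roughly like this; the server, volume and paths are placeholders, and -N keeps the fuse client in the foreground so valgrind can watch it:
    valgrind --leak-check=full --log-file=/tmp/glusterfs-valgrind.log \
        /usr/sbin/glusterfs -N --volfile-server=server1 --volfile-id=mailvol /mnt/test
    rsync -a /srv/manyfiles/ /mnt/test/rsync-run/   # drive the mount, then unmount and read the valgrind log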
22:35 shyam joined #gluster
22:35 post-factum and that is the reason i've almost deployed cephfs, but it failed :)
22:35 om joined #gluster
22:36 JoeJulian Well, IO had me deploy cephfs. Now we're going back to Gluster, fwiw.
22:36 JoeJulian Nothing particularly wrong with ceph, but cephfs is not ready.
22:37 post-factum yeah, unfortunately, with ceph rbd being ready for production and used by us for vm storage, cephfs is not that ready
22:37 MessedUpHare joined #gluster
22:37 post-factum it simply fails to throttle clients, especially with journals and metadata on ssds and the main capacity on hdds
22:37 post-factum iow, cephfs trusts its clients too much :)
22:38 JoeJulian Hey, what part of the world are you in?
22:38 post-factum Ukraine. you?
22:38 JoeJulian Seattle
22:38 post-factum too far from me :)
22:38 JoeJulian Guess you don't get out to many conferences...
22:38 post-factum mmm?
22:38 JoeJulian I'm going to be speaking in Italy in April.
22:39 post-factum oh, that. subject?
22:39 JoeJulian No, I'll be doing salt deployment of clustered systemd.
22:39 JoeJulian But Ceph and Gluster are my examples.
22:40 post-factum i see. mainly, i see conference videos "post factum" on youtube :)
22:41 post-factum i'd listen to a cephfs deployment success story, if there is any
22:43 cpetersen_ joined #gluster
22:43 post-factum btw, do you use glusterfs shards?
22:44 post-factum i've faced unfair brick load imbalance with large single-tar backups, and am going to try shards to distribute large files better
22:44 JoeJulian No, I'm a strong believer in keeping files whole whenever possible. It's just the idea of having that final failsafe where, in a disaster situation, you can send that drive off to a clean room and recover the client's data.
22:45 post-factum that is strong reason, but we are going to keep backups in replica 2 anyway
22:45 JoeJulian That does sound like a good reason to use shards though.
22:46 JoeJulian We do replica 3.
22:46 JoeJulian But our SLA is 6 nines.
22:46 post-factum in fact we do replica 4, but with many raid-1 arrays within one node and replica 2 between nodes
22:47 post-factum i'd be glad to make replica 3, but we have only 2 DC for rack placement
22:47 post-factum wish it could be 3 DC
22:47 post-factum the question is whether shards are production-ready as of 3.7.6
22:48 JoeJulian I think I read an email thread that would make me wait for the next point release.
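For the record, sharding in 3.7 is a per-volume toggle, roughly as below; the volume name and block size are illustrative, and it only affects files created after it is enabled:
    gluster volume set backups features.shard on
    gluster volume set backups features.shard-block-size 64MB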
22:49 hagarth JoeJulian: do you have the complete shd log?
22:50 post-factum JoeJulian: what thread?
22:53 JoeJulian hagarth: I do, but it grew to 100mb pretty damned quickly. Those lines were coming within milliseconds of each other. Seems to have slowed down though. Interesting.
22:54 JoeJulian post-factum: I'd have to find it, but I can't do that now. I'm in the middle of this upgrade.
22:54 JoeJulian The brick it was complaining about was simultaneously throwing [2016-01-26 22:53:30.250268] E [rpcsvc.c:195:rpcsvc_program_actor] 0-rpc-service: RPC Program procedure not available for procedure 2 in GF-DUMP
22:55 JoeJulian looks like the heals happened, though, so panic mode is over. :)
22:59 hagarth JoeJulian: ok, let me know if you still continue to see those logs
23:01 gbox joined #gluster
23:01 haomaiwa_ joined #gluster
23:06 nathwill joined #gluster
23:10 calavera joined #gluster
23:12 calavera joined #gluster
23:47 plarsen joined #gluster
23:47 cpetersen_ I still can't figure out why I can't start my gluster volume.
23:47 cpetersen_ [2016-01-26 19:47:32.712301] E [socket.c:769:__socket_server_bind] 0-socket.glusterfsd: binding to  failed: Permission denied
23:47 cpetersen_ [2016-01-26 19:47:32.712345] W [rpcsvc.c:1604:rpcsvc_transport_create] 0-rpc-service: listening on transport failed
23:48 JoeJulian Which log file is that?
23:48 cpetersen_ the brick
23:48 JoeJulian Oh, right... I was helping you and you disappeared. :P
23:49 cpetersen_ Ah you were helping MessedUpHare, then I had to get to an appointment.  :)
23:49 cpetersen_ Sorry, I probably should have said something!
23:49 JoeJulian Short of an strace, I don't know how to find that it's actually trying to do. That log message references structure data that's not even used in the command that fails.
23:50 cpetersen_ Damn, I probably need to revert to a snapshot then...
23:50 JoeJulian s/that/what
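One hedged way to catch the failing bind is to attach strace to glusterd and let it follow the glusterfsd it spawns, using the volume name from the log above:
    strace -f -e trace=bind -o /tmp/bind.trace -p $(pidof glusterd) &
    gluster volume start gluster_shared_storage force
    # then look for bind() calls returning EACCES (Permission denied) in /tmp/bind.trace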
23:50 cpetersen_ The only thing I really changed was adding another network adapter.
23:50 JoeJulian Which reminds me, I should file a bug report
23:50 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
23:51 cpetersen_ :)
23:54 cpetersen_ This is another part that was missing.
23:54 cpetersen_ [2016-01-26 19:50:09.053020] I [MSGID: 100030] [glusterfsd.c:2318:main] 0-/usr/sbin/glusterfsd: Started running /usr/sbin/glusterfsd version 3.7.6 (args: /usr/sbin/glusterfsd -s file01 --volfile-id gluster_shared_storage.file01.var-run-gluster-shared_storage -p /var/lib/glusterd/vols/gluster_shared_storage/run/file01-var-run-gluster-shared_storage.pid -S /var/run/gluster/9d0ab27de91760f3b950ee19c0f00cea.socket --brick-name /var/run/gluster/shared_stor
23:54 cpetersen_ age -l /var/log/glusterfs/bricks/var-run-gluster-shared_storage.log --xlator-option *-posix.glusterd-uuid=ad1388cd-c085-4e1f-8f71-0c543a3c3b52 --brick-port 49155 --xlator-option gluster_shared_storage-server.listen-port=49155)
23:55 JoeJulian cpetersen_: did I ask about selinux already?
23:55 cpetersen_ each time I try to start it the log has a new port number, so it's dynamic
23:55 JoeJulian Yes
23:55 cpetersen_ selinux is set to enforcing
23:55 JoeJulian Let's make it permissive and see if it works.
23:56 JoeJulian Just don't tell mhayden I said that.
23:59 cpetersen_ after changing the config do I really need to reboot or is there a service I can cycle?
23:59 JoeJulian setenforce 0
23:59 JoeJulian That's all you actually have to do.
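To make that persist across reboots, and to confirm SELinux really was the culprit (stock EL7 paths assumed):
    setenforce 0                                                            # runtime only
    sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config   # persists across reboots
    ausearch -m avc -ts recent | grep glusterfsd                            # any AVC denials hitting the brick process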
