
IRC log for #gluster, 2015-10-06


All times shown according to UTC.

Time Nick Message
00:03 kr0w Mmm, I would rather not
00:05 kr0w I think downgrading is the best option.. yum isn't liking it though
00:05 CyrilPeponnet seriously, 3.6 or 3.7 is way better than 3.4
00:07 kr0w Really?
00:08 kr0w Hmm
00:08 CyrilPeponnet you should give nfs-ganesha a try, I think
00:08 CyrilPeponnet failover with a vip is transparent for clients
00:09 CyrilPeponnet no downtime
00:09 CyrilPeponnet the ip is the same, and the backend is handled by gluster
00:10 kr0w I have never set it up before. Do you have a URL for configuring a VIP in this environment?
00:11 kr0w I am just on a local network
00:11 kr0w No local dns is used.
00:11 CyrilPeponnet you don't need it
00:11 CyrilPeponnet are you using puppet ?
00:12 kr0w No
00:12 CyrilPeponnet https://raymii.org/s/tutorials/Keepalived-Simple-IP-failover-on-Ubuntu.html
00:12 glusterbot Title: Simple keepalived failover setup on Ubuntu 14.04 - Raymii.org (at raymii.org)
00:12 CyrilPeponnet basically you will need a third ip for the vip
00:12 CyrilPeponnet and keepalived
00:12 CyrilPeponnet that's it
00:12 kr0w Ah
00:13 kr0w Ok, I will give it a shot, thanks!
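
For reference, a minimal keepalived configuration along the lines CyrilPeponnet describes might look like the following. This is a sketch, not from the conversation: the interface name, VIP, and password are placeholder values.

    # /etc/keepalived/keepalived.conf on the primary node; the second
    # node uses state BACKUP and a lower priority
    vrrp_instance VI_1 {
        state MASTER
        interface eth0            # placeholder interface
        virtual_router_id 51
        priority 150
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass changeme    # placeholder password
        }
        virtual_ipaddress {
            192.168.1.100         # the third "vip" address
        }
    }
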
00:16 gildub joined #gluster
00:24 kr0w Hmm
00:24 kr0w I am close. The virtual IP works on the main one, but when it goes down the other doesn't pick it up
00:27 AceFacee joined #gluster
00:27 kr0w Nice, only 747ms downtime to switch
00:27 AceFacee hello all!
00:27 kr0w CyrilPeponnet: Thanks I will go with that.
00:28 jobewan joined #gluster
00:29 julim joined #gluster
00:32 kr0w Hmm
00:32 kr0w the nfs client isn't mounting the volume
00:35 AceFacee Hi All, I have an "Input/output" error when I try to use dd on a mounted glusterfs volume, see here: http://pastebin.com/14uX2zgV
00:35 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
00:35 AceFacee What would be my first things to troubleshoot?
00:36 AceFacee new paste here: http://fpaste.org/275140/14440917/
00:36 glusterbot Title: #275140 Fedora Project Pastebin (at fpaste.org)
00:38 nangthang joined #gluster
00:53 lcurtis_ joined #gluster
00:57 kr0w Anyone here to answer this? From the documentation it looks like I should just be able to use nfs, but it just hangs when trying to mount.
00:58 kr0w Even when trying to mount on the server running gluster
00:59 AceFacee kr0w: you might have to specify nfs version 3
00:59 AceFacee when mounting
00:59 AceFacee (just a guess)
01:01 kr0w I found rpcbind wasn't started (I didn't see that in the docs, but I did in someone's troubleshooting notes)
01:02 kr0w Yup, now it works like a champ
01:02 kr0w Thanks AceFacee
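
A sketch of the fix kr0w arrived at, assuming an EL-style init system (gluster's built-in NFS server is v3 only and depends on rpcbind; the hostname and paths below are placeholders):

    # start rpcbind before mounting
    service rpcbind start
    # mount via gluster's NFS server, forcing v3 as AceFacee suggested
    mount -t nfs -o vers=3 server1:/myvol /mnt/gluster
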
01:09 vikumar joined #gluster
01:10 ChrisHolcombe joined #gluster
01:32 Lee1092 joined #gluster
01:48 neha_ joined #gluster
02:22 jobewan joined #gluster
02:25 gildub joined #gluster
02:32 JoeJulian AceFacee++
02:32 glusterbot JoeJulian: AceFacee's karma is now 1
02:33 JoeJulian AceFacee check "gluster volume heal DSR info"
02:33 JoeJulian it sounds like split-brain.
02:34 JoeJulian @split-brain
02:34 glusterbot JoeJulian: To heal split-brains, see https://gluster.readthedocs.org/en/release-3.7.0/Features/heal-info-and-split-brain-resolution/ For additional information, see this older article https://joejulian.name/blog/fixing-split-brain-with-glusterfs-33/ Also see splitmount https://joejulian.name/blog/glusterfs-split-brain-recovery-made-easy/
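
The commands behind JoeJulian's suggestion, with DSR standing in for the volume name as in his message:

    # files the self-heal daemon still has pending
    gluster volume heal DSR info
    # only the entries gluster considers split-brain
    gluster volume heal DSR info split-brain
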
02:50 dlambrig_ left #gluster
03:09 anoopcs joined #gluster
03:16 haomaiwa_ joined #gluster
03:17 harish joined #gluster
03:24 clutchk joined #gluster
03:30 atinm joined #gluster
03:33 vimal joined #gluster
03:35 vmallika joined #gluster
03:36 nishanth joined #gluster
03:38 TheSeven joined #gluster
03:39 stickyboy joined #gluster
03:42 haomaiwa_ joined #gluster
03:48 kanagaraj joined #gluster
03:49 shubhendu joined #gluster
03:53 bharata-rao joined #gluster
03:55 asrivast joined #gluster
03:59 ramteid joined #gluster
04:01 17SADQY7J joined #gluster
04:06 nbalacha joined #gluster
04:10 nbalacha joined #gluster
04:11 nbalacha joined #gluster
04:29 harish joined #gluster
04:31 yazhini joined #gluster
04:32 sakshi joined #gluster
04:35 rafi joined #gluster
04:40 kotreshhr joined #gluster
04:45 auzty joined #gluster
04:49 jobewan joined #gluster
04:56 jockek joined #gluster
04:57 Manikandan joined #gluster
04:58 jtux joined #gluster
04:59 ndarshan joined #gluster
05:00 kr0w joined #gluster
05:00 kr0w Anyone available
05:00 kr0w ?
05:00 maveric_amitc_ joined #gluster
05:01 haomaiwa_ joined #gluster
05:01 ashiq joined #gluster
05:02 rafi kr0w: hey
05:02 ppai joined #gluster
05:03 paescuj joined #gluster
05:05 csim joined #gluster
05:05 chirino joined #gluster
05:07 rideh joined #gluster
05:08 shortdudey123 joined #gluster
05:08 wnlx joined #gluster
05:09 _fortis joined #gluster
05:09 pppp joined #gluster
05:09 jiqiren joined #gluster
05:10 vimal joined #gluster
05:10 gem joined #gluster
05:10 jiffin joined #gluster
05:18 kr0w rafi: Hi!
05:18 kr0w I was beginning to think I should try again in the morning. :D
05:19 rafi kr0w: Do you still think ? :)
05:20 kr0w So I am hoping someone here can help me with performance issues. Or maybe what I'm seeing is correct. I can get write speeds of 1 GB/s on the local glusterfs server.
05:20 rafi kr0w:
05:20 rafi kr0w: oke
05:20 kr0w The two servers are identical. And they are connected over a bonded 40Gb/s link
05:21 kr0w 2 links I should say, giving a theoretical throughput of 80Gb/s on the network
05:21 kr0w When I write to the glusterfs mounted volume I get 500MB/s
05:21 kr0w That is half of 1 GB/s and the volume is a replica 2
05:22 jtux joined #gluster
05:22 kr0w Is that what I should expect?
05:23 rafi kr0w: I guess so, because glusterfs writes to both replicas, so you should theoretically get half
05:24 kr0w I was hoping to get the full speed since they are able to write at 1GB/s each. I figured even though it duplicates the data, the data can be written to both at the same time at 1GB/s..
05:25 kdhananjay joined #gluster
05:25 kr0w But I guess here is the big question I came here for. Sometimes the speed drops down to 300 MB/s when I know I am the only one using it (we are still testing to see if we will use it in production), and I can't figure out why.
05:25 rafi bennyturns: ^
05:26 rafi rastar: ^
05:26 kr0w Thanks rafi. I can just come back in the morning
05:26 rafi kr0w: let me ask to the performance experts
05:26 kr0w Ok
05:27 maveric_amitc_ joined #gluster
05:27 rafi kr0w: meantime, if you are stuck, you can drop a mail to the gluster-users and gluster-devel mailing lists
05:27 jatb joined #gluster
05:28 rafi kr0w: I'm sure most of the performance experts can help you there
05:28 kr0w Ok, I know the answer will probably be that I need to test throughput and other things. But I haven't found a good way to do that. I have just been using dd to test the write speeds so far
05:28 hgowtham joined #gluster
05:29 rjoseph joined #gluster
05:29 kr0w I should probably get iozone installed and try that. So I will do that and check back tomorrow
05:29 rafi kr0w: sure
05:30 kr0w Thanks
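
A rough version of the tests kr0w describes; oflag=direct keeps the client page cache from inflating dd numbers (file paths and sizes here are illustrative):

    # sequential write test against the gluster mount, bypassing the page cache
    dd if=/dev/zero of=/mnt/gluster/testfile bs=1M count=4096 oflag=direct
    # iozone alternative: write/rewrite (-i 0) of a 4 GB file in 1 MB records
    iozone -i 0 -s 4g -r 1m -f /mnt/gluster/iozone.tmp
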
05:38 F2Knight joined #gluster
05:39 msciciel1 joined #gluster
05:40 kovshenin joined #gluster
05:41 hagarth joined #gluster
05:44 arcolife joined #gluster
05:46 neha_ joined #gluster
05:52 doekia joined #gluster
05:53 aravindavk joined #gluster
06:01 haomaiwa_ joined #gluster
06:03 Bhaskarakiran joined #gluster
06:03 dusmant joined #gluster
06:03 hagarth joined #gluster
06:06 karnan joined #gluster
06:07 kshlm joined #gluster
06:08 bennyturns joined #gluster
06:11 ndarshan joined #gluster
06:12 mhulsman joined #gluster
06:17 kayn joined #gluster
06:28 raghu joined #gluster
06:32 anil joined #gluster
06:42 Apeksha joined #gluster
06:43 arcolife joined #gluster
06:43 ramky joined #gluster
06:43 nangthang joined #gluster
06:45 ndarshan joined #gluster
06:50 pg joined #gluster
06:52 nbalacha joined #gluster
06:54 vmallika joined #gluster
07:01 atalur joined #gluster
07:04 asrivast joined #gluster
07:15 pg joined #gluster
07:17 [Enrico] joined #gluster
07:19 Bhaskarakiran joined #gluster
07:20 d-fence joined #gluster
07:20 nbalacha joined #gluster
07:30 Simmo If at a certain point in time a partition (mounted as a brick) needs to be increased in size, what would be the logical steps to execute? :-/
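
Simmo's question goes unanswered in this log. For what it's worth, when the brick happens to sit on an LVM logical volume with XFS (an assumption; the steps differ for other stacks), the usual approach is to grow the device and filesystem online under the brick:

    # assumed stack: XFS on LVM; device and mount point are placeholders
    lvextend -L +100G /dev/vg0/brick1        # grow the logical volume
    xfs_growfs /bricks/brick1                # grow XFS while it stays mounted
    df -h /bricks/brick1                     # gluster sees the new space automatically
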
07:30 nbalacha joined #gluster
07:31 Raide joined #gluster
07:31 nbalacha joined #gluster
07:34 livelace joined #gluster
07:37 Simmo When possible, is it preferred to have fewer volumes/bricks (e.g. 1)? For example, is it better to have 1 brick (1 volume) instead of 2 (2 partitions -> 2 volumes -> 2 bricks)?
07:37 Simmo (ehm maybe better saying 2 partitions -> 2 bricks -> 2 volumes)
07:44 mufa joined #gluster
07:50 fsimonce joined #gluster
07:53 LebedevRI joined #gluster
07:58 pg joined #gluster
08:13 kovshenin joined #gluster
08:13 kovsheni_ joined #gluster
08:18 LebedevRI joined #gluster
08:18 jwd joined #gluster
08:24 Slashman joined #gluster
08:32 ctria joined #gluster
08:32 Pupeno joined #gluster
08:48 R0ok_ joined #gluster
08:50 asrivast left #gluster
08:57 nishanth joined #gluster
09:06 _shaps_ joined #gluster
09:08 harish_ joined #gluster
09:08 R0ok_ joined #gluster
09:11 askb joined #gluster
09:19 Manikandan joined #gluster
09:20 nbalacha joined #gluster
09:21 pg joined #gluster
09:24 atalur joined #gluster
09:25 rjoseph joined #gluster
09:29 nbalacha joined #gluster
09:38 gem joined #gluster
09:39 stickyboy joined #gluster
09:42 deepakcs joined #gluster
09:45 hchiramm joined #gluster
09:46 gildub joined #gluster
10:05 Manikandan joined #gluster
10:20 Manikandan joined #gluster
10:28 skoduri joined #gluster
10:28 yazhini joined #gluster
10:38 nishanth joined #gluster
10:41 Philambdo joined #gluster
10:44 Raide joined #gluster
10:47 kkeithley1 joined #gluster
10:48 gem joined #gluster
10:56 Shridhar joined #gluster
10:57 Shridhar hi
10:57 glusterbot Shridhar: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
10:58 Shridhar Only a few files are not healed to the slave server
10:58 Shridhar Can anyone help with this?
10:59 DV__ joined #gluster
11:04 Bhaskarakiran joined #gluster
11:06 Shridhar left #gluster
11:06 Manikandan joined #gluster
11:12 hagarth joined #gluster
11:20 yazhini joined #gluster
11:21 Bhaskarakiran joined #gluster
11:24 Bhaskarakiran joined #gluster
11:24 gildub joined #gluster
11:31 DV__ joined #gluster
11:36 Raide joined #gluster
11:39 rjoseph joined #gluster
11:42 B21956 joined #gluster
11:57 Raide joined #gluster
12:08 Raide joined #gluster
12:18 rjoseph joined #gluster
12:25 Bhaskarakiran joined #gluster
12:25 haomaiwa_ joined #gluster
12:26 sakshi joined #gluster
12:27 chirino joined #gluster
12:28 spalai joined #gluster
12:30 Raide joined #gluster
12:31 spalai left #gluster
12:33 Raide joined #gluster
12:43 unclemarc joined #gluster
12:44 Raide joined #gluster
12:45 beeradb joined #gluster
12:50 julim joined #gluster
12:50 firemanxbr joined #gluster
12:54 B21956 joined #gluster
13:01 kdhananjay joined #gluster
13:01 haomaiwa_ joined #gluster
13:05 Raide joined #gluster
13:06 shyam joined #gluster
13:06 plarsen joined #gluster
13:11 EinstCrazy joined #gluster
13:17 dusmant joined #gluster
13:20 arcolife joined #gluster
13:22 bennyturns kr0w, I usually think of it as throughput = (1/2 {for replica 2} * NIC BW) - 30%, so on 10Gb NICs with a back end fast enough to service them I would expect ~400-500 MB/sec
13:22 bennyturns kr0w, if your back end won't service 40 Gb then in your case you will be bottlenecked by your storage
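
Plugging numbers into bennyturns' rule of thumb, as a back-of-the-envelope check (not a benchmark):

    10 Gb/s NIC:  (1/2 * 10 Gb/s) - 30% = 3.5 Gb/s  ~  437 MB/s  (the ~400-500 MB/sec above)
    40 Gb/s link: (1/2 * 40 Gb/s) - 30% = 14 Gb/s   ~ 1750 MB/s  (so kr0w's ~500 MB/s points
                                                                  at a storage bottleneck)
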
13:28 haomaiwa_ joined #gluster
13:28 harold joined #gluster
13:32 Leildin joined #gluster
13:36 Manikandan joined #gluster
13:39 Raide joined #gluster
13:40 ira joined #gluster
13:41 jiffin joined #gluster
13:41 mpietersen joined #gluster
13:42 Leildin joined #gluster
13:42 Leildin joined #gluster
13:43 Leildin joined #gluster
13:48 spcmastertim joined #gluster
13:49 haomaiwang joined #gluster
13:50 skylar joined #gluster
13:55 neha_ joined #gluster
13:56 arcolife joined #gluster
13:58 shubhendu joined #gluster
14:01 coredump joined #gluster
14:01 haomaiwa_ joined #gluster
14:05 kayn guys, is there anyone who could explain something weird during the heal process?
14:08 aaronott joined #gluster
14:09 kanagaraj joined #gluster
14:16 dusmant joined #gluster
14:16 beeradb joined #gluster
14:20 nishanth joined #gluster
14:23 gem joined #gluster
14:26 deni joined #gluster
14:27 deni I'm running into issues with not being able to upload files to gluster unless I chmod 777 /var/log/glusterfs -R
14:27 deni has anybody encountered this? I should note that I'm using the API (the python bindings to be more precise)
14:31 mpietersen joined #gluster
14:39 maserati joined #gluster
14:45 beeradb joined #gluster
14:46 shyam left #gluster
14:48 Leildin joined #gluster
14:55 haomaiwa_ joined #gluster
14:57 kanagaraj joined #gluster
15:00 thoht hi
15:00 glusterbot thoht: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
15:01 thoht if a node crashed and I've recovered the configuration, should I recover the data myself with rsync?
15:01 kayn joined #gluster
15:01 haomaiwa_ joined #gluster
15:02 JoeJulian thoht: no
15:03 JoeJulian deni: You can: (1) pre-create the log file with the permissions for the user that's opening the log file; or (2) specify a different log file path before init.
15:04 JoeJulian Or 3, I think there's a way to log to syslog.
15:04 shyam joined #gluster
15:05 deni JoeJulian: I do pre-create the dir and chmod 777 on it. But then the process starts (it starts as root) and creates files in it first. Then I chmod 777 recursively and then it works
15:05 deni JoeJulian: syslog is an issue because it won't work for everything (from what I read)...some things will still go to files
15:05 deni I'd much rather if everything would go to say stdout so that I can redirect it where it's needed...but I haven't found a way to do that
15:06 JoeJulian What if you change the logfile to /dev/stdout?
15:09 Leildin joined #gluster
15:09 deni JoeJulian: is there a global configuration I can do that in? Cause I can't find that option in the glusterd.vol file
15:10 JoeJulian It's done in the api calls.
15:11 bowhunter joined #gluster
15:11 ayma joined #gluster
15:12 deni so only as part of invoking the CLI or talking directly over the API
15:12 deni ?
15:12 deni (I imagine the cli talks through the API as well)
15:13 deni the documentation for this is very very sparse....either that or I can't find it. I want to be able to move all logs to stdout and not have anything log to /var/log/glusterfs
15:13 malevolent_ joined #gluster
15:13 malevolent__ joined #gluster
15:13 zhangjn joined #gluster
15:15 JoeJulian I've seen good documentation, but finding it is a bitch.
15:15 JoeJulian I'm trying to help.
15:16 zhangjn joined #gluster
15:17 deni JoeJulian: I appreciate the help, because I'm really struggling with this....
15:17 JoeJulian https://github.com/avati/glusterfs/blob/libgfapi/api/examples/gfapi.py
15:17 glusterbot Title: glusterfs/gfapi.py at libgfapi · avati/glusterfs · GitHub (at github.com)
15:17 JoeJulian There's at least that bit...
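
A minimal sketch in the style of that linked example: ctypes against libgfapi's C calls, pointing the log at /dev/stdout as JoeJulian suggested. The host and volume names are placeholders.

    # assumed: libgfapi.so.0 is installed; "server1" and "myvol" are placeholders
    from ctypes import CDLL, c_void_p

    api = CDLL("libgfapi.so.0")
    api.glfs_new.restype = c_void_p        # returns a glfs_t pointer
    fs = api.glfs_new(b"myvol")
    api.glfs_set_volfile_server(c_void_p(fs), b"tcp", b"server1", 24007)
    api.glfs_set_logging(c_void_p(fs), b"/dev/stdout", 7)   # 7 = debug level
    api.glfs_init(c_void_p(fs))
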
15:17 zhangjn joined #gluster
15:18 xavih joined #gluster
15:19 kr0w bennyturns: I don't know if you are still around, but thanks for the explanation!
15:19 wushudoin joined #gluster
15:19 JoeJulian bennyturns++
15:19 glusterbot JoeJulian: bennyturns's karma is now 5
15:20 kr0w Nice
15:22 deni JoeJulian: I see. Tnx. I'll see what I can do.
15:27 JoeJulian Well that's lame. There's a /etc/sysconfig/glusterd but it's never used in the systemd service.
15:28 zhangjn joined #gluster
15:30 zhangjn joined #gluster
15:35 Simmo joined #gluster
15:36 Simmo Hi again : )
15:37 Simmo I'm afraid I need your help to debug my simple setup: 1 brick, 1 volume, 2 servers. Not sure what I've missed, but writes to the mounted volume do not get replicated onto the other machine :-/
15:39 JoeJulian ~pasteinfo | Simmo
15:39 glusterbot Simmo: Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
15:40 stickyboy joined #gluster
15:40 Simmo http://ur1.ca/nxi2e
15:40 glusterbot Title: #275392 Fedora Project Pastebin (at ur1.ca)
15:40 Simmo thanks Jow :_)
15:40 Simmo * Joe
15:40 zhangjn joined #gluster
15:40 JoeJulian And you're writing to the mounted volume, not a brick, right?
15:41 Simmo I will pastebin the output of my df -h
15:41 JoeJulian The client can connect to both brick servers? No firewall?
15:41 Simmo (so, yes I'm writing in the mounted folder not the brick)
15:41 Simmo There's the default AWS EC2 firewall
15:42 Simmo I'll double check the firewall
15:42 Simmo and open the TCP/UDP connection
15:42 Simmo maybe I missed something there
15:42 JoeJulian @ports
15:42 glusterbot JoeJulian: glusterd's management port is 24007/tcp (also 24008/tcp if you use rdma). Bricks (glusterfsd) use 49152 & up since 3.4.0 (24009 & up previously). (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111 and 2049 since 3.4.
15:42 JoeJulian "gluster volume status" will tell you what port the brick is *currently* listening on (that can change).
15:44 livelace joined #gluster
15:46 Simmo Firewall is super open now.
15:46 Simmo And here the output of my partition mounted: http://ur1.ca/nxi5v
15:46 glusterbot Title: #275398 Fedora Project Pastebin (at ur1.ca)
15:47 Simmo (I think I messed up with the mounting)
15:48 lkoranda_ joined #gluster
15:48 marbu joined #gluster
15:48 Simmo And the gluster volume status output
15:49 Simmo http://ur1.ca/nxi7a
15:49 glusterbot Title: #275401 Fedora Project Pastebin (at ur1.ca)
15:49 Simmo there are a couple N/A
15:49 Simmo :- ?
15:49 csaba joined #gluster
15:49 JoeJulian That's probably correct. Several of the services do not listen.
15:51 Simmo Uh, ok
15:53 mbukatov joined #gluster
15:55 Simmo One question: when I mount the volume, I can use the same IP address (the "server" node), can't I? E.g. mount -t glusterfs <same-ip>:/<volume-name> /<path>
15:55 JoeJulian @mount server
15:55 glusterbot JoeJulian: (#1) The server specified is only used to retrieve the client volume definition. Once connected, the client connects to all the servers in the volume. See also @rrdns, or (#2) One caveat is that the clients never learn of any other management peers. If the client cannot communicate with the mount server, that client will not learn of any volume changes.
15:56 Simmo Thanks :) Actually, I remember having come across this description before
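
Given that caveat, a common mitigation is to name a second volfile server at mount time; a sketch, with hypothetical hostnames (the volume name cr0 is taken from Simmo's logs below):

    mount -t glusterfs -o backupvolfile-server=server2 server1:/cr0 /data
    # fstab equivalent:
    # server1:/cr0  /data  glusterfs  defaults,backupvolfile-server=server2  0 0
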
15:57 Simmo Which log file would be a good start? E.g. /var/log/glusterfs/data-<volume name>.log ?
15:57 JoeJulian yes
15:57 Simmo I found an interesting line:
15:57 JoeJulian Check to see that it's actually connecting to both bricks. If it is, a write should be written to both synchronously.
15:58 lkoranda joined #gluster
15:59 Simmo Should I worry about this line from that log:
15:59 Simmo [2015-10-06 15:19:48.631320] I [client-handshake.c:1474:client_setvolume_cbk] 0-cr0-client-1: Server and Client lk-version numbers are not same, reopening the fds
15:59 glusterbot Simmo: This is normal behavior and can safely be ignored.
15:59 Simmo nice bot : )
15:59 _maserati_ joined #gluster
16:01 JoeJulian :) Yeah, when a very common question is easily answered by regex, I like to do that.
16:01 Simmo it makes sense! :D
16:02 Simmo So, for example, when on the "client" node I create an empty file, then
16:02 Simmo I can see on the same node's log
16:02 Simmo the following:
16:03 Simmo [2015-10-06 16:02:17.488979] W [fuse-bridge.c:1236:fuse_err_cbk] 0-glusterfs-fuse: 76: REMOVEXATTR() /test-joe.txt => -1 (No data available)
16:04 Simmo it does not sound good, does it ? :/
16:05 JoeJulian Meh, it might just be because the file does not exist yet.
16:05 JoeJulian It's just a " W "arning.
16:05 Simmo ah! ok
16:08 Simmo I cannot really understand where the problem is
16:08 Simmo also "gluster peer status"
16:08 Simmo output looks good
16:08 bluenemo joined #gluster
16:08 Simmo In a configuration with 2 servers, "peer status" should report "Number of Peers: 1"
16:09 Simmo on each node
16:09 Simmo with the hostname showing
16:09 Simmo the other node's IP
16:09 Simmo right ?
16:16 ramky joined #gluster
16:18 JoeJulian right
16:19 JoeJulian Paste the client log
16:21 Simmo http://pastebin.com/PjMeQV04
16:21 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
16:21 Simmo ah so, sorry : )
16:21 Simmo http://fpaste.org/275422/
16:21 glusterbot Title: #275422 Fedora Project Pastebin (at fpaste.org)
16:22 kr0w left #gluster
16:22 Simmo I need to run now
16:22 Simmo hopefully I will be more lucky tomorrow
16:27 arcolife joined #gluster
16:30 cholcombe joined #gluster
16:33 F2Knight joined #gluster
16:35 mhulsman joined #gluster
16:39 togdon joined #gluster
16:42 bdiehr I'm following the guide here: http://blog.celingest.com/en/2013/03/21/glusterfs-in-aws/ and I'm having some issues with the last bash command, `chkconfig netfs on`.
16:43 bdiehr Since I'm on Ubuntu I'm running it as `update-rc.d`, though I get the following:
16:43 bdiehr root@ip-10-0-0-127:/home/webs# update-rc.d netfs on
16:43 bdiehr update-rc.d: /etc/init.d/netfs: file does not exist
16:43 bdiehr it expects netfs to exist in /etc/init.d/ but it doesn't. Does anyone know how to address this?
16:49 ira joined #gluster
16:50 JoeJulian That guide is for an EL-based distro like RHEL, CentOS, Scientific Linux, etc. Ubuntu handles its mounts differently.
16:54 mufa joined #gluster
16:58 bdiehr thanks Joe
16:58 bdiehr Do you think I should just scrap the entire server and start over?
16:59 JoeJulian I couldn't say. I would choose whatever distro you're most familiar with.
17:00 JoeJulian If you're not familiar with any, I personally am no help with ubuntu/debian, but I know any of the rpm based distros, and archlinux quite well.
17:00 JoeJulian Mostly, though, you should be able to just remove the "_netdev" from fstab and it *should* just work.
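
So the fstab entry from that EL-oriented guide, minus "_netdev", would be roughly (hostname and paths hypothetical):

    # /etc/fstab on Ubuntu; mount.glusterfs handles its own retries
    server1:/myvol  /mnt/gluster  glusterfs  defaults  0  0
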
17:07 bdiehr thanks, i'll look more into how Ubuntu mounts work and then look into what you suggested
17:14 jwd joined #gluster
17:19 nishanth joined #gluster
17:41 AceFacee JoeJulian: I responded back to your suggestion in the mailing list, I believe my issue is not related to a split-brain situation.
17:42 AceFacee I am going to set up another cluster using CentOS7's built-in repo, rather than using the latest repo found on gluster.org
17:42 AceFacee and see if I have the same behavior
17:46 bdiehr joined #gluster
17:56 JoeJulian AceFacee: I believe you are correct. I would look in the client log to see what the error is. If there's no clue there, the brick logs.
17:58 mpietersen joined #gluster
18:00 mpietersen joined #gluster
18:02 mpietersen joined #gluster
18:08 Slashman joined #gluster
18:09 RameshN joined #gluster
18:13 bdiehr joined #gluster
18:31 JPaul joined #gluster
18:33 JPaul joined #gluster
18:46 ayma hi, I'm trying to set up an HA ganesha-gluster cluster.  I am running gluster volume set all cluster.enable-shared-storage enable but it doesn't seem to be creating /var/run/gluster/shared_storage/ on one of the nodes in the replica.  I have a replica of 2
18:47 AceFacee do you have iptables or firewalld running on both nodes? if so, try shutting them off as a troubleshooting step
18:47 maveric_amitc_ joined #gluster
18:53 jobewan joined #gluster
18:53 ayma any ideas on how to debug...i've looked at the run-gluster-shared_storage.log and it says E [glusterfsd-mgmt.c:1604:mgmt_getspec_cbk] 0-mgmt: failed to fetch volume file (key:/gluster_shared_storage
18:53 ayma but I'm not sure what to look for next
18:55 ayma the gluster volume info returns saying that cluster.enable-shared-storage: enable
19:06 mufa how does HA ganesha-nfs perform in comparison with the fuse client? Is it worth setting up ganesha-nfs?
19:17 JoeJulian mufa: insufficient data to answer that question as it depends on your use case requirements.
19:20 JoeJulian ayma: that error message says it cannot find a volume with the name, "gluster_shared_storage" but I was under the impression it was supposed to be created automatically.
19:20 mufa JoeJulian: usual use case is 2 bricks in replica mode, one volume, some wordpress site(s) running on gluster
19:20 JoeJulian mufa: Do you need nfsv4+?
19:21 mufa no, nfs3 is fine, but I'd like having HA
19:21 JoeJulian Unless you use a pnfs client, you have the same ha either way. floating ip addresses.
19:22 Kelthar joined #gluster
19:22 mufa I don't get something here: I know HA is available by default with fuse, but not with normal NFS
19:23 JoeJulian Use ucarp or something similar and a floating IP.
19:23 mufa however, FUSE is slower than NFS due to lack of read caching
19:24 JoeJulian Your nfs connections go to that floating ip address.
19:25 mufa hmm, that sounds like an easy fix
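
A minimal ucarp invocation of the kind JoeJulian describes. It runs on both nodes; every address and the password below are placeholders, and the backup node would use its own --srcip plus a higher --advskew:

    ucarp --interface=eth0 --srcip=10.0.0.11 --vhid=1 --pass=secret \
          --addr=10.0.0.100 \
          --upscript=/etc/vip-up.sh --downscript=/etc/vip-down.sh
    # vip-up.sh typically just does: ip addr add 10.0.0.100/24 dev eth0
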
19:25 Kelthar Is there a way to verify what failed during volume creation? Like, I know it's extended attributes since that's what it says, but on two exact server replicas, one gives an error and the other doesn't
19:25 Kelthar (and this is freaking me out since I wiped both servers yesterday, executed exactly same commands on both, one works, other doesn't)
19:25 JoeJulian And actually, your use of fuse is slower due to lack of directory and stat caching and the fact that you try to open the same file 100 times for every page load.
19:26 mufa you are correct about lack of directory and stat caching, is there any solution?
19:26 JoeJulian Kelthar: Should be in the glusterd log (/var/log/glusterfs/etc-glusterfs-glusterd.vol.log) on the server that failed.
19:27 JoeJulian ,,(paste) it if you need another pair of eyes.
19:27 glusterbot For a simple way to paste output, install netcat (if it's not already) and pipe your output like: | nc termbin.com 9999
19:29 Pupeno joined #gluster
19:29 Kelthar Only tells me it failed on Volume Create, and that the error was setting extended attributes (and on glusterd_op_ac_stage_op, 0-management)
19:30 Kelthar Trying to figure out how should I go from here to pinpoint the issue which has been haunting me.
19:38 mhulsman joined #gluster
19:48 JoeJulian I'd need to actually look at the log to make a guess. Sometimes I have to read the source code to figure out what it's trying to do to make sense of why it's failing.
20:02 brian joined #gluster
20:03 Kelthar Exact error was: [glusterd-op-sm.c:5171:glusterd_op_ac_stage_op] 0-management: Stage failed on operation 'Volume Create', Status : -1
20:04 Kelthar Running latest 3.7.4, compiled from source
20:05 aaronott joined #gluster
20:06 bdiehr JoeJulian: I've given up on Ubuntu and moved to RHEL, though I'm having issues getting the glusterfs-server package to be found even after adding the EPEL repo and enabling it:
20:06 bdiehr https://paste.fedoraproject.org/275504/41618941/
20:06 glusterbot Title: #275504 Fedora Project Pastebin (at paste.fedoraproject.org)
20:06 JoeJulian @latest
20:06 glusterbot JoeJulian: The latest version is available at http://download.gluster.org/pub/gluster/glusterfs/LATEST/ . There is a .repo file for yum or see @ppa for ubuntu.
20:07 kkeithley_ If you're using RHEL you need to exclude glusterfs* from the RHEL channel(s).
20:07 bdiehr I'm a bit of a novice at this whole package manager ordeal, what would I look up to figure out how to use that?
20:09 Pupeno joined #gluster
20:11 bdiehr actually I got it figured out, thanks!
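
For the record, kkeithley_'s exclusion is a one-line addition to the RHEL repo definition (the file and stanza name vary with the subscription tooling in use):

    # append to the RHEL base channel stanza, e.g. in /etc/yum.repos.d/redhat.repo
    exclude=glusterfs*
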
20:11 DV joined #gluster
20:12 JoeJulian Oh, good. I was trying to find a better walkthrough for you.
20:12 bdiehr i ended up just pasting that file into where the comment suggested and it worked
20:13 bdiehr I thought that installing EPEL made that step unnecessary but I guess I was wrong
20:13 gem joined #gluster
20:14 JoeJulian Yeah, it can't be in EPEL because an (incompatible) version is shipped with RHEL.
20:20 aaronott joined #gluster
20:24 bdiehr makes sense
20:24 bdiehr I guess most of the documentation assumes a slightly better understanding of linux than I have
20:24 zhangjn_ joined #gluster
20:25 JoeJulian Yeah, it's common that a certain familiarity with linux is assumed before you start building complex clustered systems with it.
20:25 JoeJulian What is it you're trying to accomplish?
20:25 mufa JoeJulian: thank you for the ucarp suggestion, however it looks a bit buggy on CentOS 7; it either doesn't move the IP when the master goes down, or configures both nodes with the VIP
20:26 JoeJulian mufa: odd, I've not had that problem.
20:26 mufa JoeJulian: it's possible I'm doing something wrong
20:28 JoeJulian I know gluster's leaning toward Pacemaker+Corosync but that seemed a bit overkill for what I'm using it for.
20:31 bdiehr JoeJulian: A client has a huge filesystem that is a complicated mesh of files, and we need the whole thing available to run computations on the data in an on-demand fashion
20:31 bdiehr We've gone through several DevOps freelancers but none of them seem to know enough to get it working, or don't understand the requirements
20:32 bdiehr normally I stick to software development but i'm getting my feet wet trying to get it working myself
20:32 JoeJulian Oh, yeah, that's a pretty common use case.
20:32 bdiehr and i've always been interested in learning this stuff, so it's a great excuse :)
20:33 bowhunter joined #gluster
20:33 bdiehr okay phew, was concerned we were going about this a wrong way
20:33 JoeJulian Well, plug away. I'm trying to make some headway at $dayjob today despite being asked to write up a new system design proposal, so I'll be in and out.
20:34 mufa yeah, I am aware of Pacemaker+Corosync but I'd like to avoid killing a fly with a cannon
20:34 bdiehr enjoy, and thanks for all the help!
20:40 wolsen joined #gluster
20:40 PaulCuzner joined #gluster
20:44 maserati joined #gluster
20:45 armyriad joined #gluster
20:45 k-ma joined #gluster
20:45 ccha joined #gluster
20:46 B21956 joined #gluster
20:46 hchiramm joined #gluster
20:46 jockek joined #gluster
20:46 anoopcs joined #gluster
20:46 campee joined #gluster
20:46 msvbhat joined #gluster
20:46 squaly joined #gluster
20:46 the-me joined #gluster
20:46 crashmag joined #gluster
20:46 frostyfrog joined #gluster
20:46 xMopxShell joined #gluster
20:46 rich0dify joined #gluster
20:46 JonathanD joined #gluster
20:46 purpleidea joined #gluster
20:46 ws2k3 joined #gluster
20:46 lanning joined #gluster
20:46 VeggieMeat joined #gluster
20:46 sankarshan_away joined #gluster
20:46 jermudgeon joined #gluster
20:46 virusuy joined #gluster
20:46 yawkat joined #gluster
20:46 al joined #gluster
20:46 codex joined #gluster
20:47 VeggieMeat joined #gluster
20:47 xMopxShell joined #gluster
20:48 gildub joined #gluster
20:49 atalur joined #gluster
20:51 rideh joined #gluster
20:52 sadbox joined #gluster
20:55 Rapture joined #gluster
20:56 Rapture joined #gluster
20:56 l0uis joined #gluster
20:57 twisted` joined #gluster
20:57 Rapture joined #gluster
21:07 ayma when running ha-ganesha with gluster, are both replicas supposed to be running ganesha or just one?
21:08 aaronott joined #gluster
21:09 JoeJulian ayma: depends. If you want to be able to connect to ganesha on both, then both.
21:11 ayma hmm so I'm trying to follow the steps in this link:
21:11 ayma https://gluster.readthedocs.org/en/latest/Administrator%20Guide/Configuring%20HA%20NFS%20Server/
21:11 glusterbot Title: Configuring HA NFS server - Gluster Docs (at gluster.readthedocs.org)
21:12 ayma along with the slides here: http://www.slideshare.net/SoumyaKoduri/high-49117846 from this youtube video: https://www.youtube.com/watch?v=Z4mvTQC-efM
21:12 glusterbot Title: Highly Available Active-Active NFS server on GlusterFS Storage system (at www.slideshare.net)
21:12 JoeJulian ayma: are you using corosync+pacemaker?
21:13 ayma i manually created the gluster_shared_storage gluster volume and mounted it on /var/run/gluster/shared_storage
21:13 JoeJulian Because that guide requires those.
21:13 ayma yep
21:13 ayma have corosync + pacemaker installed
21:14 JoeJulian I got as far as finding out I had to use those and then stopped following that page and just did it by hand using ucarp.
21:14 ayma used this link to set up: http://clusterlabs.org/doc/en-US/Pacemaker/1.1/html/Clusters_from_Scratch/_configuring_corosync.html
21:14 glusterbot Title: 2.5.3. Configuring Corosync (at clusterlabs.org)
21:14 ayma what is ucarp?
21:15 JoeJulian lightweight floating ip address tool.
21:15 JoeJulian @lucky ucarp
21:15 glusterbot JoeJulian: http://www.pureftpd.org/project/ucarp
21:16 ayma okay
21:16 ayma any instructions on how to configure by hand using ucarp?
21:17 ayma does ucarp scale out well?
21:17 skylar joined #gluster
21:18 JoeJulian Not sure. I'm only ever going to be using it with 2 servers. My next install will have pnfs clients so I won't have to worry about that one.
21:18 ayma but i guess my real issue is that i'm still struggling with the "gluster nfs-ganesha enable" cmd where it seems like things have gotten a bit confused
21:18 JoeJulian check the glusterd logs.
21:18 ayma now one node in the replica is showing nfs-ganesha enable and the other is not
21:18 JoeJulian eww
21:19 JoeJulian If you can figure out how to get back in that state, please file a bug report.
21:19 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
21:20 ayma okay will do thanks
21:21 ayma the logs just say that gluster nfs-ganesha enable is failing because E [MSGID: 106153] [glusterd-syncop.c:113:gd_collate_errors] 0-glusterd: Staging failed on deploy. Error: nfs-ganesha is already enabled.
21:21 ayma I'll see if I can get out of this state and then open a bug
21:23 JoeJulian You can probably get out of that state by stopping both glusterd then looking through /var/lib/glusterd for the option in a configuration file.
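
A sketch of that cleanup path (the exact file holding the option can differ by version):

    service glusterd stop                      # on both nodes
    grep -r "nfs-ganesha" /var/lib/glusterd/
    # edit the matching option line(s) so both nodes agree, then:
    service glusterd start
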
21:29 Pupeno joined #gluster
21:38 stickyboy joined #gluster
21:38 Pupeno joined #gluster
22:02 javi404 joined #gluster
22:04 aaronott joined #gluster
22:10 edwardm61 joined #gluster
22:13 badone joined #gluster
22:14 aaronott joined #gluster
22:24 togdon joined #gluster
22:32 skoduri joined #gluster
22:34 fyxim joined #gluster
22:34 csim joined #gluster
22:40 samsaffron___ joined #gluster
22:43 bdiehr joined #gluster
22:45 billputer joined #gluster
22:53 aaronott joined #gluster
23:02 luis_silva joined #gluster
23:21 twisted` joined #gluster
23:28 plarsen joined #gluster
23:52 zhangjn joined #gluster
