
IRC log for #gluster, 2015-10-05


All times shown according to UTC.

Time Nick Message
00:05 Mr_Psmith joined #gluster
00:09 julim joined #gluster
00:10 ndk joined #gluster
00:19 shyam joined #gluster
00:22 daMaestro joined #gluster
00:57 vimal joined #gluster
01:21 lcurtis joined #gluster
01:31 Lee1092 joined #gluster
01:37 nangthang joined #gluster
01:53 harish_ joined #gluster
02:22 maveric_amitc_ joined #gluster
02:23 msciciel joined #gluster
02:23 nangthang joined #gluster
02:46 kshlm joined #gluster
02:47 bharata-rao joined #gluster
02:50 kovshenin joined #gluster
02:51 kovsheni_ joined #gluster
02:59 maveric_amitc_ joined #gluster
03:03 julim joined #gluster
03:14 maveric_amitc_ joined #gluster
03:36 nishanth joined #gluster
03:38 kovshenin joined #gluster
03:39 [7] joined #gluster
03:39 stickyboy joined #gluster
03:44 ramteid joined #gluster
03:52 haomaiwa_ joined #gluster
03:58 nbalacha joined #gluster
03:59 neha_ joined #gluster
04:00 kotreshhr joined #gluster
04:01 haomaiwa_ joined #gluster
04:06 vmallika joined #gluster
04:10 kdhananjay joined #gluster
04:21 yazhini joined #gluster
04:22 gem joined #gluster
04:26 shubhendu joined #gluster
04:37 RameshN joined #gluster
04:41 shubhendu joined #gluster
04:54 sakshi joined #gluster
04:54 pppp joined #gluster
05:01 haomaiwa_ joined #gluster
05:06 neha_ joined #gluster
05:08 ramteid joined #gluster
05:16 ndarshan joined #gluster
05:20 F2Knight joined #gluster
05:25 Bhaskarakiran joined #gluster
05:33 jiffin joined #gluster
05:35 ashiq joined #gluster
05:35 Bhaskarakiran joined #gluster
05:36 jwaibel joined #gluster
05:37 kshlm joined #gluster
05:40 karnan joined #gluster
05:40 HemanthaSKota joined #gluster
05:42 ppai joined #gluster
05:48 poornimag joined #gluster
05:51 skoduri joined #gluster
05:51 vmallika joined #gluster
05:53 kanagaraj joined #gluster
05:54 dusmant joined #gluster
05:57 rafi joined #gluster
06:00 moogyver joined #gluster
06:01 haomaiwa_ joined #gluster
06:06 R0ok_ joined #gluster
06:07 shubhendu joined #gluster
06:10 neha_ joined #gluster
06:10 Manikandan joined #gluster
06:10 ramky joined #gluster
06:11 deepakcs joined #gluster
06:12 ashiq joined #gluster
06:12 hgowtham joined #gluster
06:15 atalur joined #gluster
06:24 shubhendu joined #gluster
06:30 jtux joined #gluster
06:30 mhulsman joined #gluster
06:33 anil joined #gluster
06:38 rjoseph joined #gluster
06:39 hchiramm joined #gluster
06:41 deepakcs joined #gluster
06:47 LebedevRI joined #gluster
06:50 spalai joined #gluster
06:55 DV joined #gluster
06:56 nangthang joined #gluster
06:57 jvandewege joined #gluster
07:00 Manikandan joined #gluster
07:01 haomaiwa_ joined #gluster
07:03 [Enrico] joined #gluster
07:03 [Enrico] joined #gluster
07:10 hagarth joined #gluster
07:14 [Enrico] joined #gluster
07:31 hagarth joined #gluster
07:34 fsimonce joined #gluster
07:37 Manikandan joined #gluster
07:39 mufa joined #gluster
07:40 Akee joined #gluster
07:43 DV joined #gluster
07:58 prg3 joined #gluster
07:58 hagarth joined #gluster
08:01 Raide joined #gluster
08:01 haomaiwa_ joined #gluster
08:13 Akee joined #gluster
08:15 ctria joined #gluster
08:18 arcolife joined #gluster
08:22 muneerse2 joined #gluster
08:23 DV joined #gluster
08:27 Slashman joined #gluster
08:35 mbukatov joined #gluster
08:37 jamesc joined #gluster
08:51 Raide joined #gluster
08:57 Saravana_ joined #gluster
09:01 haomaiwa_ joined #gluster
09:05 pg joined #gluster
09:06 hagarth joined #gluster
09:06 rajeshj joined #gluster
09:17 dusmant joined #gluster
09:26 pg joined #gluster
09:38 kovsheni_ joined #gluster
09:39 ppai joined #gluster
09:40 stickyboy joined #gluster
09:50 Raide joined #gluster
09:55 bluenemo joined #gluster
10:00 haomaiwa_ joined #gluster
10:01 haomaiwa_ joined #gluster
10:03 morse joined #gluster
10:16 ndarshan joined #gluster
10:25 ndarshan joined #gluster
10:26 shubhendu joined #gluster
10:27 nishanth joined #gluster
10:37 ppai joined #gluster
10:48 ccoffey joined #gluster
10:48 sc0 joined #gluster
10:52 Manikandan joined #gluster
10:58 nbalacha joined #gluster
11:01 haomaiwa_ joined #gluster
11:05 haomaiwang joined #gluster
11:40 bfoster joined #gluster
11:42 ppai joined #gluster
11:42 rjoseph joined #gluster
11:43 RedW joined #gluster
11:51 Trefex joined #gluster
11:55 Philambdo joined #gluster
12:17 rjoseph joined #gluster
12:18 ndarshan joined #gluster
12:20 unclemarc joined #gluster
12:20 nishanth joined #gluster
12:30 B21956 joined #gluster
12:33 spalai left #gluster
12:34 firemanxbr joined #gluster
12:43 Philambdo joined #gluster
12:44 Mr_Psmith joined #gluster
12:46 arcolife joined #gluster
12:48 hagarth joined #gluster
12:54 dlambrig joined #gluster
12:54 shubhendu joined #gluster
12:55 Pupeno joined #gluster
12:56 julim joined #gluster
13:03 Simmo joined #gluster
13:04 Simmo Hi All : )
13:05 Simmo I'm trying to understand the naming conventions for bricks, dirs, partions
13:05 Simmo * partitions
13:05 Simmo etc
13:05 Simmo But having some difficulties..
13:06 Simmo In my example, I have an application called "Cloud Recognition". For its business Cloud Recognition uses (also) two directories containing several files.
13:07 Simmo Those directories are called "cloudarchive" and "targetscollection".
13:07 Simmo And those are the directories I would like to keep in sync on different nodes.
13:07 Simmo How would you name the bricks ?
13:09 julim_ joined #gluster
13:09 Simmo : )
13:12 clutchk joined #gluster
13:15 ayma joined #gluster
13:16 dgandhi joined #gluster
13:18 Manikandan joined #gluster
13:19 kkeithley 'brick' is just a nickname for server+path. If you were going to export a directory from an NFS server, what would you name the directory? Eggs-and-Bacon maybe?  Then for a replica-2 gluster system I'd create a directory on each server called /bricks/Eggs-and-Bacon.  I'd mount a volume (e.g. /dev/sdc1) at /bricks/Eggs-and-Bacon on each server.  Then I'd create my gluster replica volume accordingly.  E.g. `gluster volume create replica 2
13:23 kkeithley N.B. we usually suggest that you use a subdir, not the top-level dir of the brick file system.  IOW `mkdir /bricks/Eggs-and-Bacon/volume` on each server. Then `gluster volume create replica 2 $server1:/bricks/Eggs-and-Bacon/volume $server2:/bricks/Eggs-and-Bacon/volume`
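
For reference, a minimal sketch of the sequence kkeithley describes, with placeholder device, paths, hostnames and volume name (none of these come from the log):

    # on each server: put a filesystem on the brick device and mount it
    mkfs.xfs -i size=512 /dev/sdc1
    mkdir -p /bricks/Eggs-and-Bacon
    mount /dev/sdc1 /bricks/Eggs-and-Bacon
    # use a subdirectory, not the top-level dir of the brick filesystem
    mkdir /bricks/Eggs-and-Bacon/volume

    # on one server: create and start the replica-2 volume
    gluster volume create myvol replica 2 \
        server1:/bricks/Eggs-and-Bacon/volume \
        server2:/bricks/Eggs-and-Bacon/volume
    gluster volume start myvol
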
13:24 shyam joined #gluster
13:26 luis_silva joined #gluster
13:29 neha_ joined #gluster
13:31 Simmo Let me try : )
13:31 nbalacha joined #gluster
13:41 Simmo I think I'll simply have /bricks/cloudrecognition/volume
13:42 Simmo then I'll mount it in whatever dir (on this step mount -t glusterfs $storage_server:/$volume /mnt)
13:42 Simmo and in the /mnt I can create my dirs "cloudarchive" and "targetscollection"
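
Spelled out, that plan would look roughly like this (server and volume names are placeholders):

    # on a client (which can also be one of the servers): mount the volume, not a brick path
    mkdir -p /mnt/cloudrecognition
    mount -t glusterfs storage1:/cloudrecognition /mnt/cloudrecognition
    # the application directories live inside the mounted volume and get replicated
    mkdir -p /mnt/cloudrecognition/cloudarchive /mnt/cloudrecognition/targetscollection
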
13:46 harold joined #gluster
13:48 harish_ joined #gluster
13:49 Simmo I think I understood (another) mistake: I could/should partition the hard disk so that I can create different gluster volumes in a more regular 1-to-1 mapping..
13:51 dlambrig left #gluster
13:54 haomaiwa_ joined #gluster
13:57 mpietersen joined #gluster
13:58 shubhendu joined #gluster
14:01 vmallika joined #gluster
14:01 haomaiwang joined #gluster
14:02 thoht joined #gluster
14:02 thoht hi
14:02 glusterbot thoht: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
14:02 kdhananjay joined #gluster
14:03 thoht I'd like to know if it is possible to use glusterfs to replicate a volume between 2 servers not in the same DC (meaning using public IPs)
14:07 Simmo Hi thoht.. I think you need to refer to "Geo Replication" setup
14:07 Simmo https://gluster.readthedocs.org/en/latest/Administrator%20Guide/Geo%20Replication/
14:07 glusterbot Title: Geo Replication - Gluster Docs (at gluster.readthedocs.org)
14:07 thoht Simmo: for geo replication, don't I need an existing gluster replication between 2 nodes on the same network already in place ?
14:08 Simmo pls, keep my advice with doubt because I'm new too : )
14:08 Simmo have no idea : )
14:09 patryck Simmo: http://knowyourmeme.com/memes/i-have-no-idea-what-im-doing
14:09 glusterbot Title: I Have No Idea What I'm Doing | Know Your Meme (at knowyourmeme.com)
14:09 dlambrig joined #gluster
14:10 Simmo :-/
14:10 dlambrig left #gluster
14:10 mreamy joined #gluster
14:12 kkeithley I suggest you read the doc that Simmo mentioned.  Yes, you need a gluster server in your local DC and the remote DC
14:13 thoht kkeithley: yes i m at this point, i got 2 devices; i installed glusterfs-geo-replication 3.7.4
14:13 thoht but there is a prerequisite to have an existing volume
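
For context, once a volume exists on both sides, creating a 3.7 geo-replication session looks roughly like this (volume and host names are placeholders; the Geo Replication guide linked above is the authoritative reference):

    # on a master node: generate and distribute the pem keys
    gluster system:: execute gsec_create
    # create the session against the already-created slave volume, then start it
    gluster volume geo-replication mastervol slavehost::slavevol create push-pem
    gluster volume geo-replication mastervol slavehost::slavevol start
    gluster volume geo-replication mastervol slavehost::slavevol status
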
14:17 Lee1092 joined #gluster
14:19 lcurtis joined #gluster
14:22 maserati joined #gluster
14:25 Philambdo joined #gluster
14:26 lcurtis joined #gluster
14:33 bowhunter joined #gluster
14:35 thoht kkeithley: ok I created a volume on both nodes; but when I now start the replication it fails
14:36 thoht Unable to store slave volume name.
14:36 thoht this is a weird error
14:37 asrivast joined #gluster
14:39 JoeJulian ndevos: If you're still around and could review and comment on https://botbot.me/freenode/gluster/2015-10-02/?msg=51043955&page=3 last Friday, I'd appreciate any input.
14:39 glusterbot Title: IRC Logs for #gluster | BotBot.me [o__o] (at botbot.me)
14:45 rafi ndevos: ping
14:45 glusterbot rafi: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
14:46 rejy joined #gluster
14:46 DV joined #gluster
14:50 rjoseph joined #gluster
14:50 pppp joined #gluster
14:51 dlambrig_ joined #gluster
14:53 ro_ joined #gluster
14:57 skylar joined #gluster
14:57 gem joined #gluster
14:59 nbalacha joined #gluster
15:01 haomaiwa_ joined #gluster
15:03 ccoffey I had two servers. gluster was turned off on one and the brick wiped. The data was rsync'd (regular rsync) back.  I'd like to turn gluster back on on the 2nd node and enable self heal. Is this wise?
15:03 thoht any way to fix the following error when doing "gluster peer status": peer status: failed
15:03 thoht daemon is running
15:04 Mr_Psmith joined #gluster
15:05 JoeJulian ccoffey: no
15:06 jamesc joined #gluster
15:06 JoeJulian restarting all glusterd should work, and is harmless.
15:06 jamesc does anyone know how to turn nfs.log logging off!
15:06 wushudoin joined #gluster
15:07 JoeJulian ccoffey: You've written files to a brick without any of the correct extended attribute metadata, the most important of which is the gfid. This will cause mismatches and split-brain.
15:08 thoht it doesn t
15:08 ccoffey @joejulian. Thought that might be the case. For my own reference, how do I view the extended attributes on a file?
15:09 JoeJulian jamesc: ln -s /dev/null /var/log/glusterfs/nfs.log maybe?
15:09 JoeJulian @extended attributes
15:09 glusterbot JoeJulian: (#1) To read the extended attributes on the server: getfattr -m .  -d -e hex {filename}, or (#2) For more information on how GlusterFS uses extended attributes, see this article: http://pl.atyp.us/hekafs.org/index.php/2011/04/glusterfs-extended-attributes/
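
As an illustration, run against a file on the brick itself, not on the client mount (paths and volume name are placeholders; the exact attribute set depends on the volume type):

    getfattr -m . -d -e hex /bricks/myvol/brick/some/file
    # expect, among others:
    #   trusted.gfid                 - gluster's file ID
    #   trusted.afr.myvol-client-N   - replication / pending-heal counters
    # files copied onto a brick with plain rsync lack these attributes, which is
    # why healing from such a brick leads to gfid mismatches and split-brain
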
15:09 jamesc JoeJulian: Thanks seems a bit drastic though
15:10 JoeJulian jamesc: How many servers?
15:10 ccoffey @joejulian, thanks
15:10 JoeJulian jamesc: You could also check the glusterd logs to see why it failed.
15:10 jamesc two
15:10 JoeJulian That's not really all that drastic then.
15:10 JoeJulian If you had 100, sure.
15:11 ro_ hey guys - when I create a gluster volume and start it, the bricks start fine and stay online, the self-heal daemons all appear to be running, but all of the NFS servers across the cluster fail to start. Any tips on what I should be looking into?
15:12 JoeJulian @nfs
15:12 glusterbot JoeJulian: To mount via nfs, most distros require the options, tcp,vers=3 -- Also an rpc port mapper (like rpcbind in EL distributions) should be running on the server, and the kernel nfs server (nfsd) should be disabled
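
In other words, a typical gluster-NFS mount from a client (hostname and volume name are placeholders):

    # rpcbind must be running on the server and the kernel nfsd must be disabled
    mount -t nfs -o tcp,vers=3 server1:/myvol /mnt/myvol
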
15:15 asrivast joined #gluster
15:26 jwd joined #gluster
15:28 thoht gluster peer probe gluster1 <== command is hanging, then I got a timeout; previously it was working: what could be the issue ?
15:28 mufa Is there a way to get nfs-ganesha-gluster installed via rpm on a centos 7 box?
15:31 JoeJulian mufa: isn't nfs-ganesha in epel?
15:31 bdiehr joined #gluster
15:32 mufa It is, but https://gluster.readthedocs.org/en/latest/Administrator%20Guide/Configuring%20HA%20NFS%20Server/ requires both nfs-ganesha and nfs-ganesh-gluster to be installed
15:32 glusterbot Title: Configuring HA NFS server - Gluster Docs (at gluster.readthedocs.org)
15:32 JoeJulian thoht: firewall?
15:32 bdiehr Hi all! I'm new to using gluster - I was wondering if anyone has successfully used gluster in conjunction with Amazon ECS (EC2 Container Service)
15:34 JoeJulian mufa: http://download.gluster.org/pub/gluster/glusterfs/nfs-ganesha/2.2.0/
15:34 glusterbot Title: Index of /pub/gluster/glusterfs/nfs-ganesha/2.2.0 (at download.gluster.org)
15:34 JoeJulian bdiehr: yep, lots of people.
15:35 mufa JoeJulian: thanks
15:36 thoht any way to fix that: gluster pool list returns pool list: failed (daemon is running as well; can also see my volume)
15:37 bdiehr JoeJulian: That's encouraging, I was looking for resources specific to ECS though I'm having issues finding anything. Am I right thinking I can just follow along with an EC2 guide and use those instances as my ECS cluster instances?
15:37 JoeJulian semiosis: ^
15:38 jwd joined #gluster
15:38 JoeJulian @lucky semiosis ec2 guide
15:38 glusterbot JoeJulian: https://aws.amazon.com/documentation/ec2/
15:38 JoeJulian meh, not lucky today.
15:39 stickyboy joined #gluster
15:42 thoht jermudgeon: if it was the fw, shouldn't I see more info ?
15:42 bdiehr JoeJulian: Trying to search for the term 'semiosis ec2 guide' / ' Louis Zuckerman gluster ec2' without much luck
15:43 spcmastertim joined #gluster
15:45 cholcombe joined #gluster
15:45 thoht i added a peer: peer probe: success. Host gluster1 port 24007 already in peer list
15:45 thoht and still have issue when doing gluster pool list
15:45 thoht got the msg : pool list: failed
15:47 Bhaskarakiran joined #gluster
15:47 thoht and the log keeps repeating in a loop: W [socket.c:869:__socket_keepalive] 0-socket: failed to set TCP_USER_TIMEOUT -1000 on socket 7, Invalid argument
15:47 thoht 0-management: Lock for vol ovirt not held
15:51 haomaiwa_ joined #gluster
15:51 muneerse joined #gluster
15:56 pppp joined #gluster
16:00 moogyver joined #gluster
16:01 haomaiwa_ joined #gluster
16:06 jamesc joined #gluster
16:08 gem joined #gluster
16:15 unicky joined #gluster
16:18 rejy joined #gluster
16:20 xMopxShell i've been testing gluster between a few machines, and I ran into an issue - More often than not, when I mount the gluster filesystem, i can only see files from 2 of my 3 test nodes
16:21 mufa left #gluster
16:21 xMopxShell The missing files are usually ones on the same node that i'm mounting the gluster volume from, but not always
16:21 xMopxShell Any ideas?
16:22 xMopxShell I'm using distributed, no replication
16:22 shubhendu joined #gluster
16:23 thoht xMopxShell: check the log
16:23 xMopxShell i havent checked that but if i check peer status before mounting they're all there
16:24 xMopxShell and if i remount it, all the files are there
16:26 jwaibel joined #gluster
16:28 jiffin joined #gluster
16:31 F2Knight joined #gluster
16:32 asrivast joined #gluster
16:40 F2Knight_ joined #gluster
16:45 haomai___ joined #gluster
16:51 aaronott joined #gluster
17:01 haomaiwang joined #gluster
17:12 jwd joined #gluster
17:12 F2Knight joined #gluster
17:30 kayn joined #gluster
17:36 kayn hi guys, I would like to ask what is the best practice to restore a failed node in a replica?
17:36 kayn I got into a weird state: I re-bootstrapped the machine again with puppet, which left both servers in the replica peered but the gluster servers not communicating.
17:36 kayn So I stopped gluster, fixed the extended file attributes, rsynced the data from the healthy node with the following params "-aAXv --numeric-ids --progress --human-readable --exclude=.glusterfs --relative" and started gluster.
17:36 kayn Then everything looked fine
17:38 kayn s/$/but the data were accessible only with the full path. When I tried to do "ls" in half of the directories, no output came.
17:39 Bhaskarakiran_ joined #gluster
17:48 asrivast joined #gluster
17:50 Bhaskarakiran joined #gluster
17:53 Pupeno_ joined #gluster
17:53 Philambdo joined #gluster
17:55 mhulsman joined #gluster
17:58 unicky joined #gluster
18:01 B21956 joined #gluster
18:05 unicky joined #gluster
18:16 unicky joined #gluster
18:16 mufa joined #gluster
18:18 shaunm joined #gluster
18:32 rafi joined #gluster
18:35 unicky joined #gluster
18:41 firemanxbr joined #gluster
18:44 p8952 joined #gluster
18:45 afics joined #gluster
18:47 luis_silva Hello all, One of my gluster nodes is showing an IP instead of a resolving hostname. I checked my DNS server and everything looks good. Is there a way to fix this or is this even an issue?
18:48 diegows joined #gluster
18:50 asrivast joined #gluster
18:50 Rapture joined #gluster
18:51 Rapture joined #gluster
19:01 ayma joined #gluster
19:02 mhulsman joined #gluster
19:04 JoeJulian probe it by name from another peer.
19:08 asrivast joined #gluster
19:10 livelace joined #gluster
19:13 shubhendu joined #gluster
19:14 wolsen joined #gluster
19:16 JoeJulian xMopxShell: if you didn't mess with the xattrs on the good source brick, I'd wipe the new one and let self-heal do the copying. Otherwise, if you did screw up the good source and both bricks are now identical, I'd wipe the right-hand brick (as in gluster volume create $vol replica 2 lefthand:/brick righthand:/brick) and then do a heal...full.
19:17 JoeJulian s/xMopxShell/kayn/
19:17 glusterbot What JoeJulian meant to say was: kayn: if you didn't mess with the xattrs on the good source brick, I'd wipe the new one and let self-heal do the copying. Otherwise, if you did screw up the good source and both bricks are now identical, I'd wipe the right-hand brick (as in gluster volume create $vol replica 2 lefthand:/brick righthand:/brick) and then do a heal...full.
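
The heal part of that advice, sketched with a placeholder volume name (the brick wipe itself is left out on purpose; check the replace-brick/heal docs before doing this on real data):

    # with the wiped brick back in the volume, trigger a full self-heal from the good copy
    gluster volume heal myvol full
    # and watch its progress
    gluster volume heal myvol info
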
19:28 aaronott I'm looking for a good reference for performance tuning (v3.5.6). I currently have a 2 node replicated cluster and am seeing a speed of 58Mb/s when testing through the client using dd. I must apologize first, I'm not as familiar with Gluster as I'd like to be.
19:32 Philambdo joined #gluster
19:34 thoht I created a brick replica volume between 2 nodes; to write to this volume from these nodes themselves, is it mandatory to perform a mount -t gluster localhost/volume /path ?
19:36 dlambrig joined #gluster
19:42 JoeJulian aaronott: How does dd compare with your expected use case?
19:44 JoeJulian thoht: You must use the *volume* through a client mount. A brick is used by glusterfs to create the volume. If I'm understanding your question correctly, you'll want to mount "localhost:myvol /mnt/myvol" where your brick is *not* on /mnt/myvol.
19:48 thoht JoeJulian: that's correct, i mounted localhost:/brick /mnt to write in /mnt
19:54 JoeJulian no, not "brick" "volume"
19:54 JoeJulian unless you named your volume "brick"
19:56 JoeJulian "gluster volume create myvol replica 2 server{1,2):/srv/brick1" would create a volume named myvol. You would mount that with "mount -t glusterfs server1:myvol /mnt"
19:57 aaronott JoeJulian: in the expected use case there are a large number of files written, some small, some large. These files are then read and used by other applications
19:57 JoeJulian So that sounds like a no.
19:57 aaronott so my using dd is just testing a single file write
19:58 aaronott JoeJulian ?
19:58 JoeJulian Tuning is done to optimize for a specific use case, optimizing for one single use at the expense of others. Yours sounds rather general in which case the defaults are usually best.
19:59 JoeJulian So, going back to 58Mb/s: are you using sufficiently large block sizes to fill your tcp packets?
20:00 aaronott dd if=/dev/zero of=test.img bs=256M count=4
20:01 JoeJulian And is that megabits or megabytes? If it's megabytes, that's about half a gigabit which, if you're doing replica 2, suggests you may be filling a gigabit connection.
20:01 JoeJulian If it's truly Mb then I'm at a loss.
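
One way to make those dd numbers more meaningful is to take the page cache out of the picture, so the reported rate reflects what actually went over the wire (paths and sizes below are just examples; the flags are standard GNU dd options):

    # write 1 GiB in 1 MiB blocks and include the final flush in the timing
    dd if=/dev/zero of=/mnt/myvol/test.img bs=1M count=1024 conv=fdatasync
    # or bypass the page cache entirely (if the client mount supports O_DIRECT)
    dd if=/dev/zero of=/mnt/myvol/test.img bs=1M count=1024 oflag=direct
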
20:02 aaronott I have a 10G connection between the nodes as well
20:02 JoeJulian Check iperf to make sure it's not broken.
20:03 aaronott iperf shows I can get 7.1Gbits/Second between the pair
20:03 JoeJulian hmm
20:04 JoeJulian Only one network path?
20:04 JoeJulian ie, could the server hostnames be resolving to another network connector or a routed connection?
20:05 aaronott hmmm…  I don't think so but I'm not sure how I'd verify that
20:10 aaronott hostnames are in the /etc/hosts file on these pointed directly at the ip
20:11 JoeJulian "gluster volume info $vol" will show you the hostnames ( or ip addresses if you did it wrong ;) ). Ping the hostnames from the client to check they're resolving as you expect. Use "ip route get" to check the route if there's a chance to use a wrong one.
20:11 JoeJulian Be aware of the potential confusion regarding the ,,(mount server)
20:11 glusterbot (#1) The server specified is only used to retrieve the client volume definition. Once connected, the client connects to all the servers in the volume. See also @rrdns, or (#2) One caveat is that the clients never learn of any other management peers. If the client cannot communicate with the mount server, that client will not learn of any volume changes.
20:12 JoeJulian I've got to see if #2 is still true in 3.7.
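
Concretely, the checks JoeJulian lists (volume name and address are placeholders):

    gluster volume info myvol     # bricks should be listed by hostname, not IP
    ping -c 1 server1             # does the name resolve to the address you expect?
    ip route get 10.0.0.11        # which route/interface will the traffic actually take?
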
20:13 _Bryan_ joined #gluster
20:14 DV joined #gluster
20:16 aaronott Thanks JoeJulian.  looks like the client is able to ping and resolve correctly, "ip route get" looks like there is a single route to each as well
20:18 JoeJulian I'm guessing your clients are also your servers?
20:18 lcurtis_ joined #gluster
20:19 aaronott the servers are clients but there are other clients as well
20:20 aaronott testing directly on the server client…  the speed is better leading me to believe there is an issue on one of the clients I'm testing… but I'm getting about 200Mb/s there
20:20 aaronott is that an expected throughput?
20:21 aaronott testing outside of gluster (understandably not the same) I'm getting ~ 2.0Gb/s
20:22 JoeJulian 1.6Gbps, not quite what I would hope for, no. You should be able to fill your network.
20:23 JoeJulian Unless you're doing replica 6.
20:23 aaronott heh… nope, just 2
20:26 _maserati_ joined #gluster
20:31 JoeJulian Someday I need to detail the cost and maximum throughput of context swaps in relation to cpu and cache speeds.
20:32 _maserati_ joined #gluster
20:32 JoeJulian Occasionally I see people in here with these issues and I've never had a satisfactory diagnosis.
20:37 aaronott Hmmm…  that's a good point https://pastee.org/7up5c seems like context switching is getting up there
20:37 glusterbot Title: Paste: 7up5c (at pastee.org)
20:40 bdiehr Does anyone here have experience installing glusterfs-server on RHEL?
20:40 bdiehr http://download.gluster.org/pub/gluster/glusterfs/3.7/LATEST/EPEL.repo/epel-2015.03/noarch/repodata/repomd.xml: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 404 Not Found"
20:40 bdiehr Trying other mirror.
20:40 bdiehr No package glusterfs-server available.
20:40 bdiehr Error: Nothing to do
20:40 bdiehr that's what i'm getting when I follow the instructions here: http://www.gluster.org/community/documentation/index.php/Getting_started_install
20:40 bdiehr under the 'For Red Hat/CentOS' section
20:44 JoeJulian bdiehr: releasever = 2015.03 ?
20:46 bdiehr I'm unsure what that question means?
20:47 JoeJulian Ah, you're using an amazon linux ami it seems.
20:48 JoeJulian Looks like someone has added, "releasever=2015.03" in /etc/yum.conf
20:48 bdiehr yes
20:49 bdiehr [ec2-user@ip-10-0-0-132 ~]$ cat /etc/os-release
20:49 bdiehr NAME="Amazon Linux AMI"
20:49 bdiehr VERSION="2015.03"
20:49 bdiehr ID="amzn"
20:49 bdiehr ID_LIKE="rhel fedora"
20:49 bdiehr VERSION_ID="2015.03"
20:49 bdiehr PRETTY_NAME="Amazon Linux AMI 2015.03"
20:49 JoeJulian Since that's non-standard, it messes up the path expansion.
20:49 bdiehr ANSI_COLOR="0;33"
20:49 bdiehr CPE_NAME="cpe:/o:amazon:linux:2015.03:ga"
20:49 bdiehr HOME_URL="http://aws.amazon.com/amazon-linux-ami/"
20:49 JoeJulian @paste
20:49 glusterbot JoeJulian: For a simple way to paste output, install netcat (if it's not already) and pipe your output like: | nc termbin.com 9999
20:50 JoeJulian We should probably have symlinks to support that, kkeithley ?
20:51 JoeJulian bdiehr: For now, edit the .repo file and replace "$releasever" with "7"
20:51 bdiehr thanks, i'll try that
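
One way to do that edit (the .repo filename is a guess based on the repo id that shows up in the errors below; adjust it to whatever file the gluster repo actually landed in):

    # replace the non-standard $releasever (2015.03) with the EL major version
    sed -i 's/\$releasever/7/g' /etc/yum.repos.d/glusterfs-epel.repo
    yum clean metadata
    yum install glusterfs-server
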
20:51 skoduri joined #gluster
20:55 bdiehr awesome that solution worked, thanks @JoeJulian
20:56 bdiehr --> Finished Dependency Resolution
20:56 bdiehr Error: Package: glusterfs-server-3.7.4-2.el7.x86_64 (glusterfs-epel)
20:56 bdiehr Requires: liburcu-bp.so.1()(64bit)
20:56 bdiehr Error: Package: glusterfs-libs-3.7.4-2.el7.x86_64 (glusterfs-epel)
20:56 bdiehr Requires: rsyslog-mmjsonparse
20:56 bdiehr Error: Package: glusterfs-server-3.7.4-2.el7.x86_64 (glusterfs-epel)
20:56 bdiehr Requires: systemd-units
20:56 bdiehr Error: Package: glusterfs-server-3.7.4-2.el7.x86_64 (glusterfs-epel)
20:56 bdiehr Requires: liburcu-cds.so.1()(64bit)
20:56 bdiehr unsure if these will be a problem
20:58 rwheeler joined #gluster
20:58 JoeJulian Do you have the epel repo installed?
20:58 JoeJulian Also, please stop posting half a screen of text into the channel. Use a pastebin if it's more than 3 lines.
21:00 bdiehr Sorry about that, will do
21:01 cholcombe joined #gluster
21:03 JoeJulian aaronott: Not sure if anything on this page can help, but I've been wanting to find the time to try some of it for the problem you might be hitting: http://rhelblog.redhat.com/2015/09/29/pushing-the-limits-of-kernel-networking/
21:04 aaronott Thanks JoeJulian, I'll look into these
21:05 bdiehr JoeJulian: http://pastebin.com/Lw7qLPza
21:05 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
21:05 mufa joined #gluster
21:11 JoeJulian Looks like, at the very least, rsyslog-mmjsonparse should be in the normal distro. I'm going to have to refer you to Amazon support since it's their distro that's missing those packages.
21:26 bdiehr Will do - thanks
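
For what it's worth, the Amazon Linux AMI usually ships an EPEL repo definition that is disabled by default; if that is the case on this box, enabling it may resolve the missing dependencies (a sketch, not verified against that AMI):

    yum-config-manager --enable epel
    yum install glusterfs-server
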
21:30 dlambrig_ joined #gluster
21:32 suliba joined #gluster
21:39 stickyboy joined #gluster
21:42 mufa I'm trying to set up HA NFS with nfs-ganesha on 2 servers running centos7 and I don't understand the virtual_ip the documentation refers to. Is that a floating IP, or what should I use?
21:57 dgbaley joined #gluster
22:23 Bhaskarakiran joined #gluster
22:31 CyrilPeponnet yo, I have a gluster node stuck at 2400% CPU and volumes are slow. I tried to restart glusterd, even killed glusterfsd, but each time I restart it's the same thing.
22:31 CyrilPeponnet how to find out what is going on
22:46 JoeJulian I'd probably start at the brick log, maybe the self-heal log(s)
22:46 JoeJulian This is where having aggregated logs (ie elk) is really valuable.
22:48 jobewan joined #gluster
23:01 marcoceppi joined #gluster
23:12 CyrilPeponnet the issue is that gluster vol heal myvol info is taking like forever
23:17 CyrilPeponnet bricks are not showing anything abnormal
23:24 marcoceppi joined #gluster
23:40 kr0w joined #gluster
23:40 kr0w Anyone here that can help me with an op-version mismatch?
23:41 CyrilPeponnet it usually means that a client is using another version of gluster and mounting your volume using gluster-fuse
23:41 kr0w Indeed
23:41 kr0w Proxmox is the client and it is trying to use fuse
23:42 kr0w It is a bit older. How can I get them to play nice?
23:42 CyrilPeponnet so make sure to align versions, and also community vs. EPEL (not the same thing)
23:42 kr0w The client is older than the server I mean.
23:42 CyrilPeponnet update the client :)
23:42 CyrilPeponnet actually if an old client is connected you cannot change settings until this client is gone...
23:42 kr0w Ah, so I need to find a debian based package that won't mess with proxmox base..
23:43 kr0w No client is connected, I just attempted and it failed with a message about the op-version being mismatched
23:43 CyrilPeponnet I had the same thing but with 1k+ clients... I had to find which one was the culprit.
23:43 kr0w That would not have been fun
23:43 CyrilPeponnet you don't have any client ?
23:44 kr0w I just barely created the volume
23:44 CyrilPeponnet how many nodes
23:45 kr0w Although I did connect with the server as a client. I unmounted so that one should be gone... right?
23:45 CyrilPeponnet are all node using the same version / distro
23:45 kr0w Let me check
23:45 kr0w gluster pool list only shows the 2 nodes which I just barely installed
23:46 kr0w They should have the same version.
23:46 kr0w Both servers are 3.7.4
23:47 kr0w Client that I was trying is 3.4.1. Is there any possibility of getting it to work without trying to upgrade the client
23:47 kr0w I don't want to try and update fuse..
23:49 CyrilPeponnet nope toooooo old
23:49 CyrilPeponnet ex with my 3.6 I can old 3.5 clients
23:49 CyrilPeponnet *can't
23:50 * kr0w sighs.
23:50 kr0w At least I only have 5-6 clients
23:52 CyrilPeponnet or you can mount using nfs
23:54 kr0w If I use nfs do I have failover in the case 1 of the nodes is lost?
23:54 kr0w Maybe it would be easier to downgrade my servers to 3.4?
23:58 CyrilPeponnet sort of
23:58 CyrilPeponnet you can use vip as well
23:58 CyrilPeponnet with something like keepalived
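
A minimal sketch of the "VIP with something like keepalived" idea, assuming two gluster servers and a spare address on their subnet (interface name and addresses are placeholders); the second node gets the same block with state BACKUP and a lower priority:

    # /etc/keepalived/keepalived.conf on the first node
    vrrp_instance gluster_vip {
        state MASTER
        interface eth0
        virtual_router_id 51
        priority 100
        advert_int 1
        virtual_ipaddress {
            192.168.1.100/24
        }
    }

Clients would then mount NFS against the VIP (e.g. mount -t nfs -o tcp,vers=3 192.168.1.100:/myvol /mnt/myvol) instead of a specific node, and keepalived moves the address if that node goes away.
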
