
IRC log for #gluster, 2014-10-13


All times shown according to UTC.

Time Nick Message
00:05 rshott joined #gluster
00:11 FrankToil joined #gluster
00:12 buhman joined #gluster
00:13 FrankToil G'Day folks. I've got 3 geographically distributed Ubuntu boxes (with ZFS backends, if that matters) that I'm potentially interested in using geo-replication with gluster. Do you know of any success stories for such a network?
00:14 msmith_ joined #gluster
00:21 buhman rtorrent regularly hangs when I use glusterfs as its download directory--it is odd though because it's in state R.
00:21 glusterbot buhman: directory's karma is now -1
00:21 buhman O.o
00:22 buhman oh
00:22 justinmburrous joined #gluster
00:24 buhman how might I go about figuring out why rtorrent/gluster does this?
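A reasonable first pass at buhman's question, sketched with generic paths (state R suggests the process is spinning rather than blocked in I/O, which strace can confirm):

    # what is the process actually doing while "hung"?
    strace -f -p "$(pidof rtorrent)"

    # the FUSE client log is named after the mount point, e.g. a volume
    # mounted on /mnt/gluster logs to mnt-gluster.log:
    tail -f /var/log/glusterfs/mnt-gluster.log

    # dump the volume's internal state (pending frames, locks) for inspection:
    gluster volume statedump <VOLNAME>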
00:52 msmith_ joined #gluster
00:53 rjoseph joined #gluster
00:58 diegows joined #gluster
01:01 plarsen joined #gluster
01:04 mojibake joined #gluster
01:25 haomaiwa_ joined #gluster
01:27 mojibake left #gluster
01:27 mojibake joined #gluster
01:27 haomaiw__ joined #gluster
01:29 frayz joined #gluster
01:34 justinmburrous joined #gluster
01:38 harish joined #gluster
01:54 msmith_ joined #gluster
02:10 jiffin joined #gluster
02:22 overclk joined #gluster
02:29 calisto joined #gluster
02:34 justinmburrous joined #gluster
02:38 bharata-rao joined #gluster
02:46 DV joined #gluster
02:49 glusterbot New news from resolvedglusterbugs: [Bug 1139921] Improve debuggability of double unwinds <https://bugzilla.redhat.com/show_bug.cgi?id=1139921>
02:54 firemanxbr joined #gluster
02:54 msmith_ joined #gluster
03:07 gildub joined #gluster
03:12 justinmburrous joined #gluster
03:18 harish joined #gluster
03:27 kshlm joined #gluster
03:43 doekia joined #gluster
03:45 justinmburrous joined #gluster
03:45 nbalachandran joined #gluster
03:55 msmith_ joined #gluster
03:59 kdhananjay joined #gluster
04:00 atinmu joined #gluster
04:07 shubhendu joined #gluster
04:18 rafi1 joined #gluster
04:18 Rafi_kc joined #gluster
04:19 jiffin joined #gluster
04:19 rjoseph joined #gluster
04:20 jiffin1 joined #gluster
04:24 ppai joined #gluster
04:31 anoopcs joined #gluster
04:36 ndarshan joined #gluster
04:43 justinmburrous joined #gluster
04:46 kanagaraj joined #gluster
04:51 ramteid joined #gluster
04:51 nishanth joined #gluster
04:52 spandit joined #gluster
04:56 msmith_ joined #gluster
05:02 bala joined #gluster
05:04 jbrooks joined #gluster
05:06 alturic joined #gluster
05:08 sputnik13 joined #gluster
05:12 jbrooks joined #gluster
05:13 prasanth_ joined #gluster
05:14 justinmb_ joined #gluster
05:15 RameshN joined #gluster
05:43 atalur joined #gluster
05:44 deepakcs joined #gluster
05:55 saurabh joined #gluster
05:57 msmith_ joined #gluster
06:01 anands joined #gluster
06:05 nbalachandran joined #gluster
06:18 raghu joined #gluster
06:18 kumar joined #gluster
06:22 soumya joined #gluster
06:23 msvbhat joined #gluster
06:27 nshaikh joined #gluster
06:32 justinmburrous joined #gluster
06:35 anands joined #gluster
06:37 Fen2 joined #gluster
06:41 ctria joined #gluster
06:46 RaSTar joined #gluster
06:47 nbalachandran joined #gluster
06:48 lalatenduM joined #gluster
06:51 overclk joined #gluster
06:52 Fen2 Good morning :)
06:55 hybrid512 joined #gluster
06:56 Philambdo joined #gluster
06:58 msmith_ joined #gluster
07:01 Fen2 This morning I checked my VMs and I found a strange file => "filename".save !? Do you know what it is?
07:03 R0ok_ Fen2: greetings
07:04 R0ok_ Fen2: please provide us with more info about the strange file
07:06 Fen2 -rw-r--r--  2 root root 100M sept. 16  2010 fichier-distribute-1
07:06 Fen2 -rw-r--r--  2 root root 100M oct.  10 17:31 fichier-distribute-1.save
07:06 glusterbot Fen2: -rw-r--r's karma is now -1
07:06 glusterbot Fen2: -rw-r--r's karma is now -2
07:06 Fen2 glusterbot: nope :p
07:09 hybrid512 joined #gluster
07:12 Fen2 i found this "fichier-distribute-1.save"
07:12 Slasheri joined #gluster
07:12 Slasheri joined #gluster
07:13 Fen2 and I created fichier-distribute-1 on the 10th of October, not in September...
07:13 Fen2 lol september 2010... wtf !?
07:15 RaSTar joined #gluster
07:20 R0ok_ Fen2: ummmh... do you think that maybe an application accessing the file (fichier-distribute-1) did that?
07:25 Fen2 I've asked the person who created my VM; maybe over the weekend they were rebooted or backed up. Maybe that's why I see 16 September 2010.
07:26 Slydder joined #gluster
07:27 Fen2 Client POV :
07:27 Fen2 -rw-r--r-- 1 root root 100M sept. 16  2010 fichier-distribute-1
07:27 Fen2 -rw-r--r-- 1 root root 100M oct.  10 17:31 fichier-distribute-1.save
07:27 Fen2 -rw-r--r-- 1 root root 100M oct.   9 14:14 fichier-distribute-2
07:27 Fen2 -rw-r--r-- 1 root root 100M oct.   9 14:14 fichier-distribute-3
07:27 glusterbot Fen2: -rw-r--r's karma is now -3
07:27 glusterbot Fen2: -rw-r--r's karma is now -4
07:27 glusterbot Fen2: -rw-r--r's karma is now -5
07:27 glusterbot Fen2: -rw-r--r's karma is now -6
07:28 Fen2 Server1 (brick 1) :
07:28 Fen2 -rw-r--r--  2 root root 100M sept. 16  2010 fichier-distribute-1
07:28 Fen2 -rw-r--r--  2 root root 100M oct.  10 17:31 fichier-distribute-1.save
07:28 Fen2 -rw-r--r--  2 root root 100M oct.   9 14:14 fichier-distribute-2
07:28 glusterbot Fen2: -rw-r--r's karma is now -7
07:28 glusterbot Fen2: -rw-r--r's karma is now -8
07:28 glusterbot Fen2: -rw-r--r's karma is now -9
07:29 Fen2 Server 2 (brick 2) :
07:29 Fen2 ---------T  2 root root    0 oct.  11 06:25 fichier-distribute-1.save
07:29 Fen2 -rw-r--r--  2 root root 100M oct.   9 14:14 fichier-distribute-3
07:29 glusterbot Fen2: -------'s karma is now -1
07:29 glusterbot Fen2: -rw-r--r's karma is now -10
07:30 Slydder Fen2: seriously?
07:30 Fen2 Slydder: what ?
07:31 Slydder why are you flooding the channel with that stuff instead of using pastebin and posting a link?
07:31 Fen2 sry
07:32 Slydder np. just wondering is all.
07:32 Fen2 Slydder: ok i take note
07:36 Fen2 Ok, i found it nvm :p it's about nano
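Two things in Fen2's listings are worth decoding. The .save file is consistent with nano's emergency-save behaviour (written when the editor dies with a modified buffer open), which matches the "it's about nano" conclusion. The zero-byte mode ---------T entry on brick 2 is normal: it is a DHT link file, a pointer left on the brick where the new filename hashes while the data lives on the other brick. A sketch of how to confirm that (the brick path here is hypothetical):

    # on the brick server; a DHT link file carries an xattr naming
    # the subvolume that holds the real data:
    getfattr -d -m . -e hex /path/to/brick/fichier-distribute-1.save
    # expect trusted.glusterfs.dht.linkto in the output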
07:37 anands1 joined #gluster
07:39 Fen2 left #gluster
07:39 Fen2 joined #gluster
07:48 Gorian joined #gluster
07:49 Gorian so, anyone know how to set up the permissions on a GlusterFS-based NFS server to use as storage for a hosted engine?
07:49 Gorian for oVirt
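Part of the answer to Gorian's permissions question is that oVirt's vdsm runs as uid 36, gid 36 (vdsm:kvm), and GlusterFS has volume options to present the volume root with that ownership. A minimal sketch, assuming a volume named engine-vol:

    gluster volume set engine-vol storage.owner-uid 36
    gluster volume set engine-vol storage.owner-gid 36
    # gluster's built-in NFS server speaks NFSv3 only; confirm ownership:
    mount -t nfs -o vers=3 server:/engine-vol /mnt/engine && ls -ln /mnt/engine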
07:53 LebedevRI joined #gluster
07:58 msmith_ joined #gluster
08:11 Slydder ndevos: you there today?
08:14 aravindavk joined #gluster
08:16 Slydder can anyone here tell me how quickly replication takes place in a geo setup?
08:22 social joined #gluster
08:23 jiffin joined #gluster
08:23 TvL2386 joined #gluster
08:39 prasanth_ joined #gluster
08:42 deepakcs joined #gluster
08:44 mat1010 joined #gluster
08:45 mat1010 hi @ll does it make sense to use glusterfs with only 2 nodes - instead of a rsync / lsync / drbd solution?
08:46 gildub joined #gluster
08:47 Slashman joined #gluster
08:50 Fen2 mat1010: yeah, it makes sense because there are some great features in glusterfs :)
08:51 nshaikh joined #gluster
08:51 mat1010 thank you :-) - that's what I wanted to hear :-D
08:51 Fen2 and you can start with 2 nodes and easily add capacity and/or performance later
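For scale, the whole two-node replicated setup being discussed is only a few commands; a sketch with hypothetical host and brick names:

    # on node1, once glusterd is running on both nodes:
    gluster peer probe node2
    gluster volume create mirrored replica 2 \
        node1:/export/brick1 node2:/export/brick1
    gluster volume start mirrored

    # on a client:
    mount -t glusterfs node1:/mirrored /mnt/mirrored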
08:58 kanagaraj joined #gluster
08:59 msmith_ joined #gluster
09:00 Slydder mat1010: there are a lot of points to take into consideration. however, my personal experience says look first to gluster as your solution and then elsewhere if gluster doesn't fit. I have used drbd, ceph, lsync w/ csync, rsync and moose (tested only) and have found gluster to be more of an all-around fit for just about any situation, with great performance (when your infrastructure can support it), as well as the possibility to trad
09:00 haomaiwa_ joined #gluster
09:01 rjoseph joined #gluster
09:04 mat1010 I'm thinking about using it for simple data synchronisation between two storage nodes. The data is almost static and only gets updated a few times a day through manual deployments. I've always used lsync for such jobs, but lsync was always bad with a huge number of files.
09:04 Slydder the only 2 things I dislike about gluster: 1. not being able to assign a master server when using normal replication; such an option would alleviate inter-node comms, letting you better control the traffic and resources you have. 2. documentation and performance information for certain circumstances is rare.
09:04 vimal joined #gluster
09:05 Slydder mat1010: then do a simple geo-replication install. it will ensure that your network doesn't get flooded with updates, and it's more forgiving than standard replication. I use it sometimes inside the same subnet when I only have 100 MB lines; less stress on the infrastructure.
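Geo-replication is asynchronous and changelog-driven, so there is no fixed replication delay to quote; the status command is how you watch it catch up. A sketch of the setup Slydder is describing, in 3.5-era syntax with hypothetical volume and host names (create push-pem assumes passwordless root SSH to the slave):

    gluster volume geo-replication mastervol slavehost::slavevol create push-pem
    gluster volume geo-replication mastervol slavehost::slavevol start
    gluster volume geo-replication mastervol slavehost::slavevol status detail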
09:06 edward1 joined #gluster
09:06 Slydder replication time is the only thing I'm not too sure about. have yet to find any info on it.
09:07 Fen2 For the performance : https://s3.amazonaws.com/aws001/guided_trek/Performance_in_a_Gluster_Systemv6F.pdf
09:07 Fen2 For the materiel : https://access.redhat.com/articles/66206
09:07 glusterbot Title: Red Hat Storage Server 3.0 Compatible Physical, Virtual Server and Client OS Platforms - Red Hat Customer Portal (at access.redhat.com)
09:07 Fen2 For the administration : https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3/html/Administration_Guide/index.html
09:07 glusterbot Title: Administration Guide (at access.redhat.com)
09:07 Slydder Fen2: that is for standard replication not geo
09:08 Fen2 Slydder: yeah I know, but geo depends on a lot of things so I guess there is no doc
09:09 Slydder Fen2: on top of that, the links are to old versions of gluster, so not a lot of help. a lot of the options, features and performance have changed since those releases.
09:09 Fen2 which one ?
09:09 bharata_ joined #gluster
09:10 Slydder 3.0 and 3.1 are way too old. even the 3.2 docs direct from gluster are no longer valid. you have to read the 3.5 docs in git, and still geo-rep is not very well covered.
09:11 Fen2 Slydder: yeah, but it's the most complete guide I have found; the others are not very complete
09:12 Slydder my point exactly
09:13 Slydder just like I found out today that NFS and shd will not start unless a minimum of 2 nodes are up in a replication. not good in a 2 node setup where 1 node dies and the mount point uses nfs instead of fuse.
09:16 Fen2 Slydder: the guide was revised on 25 September 2014, so it's quite recent
09:42 rjoseph joined #gluster
09:43 haomaiwa_ joined #gluster
09:51 dusmant joined #gluster
09:52 ACiDGRiM joined #gluster
09:53 anands joined #gluster
10:00 msmith_ joined #gluster
10:01 haomai___ joined #gluster
10:03 haomaiwang joined #gluster
10:17 ctria joined #gluster
10:18 nshaikh joined #gluster
10:18 haomai___ joined #gluster
10:29 karnan joined #gluster
10:31 haomaiwa_ joined #gluster
10:34 haomaiw__ joined #gluster
10:43 FrankToil joined #gluster
10:49 FrankToil joined #gluster
10:56 giannello joined #gluster
11:01 msmith_ joined #gluster
11:12 tiglog joined #gluster
11:12 virusuy joined #gluster
11:14 nshaikh joined #gluster
11:14 RameshN joined #gluster
11:28 dusmant joined #gluster
11:28 harish_ joined #gluster
11:32 atinmu joined #gluster
11:38 diegows joined #gluster
11:40 rgustafs joined #gluster
11:51 shubhendu joined #gluster
11:57 calum_ joined #gluster
12:01 Fen1 joined #gluster
12:02 msmith_ joined #gluster
12:02 ira joined #gluster
12:13 rolfb joined #gluster
12:16 rjoseph joined #gluster
12:19 mojibake joined #gluster
12:23 juhaj joined #gluster
12:24 JamesG joined #gluster
12:24 JamesG joined #gluster
12:24 _NiC joined #gluster
12:24 mrEriksson joined #gluster
12:24 Diddi joined #gluster
12:25 guntha_ joined #gluster
12:25 sickness joined #gluster
12:27 DJClean joined #gluster
12:28 mat1010 joined #gluster
12:34 shubhendu joined #gluster
12:40 nshaikh joined #gluster
12:44 calisto joined #gluster
12:52 firemanxbr joined #gluster
12:52 plarsen joined #gluster
12:54 msmith_ joined #gluster
12:54 ctria joined #gluster
13:03 ekuric joined #gluster
13:05 5EXAA8QFL joined #gluster
13:09 deepakcs joined #gluster
13:26 sputnik13 joined #gluster
13:27 ira joined #gluster
13:36 Bardack joined #gluster
13:39 JustinClift Slydder: With the missing info for geo-rep, would you be ok to ask on the gluster-devel mailing list for someone to fix it?  Something along the lines of "this is my situation XYZ, I'm expecting the geo-rep docs to cover that, but it doesn't seem to have the needed info about ABC.  Can someone please fix that?"
13:40 JustinClift Slydder: Missing info is generally because the developers for a feature are so into the tech, they don't really have a good view of what's needed by someone not as into it.
13:41 JustinClift So, it can take a bit of guidance/encouragement from real users + trying-to-be-users for them to get the docs in solid shape. ;)
13:43 nshaikh joined #gluster
13:48 julim joined #gluster
13:53 msmith_ joined #gluster
13:54 Gib_adm Gluster Storage Platform still alive?
14:01 JustinClift Gib_adm: Don't think so.
14:01 JustinClift Gib_adm: I remember asking something similar
14:02 Gib_adm rhes?
14:02 JustinClift ages ago when first getting into Gluster. I think it was a specialised packaged version or something, around GlusterFS 3.1 or 3.2 days
14:03 JustinClift rhes?  Google is showing old references to that
14:04 Gib_adm Ok. I need glusterfs + web management console. * rhes - Red Hat Enterprise Storage
14:04 JustinClift Ugh. I'm still sick. I'd better get off the computer. :(
14:04 JustinClift Gib_adm: oVirt might be what you're after
14:04 JustinClift Ask on gluster-users mailing list. :)
14:04 coredump joined #gluster
14:05 * JustinClift gets off computer
14:13 firemanxbr joined #gluster
14:13 jbautista- joined #gluster
14:21 calisto1 joined #gluster
14:22 plarsen joined #gluster
14:23 jobewan joined #gluster
14:25 lmickh joined #gluster
14:26 kshlm joined #gluster
14:27 bigred15 joined #gluster
14:27 bigred15 is there any auto storage tiering in gluster?
14:46 _Bryan_ joined #gluster
14:58 bennyturns joined #gluster
15:14 abyss^^ joined #gluster
15:17 julim joined #gluster
15:19 mojibake Can someone help me diagnose what happened? http://ur1.ca/id2ha I was running ApacheBench, not even hitting it hard yet, and Gluster dropped out on me.
15:19 glusterbot Title: #141461 Fedora Project Pastebin (at ur1.ca)
15:20 plarsen joined #gluster
15:22 mojibake I have not attempted to restart anything other than Apache, which says that DocumentRoot must be a directory. (Because Gluster not available.) gluster volume status shows online.
15:24 R0ok_ mojibake: what if you remount the volume on the client side?
15:27 mojibake I could try... but I'm hoping to get a learning experience out of what went wrong.
15:28 mojibake Luckily this is not production yet... but if I can't even do some testing on it, it may never become production.
15:29 RioS2 mojibake, have you by chance evaluated ceph?
15:30 mojibake Not yet. Considered it, did a brief look at CEPH.
15:30 RioS2 was wondering what you thought of it...
15:31 RioS2 you can do either block or object store with the rados gateway
15:31 R0ok_ mojibake: are you using socket as the transport type for that volume? can you provide us with the volume info: gluster volume info <<VOLNAME>>
15:31 mojibake Volume Name: web-content
15:31 mojibake Type: Replicate
15:31 mojibake Volume ID: 17e826c2-458d-43ea-b596-4c8680739e06
15:31 mojibake Status: Started
15:31 mojibake Number of Bricks: 1 x 2 = 2
15:31 mojibake Transport-type: tcp
15:31 mojibake Bricks:
15:31 mojibake Brick1: 172.31.19.50:/export/gfsvol1/brick1/brick
15:31 mojibake Brick2: 172.31.25.186:/export/gfsvol1/brick2/brick
15:33 mojibake Will attempt to remount and see what happens.
15:37 mojibake At least according to "mount", it was mounted already. Did a umount and remounted with mount -a. Looks like it remounted...
15:39 mojibake Any suggestions for turning up some debugging if it happens again? Again busy trying to do some basic apache benching. only did "ab -n500 -c 10 hostip" and kapluey.
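For the debugging question, log verbosity is a per-volume option and the FUSE client log lives under /var/log/glusterfs, named after the mount point. A sketch using the volume name from this conversation (the mount point is hypothetical):

    gluster volume set web-content diagnostics.client-log-level DEBUG
    # a volume mounted on /var/www logs to var-www.log:
    tail -f /var/log/glusterfs/var-www.log
    # DEBUG is very chatty; drop back when done:
    gluster volume set web-content diagnostics.client-log-level INFO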
15:40 ira joined #gluster
15:43 FrankToil joined #gluster
15:44 plarsen joined #gluster
15:44 kshlm joined #gluster
15:46 RameshN joined #gluster
15:50 mojibake Can anyone comment about error in fpaste http://ur1.ca/id2ha regarding Server lk version-numbers not the same?
15:50 glusterbot Title: #141461 Fedora Project Pastebin (at ur1.ca)
15:50 mojibake Error? Warning? Fatal?
15:59 nbalachandran joined #gluster
15:59 kumar joined #gluster
16:04 Slydder joined #gluster
16:04 Slydder hey all
16:07 alturic joined #gluster
16:08 julim joined #gluster
16:09 alturic hey guys, I'm trying to connect to a gluster volume with   mount -t glusterfs PRIVATELOCALIP:/volume /mount/point   and on the client it fails. Now, in iptables I have the private local IP of the client allowed via ports, etc., but iptables on the SERVER is blocking connections from the client's PUBLIC IP. Is there any reason the client is trying to connect via its public IP versus local?
16:10 mojibake Sounds like a routing table issue.
16:11 alturic Ahh you know what... does the gluster server try DNS resolving the hostname regardless of the client connecting via it's private ip?
16:12 JoeJulian @mount server
16:12 glusterbot JoeJulian: The server specified is only used to retrieve the client volume definition. Once connected, the client connects to all the servers in the volume. See also @rrdns
16:12 JoeJulian So the client connects to whatever servers you have specified in the volume definition.
16:13 mojibake ohh, good to know. So when creating volumes, it's advisable to use the private IPs, if that is the behavior you want?
16:13 JoeJulian No
16:13 JoeJulian It's advisable to use hostnames and split-horizon name resolution.
16:14 mojibake OK.
16:14 JoeJulian Otherwise when you decide to renumber your network down the line, you'll have all sorts of hassles changing addresses, whereas hostnames can just resolve differently.
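A sketch of that split-horizon idea (hostnames are hypothetical; the addresses are the ones from mojibake's volume): define the volume with names, and let each network resolve those names to whichever addresses it should use.

    # /etc/hosts (or internal DNS) on hosts that should use the private network:
    172.31.19.50   gluster1.internal
    172.31.25.186  gluster2.internal

    # the volume is then defined with names rather than addresses:
    gluster volume create web-content replica 2 \
        gluster1.internal:/export/gfsvol1/brick1/brick \
        gluster2.internal:/export/gfsvol1/brick2/brick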
16:15 alturic mojibake: Yea, hopefully people aren't using the public side of their machine to connect with gluster. Didn't know Gluster would always base it on hostnames?
16:16 ctria joined #gluster
16:17 mojibake alturic: I am still a newbie which is why I hang around this IRC to pick up nuggets of knowledge from people like JoeJulian
16:17 mojibake JoeJulian++
16:17 glusterbot mojibake: JoeJulian's karma is now 13
16:17 alturic mojibake - Oh, I've only dabbled with NFS before. Just started playing with gluster last night. lol
16:20 sputnik13 joined #gluster
16:21 mojibake RioS2: R0ok_:  Back to my problem... the following found in messages:
16:21 mojibake Oct 13 15:00:33 ip-172-31-27-153 kernel: [ 6690.228302] httpd invoked oom-killer: gfp_mask=0x280da, order=0, oom_score_adj=0
16:21 mojibake Skip lines to good stuff
16:21 mojibake Oct 13 15:00:33 ip-172-31-27-153 kernel: [ 6690.873841] Out of memory: Kill process 1509 (glusterfs) score 27 or sacrifice child
16:21 mojibake Oct 13 15:00:33 ip-172-31-27-153 kernel: [ 6690.877969] Killed process 1509 (glusterfs) total-vm:302076kB, anon-rss:28132kB, file-rss:28kB
16:21 mojibake mystery solved.
16:22 JoeJulian Yeah, that would do it.
16:22 mojibake Hmm, any advice from preventing that again other tweaking the httpd conf maxclients?
16:22 mojibake cgroups?
16:23 semiosis add ram?
16:23 JoeJulian That might be a memory leak. I know there's at least one that hasn't been found yet.
16:23 dtrainor joined #gluster
16:24 mojibake Well of course.. But would like to think there is some method to tell the system that gluster client is important..
16:24 JoeJulian Monitor the memory usage and remount if it gets too low.
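One way to tell the kernel the client is important, as mojibake asked, is to lower the glusterfs client's OOM score; a sketch (the pgrep pattern is illustrative):

    # -1000 exempts a process from the OOM killer entirely; a smaller
    # negative value just makes it a much less attractive victim:
    pid=$(pgrep -of 'glusterfs.*web-content')
    echo -500 > /proc/$pid/oom_score_adj

That only shifts the pressure, though; the underlying fix is still capping Apache's MaxClients or adding RAM, since the OOM killer will simply pick another victim.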
16:24 semiosis JoeJulian: i think i may have found a reproducible error with my java du benchmarking
16:24 semiosis maybe memory leak related
16:25 JoeJulian I know pranithk has been looking for it, but hasn't been able to find it yet. He even opened a discussion on -devel about changing the allocation process to better track what's locking memory from the pool.
16:26 mojibake Well, considering I was load testing, and I know what the actual problem was now, I feel better. It's an EC2 instance, so I'll see about some more RAM while testing so this isn't an issue again; and now I know what to look out for if the client drops out again.
16:26 firemanxbr joined #gluster
16:27 jbrooks joined #gluster
16:35 thermo44 joined #gluster
16:48 Slydder hey all.
16:49 Slydder I brought a new 1 brick x 2 node replication online today and started to fill it from a local directory using rsync. about halfway through, the server hangs. no syslog entries or anything to go on. any ideas? this is gfs 3.5.2 on debian wheezy.
16:50 Slydder am a little worried about starting up the rsync process again without an idea of what could be happening.
16:55 stickyboy Slydder: Attach to the rsync process with strace... does it show it doing anything?
16:57 rolfb joined #gluster
16:59 semiosis Slydder: using fuse or nfs client?
17:04 chirino joined #gluster
17:08 ctria joined #gluster
17:23 Slydder stickyboy: it transfers as it should; it's just that after a while the server locked up. and I am disinclined to think it was rsync that locked up the server. I have never had that in 25+ years in IT.
17:23 Slydder semiosis: nfs mounted
17:24 semiosis use fuse
17:24 semiosis localhost nfs mounts can deadlock the kernel under load
17:25 Slydder ok. nice to know. i take it this is a known problem. is there a fix in the works, or is it one of those things we just have to live with?
17:25 semiosis it's a known problem with NFS
17:25 semiosis not even particular to gluster-nfs afaik
17:26 semiosis http://lwn.net/Articles/595652/
17:26 glusterbot Title: Loopback NFS: theory and practice [LWN.net] (at lwn.net)
17:28 Slydder hmmm. was kinda hoping to avoid fuse
17:28 semiosis @learn localhost nfs as http://lwn.net/Articles/595652/
17:28 glusterbot semiosis: The operation succeeded.
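The practical difference, in fstab form (volume and mount point hypothetical):

    # gluster-NFS mount of a local volume: can deadlock the kernel under
    # memory pressure, per the LWN article above:
    localhost:/myvol   /mnt/myvol   nfs        defaults,vers=3,nolock   0 0

    # FUSE client mount of the same volume, safe on the same host:
    localhost:/myvol   /mnt/myvol   glusterfs  defaults,_netdev         0 0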
17:31 ACiDGRiM joined #gluster
17:38 bene joined #gluster
17:38 mojibake semiosis: Are you teaching glusterbot some keywords or something?
17:42 semiosis mojibake: yes ,,(localhost nfs)
17:42 glusterbot mojibake: http://lwn.net/Articles/595652/
17:43 mojibake Interesting..
17:47 Slydder semiosis: ok. so patches have been submitted this year and the situation is being addressed. leaves me with a positive feeling regarding a favorable outcome to the problem. thanks for the info.
17:48 cfeller joined #gluster
17:48 semiosis yw
17:49 Slydder It would also seem to be prudent on the part of the GFS project to warn about using NFS as a mount option when expecting a heavy load as long as server and client are on the same host.
17:49 semiosis where would you like to see that warning?
17:50 Slydder probably in the "How to mount" section.
17:50 semiosis link?
17:53 Slydder This bug caught me on a production system today. admittedly this is not a bug in gluster, still it does have a major impact in certain situations. I was thinking somewhere in here: https://github.com/gluster/glusterfs/blob/master/doc/admin-guide/en-US/markdown/admin_settingup_clients.md
17:53 glusterbot Title: glusterfs/admin_settingup_clients.md at master · gluster/glusterfs · GitHub (at github.com)
17:54 Slydder specifically here: https://github.com/gluster/glusterfs/blob/master/doc/admin-guide/en-US/markdown/admin_settingup_clients.md#nfs
17:54 glusterbot Title: glusterfs/admin_settingup_clients.md at master · gluster/glusterfs · GitHub (at github.com)
17:55 JoeJulian Slydder: Could you please file a bug report to that effect.
17:55 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
17:57 vimal joined #gluster
17:58 rolfb joined #gluster
17:59 MacWinner joined #gluster
18:09 lalatenduM joined #gluster
18:11 Slydder JoeJulian: bug submitted.
18:12 Slydder that's it for me tonight. see you all later.
18:12 Slydder left #gluster
18:14 glusterbot New news from newglusterbugs: [Bug 1152265] Documentation Update to Warn about localhost NFS mounts <https://bugzilla.redhat.com/show_bug.cgi?id=1152265>
18:27 Gorian hey, anyone here?
18:28 semiosis hi
18:28 glusterbot semiosis: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
18:28 semiosis Gorian: ^^
18:29 Gorian lol, tried that already. Now it's a new day and I still haven't got a response ;)
18:30 Gorian so, lots and lots of IRC channels have that policy, but they very rarely live up to the expectation that someone will see your question a day later and respond. Thus it's better to make sure people are actually online and paying attention before wasting my time ;)
18:31 theron joined #gluster
18:31 Gorian anyway, what does oVirt install on gluster hosts to manage them? I have a cluster of 2 gluster servers, and tried to add it to oVirt, and it just says "Install Failed"... but I have no clue what it was actually trying to install in order to fix it
18:31 Gorian and, would it be an issue with the host being Ubuntu instead of a RHEL based OS, or the fact that it's 32-bit?
18:36 semiosis Gorian: better to repeat your question than ask if anyone is around
18:36 Gorian then I sound like a broken record ;)
18:36 semiosis thats fine
18:37 Gorian man, if you want one of those, you can go buy it :D
18:37 Gorian anyway, if you see above, I asked my question
18:38 semiosis have you tried asking ovirt people?
18:38 rolfb joined #gluster
18:38 rotbeard joined #gluster
18:39 Gorian yup. Channel is dead right now
18:39 Gorian they helped me last night when no one was on here :P
18:40 rolfb_ joined #gluster
18:43 dtrainor joined #gluster
18:51 zerick joined #gluster
18:56 frayz joined #gluster
19:00 dtrainor joined #gluster
19:06 JoeJulian Gorian: This just in from the mailing list. Sounds like it might be germane: I didn't have the ovirt repositories installed on the target Gluster machines. Once I ran yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release34.rpm , I was successfully able to add the bricks to oVirt.
19:07 Gorian so, it's an Ubuntu issue then
19:08 Gorian yeah, finally got a reply in the oVirt channel and they basically said the same thing
19:08 Gorian so, I'll try to move my cluster to CentOS 6 then
19:09 aulait joined #gluster
19:10 theron joined #gluster
19:17 semiosis so the answer to "what does oVirt install on gluster hosts to manage them?"... is oVirt?
19:17 radoslav joined #gluster
19:19 Gorian not necessarily? Just CentOS specific packages? lol
19:19 Gorian or maybe? I have no clue still. lol
19:21 semiosis does ovirt work on ubuntu?
19:21 Gorian kinda, seems to be the answer. http://www.ovirt.org/Ovirt_build_on_debian/ubuntu
19:21 glusterbot Title: Ovirt build on debian/ubuntu (at www.ovirt.org)
19:22 Gorian but, there is also a big difference between having to install guest tools and the actual management/hypervisor software....
19:22 Gorian thus my question
19:36 julim joined #gluster
19:36 theron joined #gluster
20:05 Telsin it's a plug-in for the vdsm hypervisor manager; vdsm-gluster is the package name on rhel
20:05 Telsin basically talks "gluster vol" commands, as far as I can tell
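Putting that together with JoeJulian's mailing-list quote above, the moving parts on a RHEL/CentOS gluster host come down to roughly this (the release RPM is the one quoted earlier; the package names are from this conversation):

    yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release34.rpm
    yum install vdsm vdsm-gluster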
20:23 ctria joined #gluster
20:49 brettnem joined #gluster
20:50 brettnem hey all
20:50 brettnem I’m trying a glusterd install on centos 7, which is kinda new to me
20:50 brettnem starting the daemon seems to work, but it says the address is already in use and peer probing isn’t working
20:51 dtrainor joined #gluster
20:52 brettnem nevermind.. selinux.. (doh)
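For anyone hitting the same wall, a sketch of confirming SELinux is the culprit before loosening anything (standard RHEL/CentOS tooling):

    getenforce                              # Enforcing?
    ausearch -m avc -ts recent              # recent denials; glusterd should appear
    setenforce 0                            # permissive, as a temporary test only
    # if glusterd behaves in permissive mode, build a local policy module
    # instead of leaving SELinux off:
    ausearch -m avc -ts recent | audit2allow -M glusterd_local
    semodule -i glusterd_local.pp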
20:53 JoeJulian Please file a bug for that. selinux should not prevent that.
20:53 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
20:54 lmickh joined #gluster
20:55 dtrainor_ joined #gluster
20:57 dtrainor joined #gluster
21:10 theron_ joined #gluster
21:24 marcoceppi joined #gluster
21:33 longshot902 joined #gluster
21:38 coredump joined #gluster
21:42 B21956 joined #gluster
21:56 longshot902_ joined #gluster
21:57 dtrainor joined #gluster
22:16 msmith_ joined #gluster
22:49 davemc Just a reminder: GlusterFS bug prioritization meeting Tuesday 14-Oct-2014, 12:00 UTC. More information available at : http://blog.gluster.org/2014/10/whats-that-bug-to-you-glusterfs-bug-priority-meeting/
23:01 dtrainor joined #gluster
23:02 B21956 left #gluster
23:03 gildub joined #gluster
23:10 B21956 joined #gluster
23:11 B21956 joined #gluster
23:12 ira joined #gluster
23:17 verdurin joined #gluster
23:29 Diddi joined #gluster
23:29 Gib_adm joined #gluster
23:40 rjoseph joined #gluster
