
IRC log for #gluster, 2014-09-16


All times shown according to UTC.

Time Nick Message
00:10 bala joined #gluster
00:23 gildub joined #gluster
00:46 tdasilva joined #gluster
00:54 PeterA joined #gluster
01:09 sputnik13 joined #gluster
01:16 tiglog joined #gluster
01:27 Freman totally just crashed gluster :D
01:27 Freman not suitable for 5000+ file updates per minute
01:45 JoeJulian Freman: Not an entirely accurate statement.
01:45 JoeJulian But that's for an engineer to build for, not an admin to try to work around.
02:17 haomaiwa_ joined #gluster
02:31 nishanth joined #gluster
02:33 haomai___ joined #gluster
02:34 haomai___ joined #gluster
02:34 haoma____ joined #gluster
02:40 vu joined #gluster
02:41 vu joined #gluster
03:07 XpineX_ joined #gluster
03:13 anoopcs joined #gluster
03:21 anoopcs joined #gluster
03:39 haomaiwa_ joined #gluster
03:39 kshlm joined #gluster
03:41 haomaiwang joined #gluster
03:50 bharata-rao joined #gluster
03:57 shubhendu joined #gluster
04:03 itisravi joined #gluster
04:08 hagarth joined #gluster
04:09 haomaiwa_ joined #gluster
04:10 ndarshan joined #gluster
04:10 spandit joined #gluster
04:10 kanagaraj joined #gluster
04:14 nbalachandran joined #gluster
04:36 rafi1 joined #gluster
04:36 Rafi_kc joined #gluster
04:38 anoopcs joined #gluster
04:39 nbalachandran joined #gluster
04:41 aravindavk joined #gluster
04:42 glusterbot New news from resolvedglusterbugs: [Bug 1125134] Not able to start glusterd <https://bugzilla.redhat.com/show_bug.cgi?id=1125134>
04:49 ramteid joined #gluster
04:58 atinmu joined #gluster
05:00 sputnik13 joined #gluster
05:01 kumar joined #gluster
05:07 kdhananjay joined #gluster
05:10 srj007 joined #gluster
05:18 side_con1rol joined #gluster
05:30 dusmant joined #gluster
05:32 jiffin joined #gluster
05:34 glusterbot New news from newglusterbugs: [Bug 1133073] High memory usage by glusterfs processes <https://bugzilla.redhat.com/show_bug.cgi?id=1133073> || [Bug 1142052] Very high memory usage during rebalance <https://bugzilla.redhat.com/show_bug.cgi?id=1142052>
05:35 ppai joined #gluster
05:36 side_control joined #gluster
05:43 nbalachandran joined #gluster
05:46 R0ok_ joined #gluster
05:48 shubhendu joined #gluster
05:50 ndarshan joined #gluster
05:51 nishanth joined #gluster
05:51 prasanth_ joined #gluster
05:52 hagarth joined #gluster
05:53 karnan joined #gluster
05:54 deepakcs joined #gluster
05:56 RameshN joined #gluster
05:57 RameshN_ joined #gluster
05:59 jtux joined #gluster
06:03 soumya_ joined #gluster
06:08 RaSTar joined #gluster
06:09 ndarshan joined #gluster
06:09 nishanth joined #gluster
06:10 shubhendu joined #gluster
06:11 atalur joined #gluster
06:11 harish_ joined #gluster
06:13 lalatenduM joined #gluster
06:30 meghanam joined #gluster
06:30 meghanam_ joined #gluster
06:30 nshaikh joined #gluster
06:40 bala joined #gluster
06:50 UnwashedMeme joined #gluster
06:54 raghu joined #gluster
07:01 MickaTri joined #gluster
07:07 overclk joined #gluster
07:09 overclk joined #gluster
07:13 kshlm joined #gluster
07:22 atinmu joined #gluster
07:23 hagarth joined #gluster
07:27 sputnik13 joined #gluster
07:35 kaushal_ joined #gluster
07:40 Zordrak Morning. I'm looking for some help working out why gluster's NFS server won't start up on CentOS7
07:41 Zordrak Having configured gluster and had it working to serve NFS to oVirt; after a reboot the NFS service is the only part that fails
07:41 Zordrak It appears to complain about failing to register with portmap: E [rpcsvc.c:1314:rpcsvc_program_register_portmap] 0-rpc-service: Could not register with portmap
07:50 delhage joined #gluster
07:57 delhage joined #gluster
07:59 atinmu joined #gluster
08:01 ekuric joined #gluster
08:03 liquidat joined #gluster
08:22 hagarth joined #gluster
08:46 Pupeno joined #gluster
08:50 shubhendu joined #gluster
08:53 nishanth joined #gluster
08:54 bala joined #gluster
08:56 richvdh joined #gluster
09:10 haomaiwa_ joined #gluster
09:10 vikumar joined #gluster
09:31 rgustafs joined #gluster
09:36 KenShiro|MUPF joined #gluster
09:36 Zordrak The problem is that on CentOS7, the default nfs is registering with rpcbind on boot/startup even though it's not being started, and this prevents gluster from registering with rpcbind.
09:36 Zordrak It works if you delete the NFS registration and then restart glusterd
09:36 Zordrak Any thoughts on how I can prevent the initial default registration?
09:38 ndevos Zordrak: something like this maybe? systemctl disable nfs-server.service && systemctl stop nfs-server.service
09:43 Zordrak the service already is stopped and disabled
09:44 ndevos Zordrak: oh, maybe you have a nfs client on that storage server too? The Linux kernel nfs-client will load the lockd kernel module that registers itself at portmap/rpcbind too
09:45 ndevos Zordrak: you can check with the rpcinfo command what rpc-programs are registered, check that before you start the gluster/nfs server
09:45 Zordrak well certainly an nfs client as it's used to mount gluster
09:46 Zordrak nfs, nfs_acl, nlockmgr{1,3,4}.. all come up
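A minimal sketch of inspecting and clearing the stale registrations just listed with rpcinfo before restarting glusterd; the program names/versions are assumptions that should be matched against the actual rpcinfo -p output, and deregistering nlockmgr will break kernel nfs-client locking, as noted below:

    # list what is currently registered with rpcbind
    rpcinfo -p

    # deregister the kernel services that collide with gluster/nfs
    # (assumption: these are the programs/versions shown by rpcinfo -p)
    rpcinfo -d nfs 3
    rpcinfo -d nfs_acl 3
    rpcinfo -d nlockmgr 1
    rpcinfo -d nlockmgr 3
    rpcinfo -d nlockmgr 4

    # then restart glusterd so it can register its own NFS/NLM services
    systemctl restart glusterd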
09:50 shubhendu joined #gluster
09:50 ndevos nfs-clients and the gluster nfs-server can not reliably work together on the same server, for mounting you should rather use fuse and not nfs on 'localhost'
09:50 ndevos or, you can mount the nfs export with 'nolock', that is the RPC program that conflicts...
09:51 nishanth joined #gluster
09:52 ndevos but of course, if you mount with nolock, locking over NFS will not work :)
09:52 Zordrak yar.. plus it's not manually specified, it's ovirt (RHEV) mounting a remote resource
09:53 Zordrak 10.10.10.254:engine on /rhev/data-center/mnt/10.10.10.254:engine type nfs (rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,mountaddr=10.10.10.254,mountvers=3,mountport=38465,mountproto=tcp,local_lock=none,addr=10.10.10.254)
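For reference, a hedged example of the manual 'nolock' mount ndevos mentioned above; server, volume and mountpoint are placeholders:

    # NFSv3 mount of a gluster export without registering NLM at rpcbind
    mount -t nfs -o vers=3,proto=tcp,nolock 10.10.10.254:/engine /mnt/engine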
09:54 Zordrak ctdb hosts 10.10.10.254 across two virt hosts and engine is then served by gluster's nfs
09:55 Zordrak (ovirt cant use gluster directly for this >.< must be NFS)
09:56 Zordrak So, as soon as the server boots, it connects up the NFS from the other box which is still up and therefore holds the ctdb ip address
09:56 Zordrak Then it tries to start its own gluster process
09:56 Zordrak and fails
09:56 ndevos hmm, I thought ovirt could use a 'posix filesystem' for that? if you specify glusterfs as type, it should mount?
09:56 Zordrak i dont think so for this instance, it's a HostedEngine install
09:57 Zordrak the only option presented is NFS and I'm sure a dev also confirmed the same a few days ago
09:57 ndevos oh, that could be, I'm not much aware of the different exports that ovirt uses
09:58 Zordrak I could add the rpcinfo -d calls to the init of gluster... :-D
09:58 Zordrak not the cleanest solution
09:58 ndevos and it may break the nfs-client locking
10:00 harish_ joined #gluster
10:00 ndevos I guess you could ask in some ovirt channel (or on a list) about mounting a filesystem for HostedEngine, maybe you can pass some filesystem type somewhere...
10:01 ndevos you can also ask that question on gluster-users@gluster.org, there are some guys on that list that know more about ovirt
10:01 Zordrak perhaps.. but alternatively, if I can get glusterd to start before the client mount, perhaps that would be fine?
10:03 ndevos nfs requires (both server and client) to register a locking service at portmap/rpcbind, only one of gluster-nfs or linux-nfs-client can register, the other will not function correctly
10:06 Zordrak hm.
10:09 Zordrak this must have been dealt with before.. the mechanism for running HA-HostedEngine using gluster has been previously deployed with success - I will take this info back to the RH-dev i was working with and discuss. Thanks for the input
10:14 edward1 joined #gluster
10:15 ndevos okay, glad it helps a little, please let us know how you get it solved :)
10:16 XpineX__ joined #gluster
10:19 Zordrak will do
10:19 ndevos thanks!
10:26 rjoseph joined #gluster
10:26 rajesh joined #gluster
10:34 ramteid joined #gluster
10:34 vincent_vdk joined #gluster
10:37 bala joined #gluster
10:46 kkeithley1 joined #gluster
10:52 tru_tru joined #gluster
10:55 hagarth joined #gluster
11:03 jiffin1 joined #gluster
11:21 chirino joined #gluster
11:22 bene2 joined #gluster
11:30 diegows joined #gluster
11:32 jiffin joined #gluster
11:46 shubhendu joined #gluster
11:46 azar joined #gluster
11:48 azar Hi everyone, I need some help about glusterfs source code. I can not understand the variable "first_free" in "_fdtable structure". what is the use of it?
11:52 tom[] joined #gluster
11:59 LebedevRI joined #gluster
12:02 soumya_ joined #gluster
12:03 meghanam_ joined #gluster
12:03 meghanam joined #gluster
12:05 Slashman joined #gluster
12:09 dusmant joined #gluster
12:12 Slashman joined #gluster
12:13 glusterbot New news from resolvedglusterbugs: [Bug 764655] NetBSD port <https://bugzilla.redhat.com/show_bug.cgi?id=764655>
12:15 srj007 joined #gluster
12:16 chirino joined #gluster
12:19 MickaTri joined #gluster
12:21 MickaTri Hi, have you heard about Sheepdog ?
12:22 lalatenduM MickaTri, yup,
12:22 MickaTri Is it like glusterfs ?
12:22 MickaTri Or maybe a little bit basic
12:23 lalatenduM MickaTri, yup, you're right, it is a DFS
12:23 _Bryan_ joined #gluster
12:24 itisravi joined #gluster
12:26 MickaTri but I can't tell whether people are still working on it or not...
12:30 lalatenduM MickaTri, https://github.com/sheepdog/sheepdog does not look bad
12:30 glusterbot Title: sheepdog/sheepdog · GitHub (at github.com)
12:31 pkoro joined #gluster
12:33 MickaTri thx ;)
12:33 LHinson joined #gluster
12:44 shubhendu joined #gluster
12:48 B21956 joined #gluster
12:48 soumya joined #gluster
12:58 srj007 joined #gluster
13:01 MickaTri Why is glusterfs better than ceph?
13:02 nishanth joined #gluster
13:05 nbalachandran joined #gluster
13:05 capri MickaTri, it isn't better, it's completely different - if you need a filesystem use gluster - for block storage use ceph
13:05 shubhendu joined #gluster
13:06 MickaTri To be used with Proxmox  exactly ?
13:06 MickaTri Not just alone
13:07 hagarth joined #gluster
13:08 rjoseph joined #gluster
13:10 nbalachandran joined #gluster
13:11 nbalachandran joined #gluster
13:13 harish_ joined #gluster
13:13 bjornar joined #gluster
13:14 saurabh joined #gluster
13:14 capri MickaTri, for proxmox and virtualization i would use gluster and nfs
13:14 MickaTri and thx, it was my first idea :D
13:18 bennyturns joined #gluster
13:19 elico joined #gluster
13:20 ekuric joined #gluster
13:23 deeville joined #gluster
13:25 bennyturns joined #gluster
13:34 julim joined #gluster
13:36 shubhendu joined #gluster
13:37 tdasilva joined #gluster
13:39 nshaikh joined #gluster
13:44 _Bryan_ joined #gluster
13:44 jmarley joined #gluster
13:47 kumar joined #gluster
13:50 failshell joined #gluster
13:51 failshell joined #gluster
13:51 xleo joined #gluster
13:53 jdarcy joined #gluster
13:53 justyns joined #gluster
13:54 justyns joined #gluster
13:55 justyns joined #gluster
13:58 wushudoin| joined #gluster
14:00 ekuric1 joined #gluster
14:01 toti joined #gluster
14:01 ekuric1 joined #gluster
14:13 coredumb hello folks
14:13 coredumb anyone using gluster NFS behind haproxy here?
14:13 plarsen joined #gluster
14:15 mojibake joined #gluster
14:15 ekuric joined #gluster
14:16 KenShiro|MUPF I have a 3 brick cluster, each brick on a different server
14:17 KenShiro|MUPF if I launch the remove-brick operation on a brick, will all the data contained in this brick be rebalanced between the 2 remaining bricks?
14:19 bennyturns joined #gluster
14:19 KenShiro|MUPF and : http://toruonu.blogspot.fr/2012/12/xfs-vs-ext4.html is still relevant ?
14:19 glusterbot Title: Ramblings on IT and Physics: XFS vs EXT4 (at toruonu.blogspot.fr)
14:19 KenShiro|MUPF (gluster 3.5.2 here)
14:20 itisravi joined #gluster
14:21 kkeithley_ yes, the remove brick will start a rebalance. (Don't shut the server off until the rebalance finishes.)
14:22 KenShiro|MUPF ok thx kkeithley
14:23 ndevos coredumb: what kind of ha-proxy for nfs?
14:24 coredumb ndevos: haproxy in front of my gluster servers listening and forwarding port 2049 and 111
14:25 coredumb but seems something's not sufficient
14:25 ndevos ~ports | coredumb
14:25 glusterbot coredumb: glusterd's management port is 24007/tcp (also 24008/tcp if you use rdma). Bricks (glusterfsd) use 49152 & up since 3.4.0 (24009 & up previously). (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111 and 2049 since 3.4.
14:25 ndevos there are others ports you need, like for the mountd service
14:26 ndevos and lockd and such, see 'rpcinfo -p' for what ports are used
14:27 coredumb ndevos: ok i see let me try this
14:32 coredumb thx ndevos :)
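A minimal haproxy sketch of the TCP forwarding being discussed, extended to the extra ports glusterbot lists above; backend addresses are placeholders, and the 38465-38468 range assumes the default gluster mountd/NLM ports:

    # /etc/haproxy/haproxy.cfg (fragment) - one TCP listener per NFS-related port
    listen gluster_nfs
        bind *:2049
        mode tcp
        server gluster1 192.0.2.11:2049 check
        server gluster2 192.0.2.12:2049 check

    listen gluster_portmapper
        bind *:111
        mode tcp
        server gluster1 192.0.2.11:111 check
        server gluster2 192.0.2.12:111 check

    # repeat the same pattern for 38465-38468 (mountd, NLM),
    # using the ports reported by 'rpcinfo -p' on the gluster servers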
14:32 KenShiro|MUPF ndevos : is the ext4 bug still remaining ?
14:33 harish joined #gluster
14:35 ndevos KenShiro|MUPF: it should have been solved in recent versions of glusterfs
14:36 ndevos @ext4 bug
14:36 meghanam_ joined #gluster
14:36 meghanam joined #gluster
14:36 ndevos well, maybe glusterfs doesnt know about it...
14:39 soumya joined #gluster
14:41 plarsen joined #gluster
14:46 jiku joined #gluster
14:46 harish joined #gluster
14:49 coredumb ndevos: one of my nodes runs NLM on 38468 but the other one on 59579
14:49 coredumb is there a way to force it on 38468 ?
14:50 ndevos coredumb: gluster NLM normally tries to run on 38468, port 59579 might be in use by a nfs-client (lockd kernel module)
14:51 coredumb oh yeah true
14:51 coredumb still it's not running on 28468
14:51 coredumb 38*
14:51 coredumb :/
14:52 ndevos well, only one NLM service can register at portmap/rpcbind, either linux-kernel-nfs(lockd) or gluster/nfs
14:52 soumya fire
14:52 ndevos fire?
14:52 soumya sorry..its a mistake..wrong window
14:52 ndevos ah, now we all know your password :D
14:53 soumya :D .. if it was actually that :P
14:53 ndevos hehe
14:55 coredumb ndevos: can i try to restart the gluster nfs stack ?
14:57 ndevos coredumb: sure, kill the process (check the PID with 'gluster volume status') and use 'gluster volume start $SOME_VOLUME force'
14:57 KenShiro|MUPF ndevos : thx (sorry was afk)
14:57 coredumb ndevos ok
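A hedged sketch of the restart sequence ndevos describes; the volume name is a placeholder and the PID comes from the 'NFS Server on localhost' line of the status output:

    # find and kill the gluster/nfs process
    gluster volume status
    kill <PID of the NFS Server on localhost line>

    # force-start any volume to respawn the NFS server
    gluster volume start myvol force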
14:57 KenShiro|MUPF last question for today, do you know the diff between remove-brick start and commit
14:58 KenShiro|MUPF you perform first the start and then the commit ?
15:00 ndevos I think in current (newer?) versions 'remove-brick start' has been removed/deprecated, but I'm not 100% sure, it's not something I regularly use
15:00 KenShiro|MUPF ok, I launched it with the start command and the operation has begun
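For reference, a sketch of the remove-brick sequence being discussed, assuming the 3.5-era CLI; volume name and brick path are placeholders:

    # start migrating data off the brick
    gluster volume remove-brick myvol server3:/export/brick1 start

    # poll until the migration shows 'completed'
    gluster volume remove-brick myvol server3:/export/brick1 status

    # only then remove the brick from the volume definition
    gluster volume remove-brick myvol server3:/export/brick1 commit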
15:00 jvdm joined #gluster
15:03 coredumb ndevos: gosh works on one but not on the other one through haproxy -_-
15:03 daMaestro joined #gluster
15:08 edong23_ gluster in centos 7? viable?
15:08 edong23_ anyone used it?
15:08 edong23_ or shoudl i stick with 6.x?
15:09 ndevos coredumb: how do you mean? what is not working?
15:09 coredumb ndevos: the mount from the client fails with RPC erro
15:09 coredumb r
15:09 ndevos edong23_: some people seem to run it on centos-7
15:10 edong23_ ok. i didnt know if there were any caveats
15:10 edong23_ hav eyou not tried it on 7 ndevos?
15:10 ndevos coredumb: hmm, I'm not sure why it would fail, it should be possible to set it up....
15:11 coredumb ndevos: i wonder if putting a shared ip on the hosts wouldn't be easier -_-
15:13 doo joined #gluster
15:13 ndevos edong23_: I've tested it briefly, worked for me - there are also some test environments that run on centos-7, so I do not think there are any major issues
15:13 edong23_ cool
15:13 edong23_ well, in a few hours ill have some test results as well
15:14 ndevos coredumb: yes, give all your storage servers a virtual ip, and use rrdns to do some load balancing
15:14 ndevos edong23_: cool :)
15:15 coredumb ndevos: is setting up two nodes without any brick to only share NFS viable ?
15:15 ndevos coredumb: oh, you can do that, just
15:15 edong23_ ndevos: i had a strange issue that made me abandon gluster a while back... but i think i know what it is now
15:16 ndevos .. add those servers to the pool and you should be set
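A minimal sketch of adding such brick-less NFS-only nodes to the trusted pool; hostnames are placeholders:

    # run from an existing member of the trusted pool
    gluster peer probe nfs1.example.com
    gluster peer probe nfs2.example.com
    gluster peer status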
15:16 edong23_ i was doing gluster over rdma (40Gbps) and the volume kept getting corrupted and being inaccessible
15:16 coredumb is there much overhead ?
15:16 edong23_ i said "!@#$ gluster" and moved on
15:17 ndevos coredumb: in that case, the nfs-server (a glusterfs client) will just function like a proxy, I doubt there is much overhead
15:17 edong23_ but later, i discovered that rdma was actually not working periodically on one card (by testing with nfs over rdma) so one of my bricks was disappearing and reappearing randomly
15:17 coredumb ndevos: ok
15:17 edong23_ im delving back into that problem to see if i can solve it (probably need a new card)
15:18 coredumb ndevos: and if I'm not wrong, in a two-node replica setup having more nodes without bricks helps with split-brain
15:19 plarsen joined #gluster
15:22 ndevos edong23_: rdma support in gluster might not be very stable, it can be more stable to use IP-over-IB instead - but I have not followed the latest progress with IB/rdma
15:22 edong23_ well, im goign to find out
15:22 edong23_ its a minor change from rdma to ipoib
15:22 edong23_ but i want to exploit rdma first
15:22 edong23_ see how it goes
15:23 edong23_ then ill shine my whistle while i pee
15:23 bene2 joined #gluster
15:24 ndevos coredumb: yes, additional glusterd systems can help with the quorum to prevent split-brains
15:24 ndevos `.~.
15:24 ndevos bleah...
15:25 coredumb so it's a win win
15:25 coredumb :)
15:27 coredumb ok sorry being a pain... i peer probed those two "new" nodes
15:27 coredumb even force started volumes on them
15:28 coredumb and i don't have nfs server running on them
15:28 coredumb ....
15:33 ndevos hmm, I would expect to see an nfs process on them, but I havent tried it myself
15:35 coredumb ndevos: ok i'm being blind i forgot to start rpcbind service -_-
15:35 ndevos ah, good
15:38 jobewan joined #gluster
15:40 coredumb ahah
15:40 coredumb failover works flawlessly
15:40 coredumb failback when rebooting the master not so much :P
15:41 coredumb share frozen
15:47 glu joined #gluster
15:48 glu Hello all, can someone please help me with mandatory locking on a volume
15:48 edong23_ coredumb: which are you calling master?
15:48 coredumb edong23_: master of vip
15:49 edong23_ there should be no reason that didnt work
15:49 glu anyone ?
15:49 edong23_ can you post your config?
15:50 edong23_ im assuming keepalived
15:50 glu when i try gluster vol set glu1 features.locks enable it says features.locks does not exist
15:53 coredumb edong23_: yes
15:53 coredumb lemme check something
15:55 hagarth joined #gluster
15:56 kmai007 glu: is that a listed feature on "glusterfs set features"
15:57 kmai007 i see a cluster.eager-lock
15:58 glu hi kmai007, thanks for looking into this for me. I was looking into a solution where two servers can not write to the same file and found the mandatory locks in the gluster website
15:58 glu i am very new to gluster so do not know it in depth
16:00 glu at present if two clients open the same file then both can write to it which is not ideal.
16:02 coredumb edong23_: https://bpaste.net/show/625fdda2d842
16:02 glusterbot Title: show at bpaste (at bpaste.net)
16:03 coredumb so i have dns rr on those two VIP and mount the share with -o tcp,soft,timeo=50
16:03 glu kmai007 have a look at the link http://www.gluster.org/community/documentation/index.php/Translators/features
16:03 glu i am trying to enable the mandatory locking
16:03 coredumb when rebooting one of the nodes, vip moves correctly but the share on the client gets unreachable
16:03 coredumb :(
16:07 coredumb both client and nfs servers are on ESX guests if that matters
16:09 coredumb ls: cannot access /nfs/test/groups/: Input/output error
16:09 coredumb -_-
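A minimal keepalived sketch of the kind of VIP failover setup described above (the pasted config itself is no longer available); interface, router id, priority and address are hypothetical:

    # /etc/keepalived/keepalived.conf (fragment) on each NFS node
    vrrp_instance NFS_VIP {
        state BACKUP              # let priority decide who holds the VIP
        interface eth0
        virtual_router_id 51
        priority 100              # use a lower value on the peer
        advert_int 1
        virtual_ipaddress {
            192.0.2.254/24
        }
    }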
16:11 sputnik13 joined #gluster
16:11 lalatenduM joined #gluster
16:15 nshaikh joined #gluster
16:15 ghenry joined #gluster
16:15 ghenry joined #gluster
16:19 coredumb edong23_: any idea ?
16:22 coredumb i have to ask infrastructure to put the promisc mode on the vswitch
16:22 coredumb pretty sure this makes it fail
16:24 kmai007 glu i did not have experience with that feature, when you find out let me know.  I think it will be dependent on the version of glusterfs you're using
16:25 edong23_ when you "reboot master" did you check /var/run/vrrpd/vrrp.log?
16:26 edong23_ coredumb: promisc shouldnt matter...
16:28 dtrainor joined #gluster
16:29 coredumb edong23_: all is logged in /var/log/messages
16:30 bala joined #gluster
16:30 coredumb i can see the vip being started on the slave almost instantly and the gratuitous arp being issued
16:38 coredumb edong23_: could that come from my gateway ?
16:39 PeterA joined #gluster
16:45 edong23_ what?
16:46 tdasilva joined #gluster
16:48 Pupeno_ joined #gluster
16:48 anoopcs joined #gluster
16:50 LHinson joined #gluster
16:53 LHinson1 joined #gluster
16:54 JayJ joined #gluster
17:02 elico joined #gluster
17:08 zerick joined #gluster
17:17 wgao joined #gluster
17:25 coredumb edong23_: the gateway of my subnet refusing arp changes or something like that
17:28 bennyturns joined #gluster
17:28 fyxim__ joined #gluster
17:29 atrius joined #gluster
17:33 coredump joined #gluster
17:39 richvdh joined #gluster
17:49 edong23_ coredumb: it could... but im assuming these are on the same network
17:49 edong23_ right?
18:01 coredump|br joined #gluster
18:05 Slashman joined #gluster
18:05 rafi1 joined #gluster
18:07 glusterbot New news from newglusterbugs: [Bug 1142419] tests: regression, can't run `prove $t` in {basic,bugs,encryption,features,performance} subdirs <https://bugzilla.redhat.com/show_bug.cgi?id=1142419>
18:10 nshaikh joined #gluster
18:12 daMaestro joined #gluster
18:12 julim joined #gluster
18:19 coredumb edong23_: right gw shouldn't impact anything
18:19 coredumb vm are not on the same esx host though
18:28 doo_ joined #gluster
18:33 sputnik1_ joined #gluster
18:36 Pupeno joined #gluster
18:40 LHinson joined #gluster
18:41 sputnik13 joined #gluster
18:42 coredump joined #gluster
18:43 sputnik13 joined #gluster
18:50 rafi1 joined #gluster
19:02 kanagaraj joined #gluster
19:03 recidive joined #gluster
19:03 xoritor joined #gluster
19:03 xoritor hey all
19:03 xoritor ok.. what do you think of a 2 node cluster with 4 bricks in each node?
19:04 xoritor maybe expanding to 4 nodes
19:04 xoritor too much room for failure?
19:05 xoritor quad core xeon with gluster + ctdb & samba
19:05 xoritor 32 GB ram and 4 x 4TB HDDs
19:06 xoritor distrepl with replica 2 over all 8 drives
19:06 xoritor 4 x 1 GB NICs
19:09 coredump joined #gluster
19:13 pkoro joined #gluster
19:18 semiosis xoritor: whats it for?
19:18 xoritor mainly files for samba
19:19 semiosis why ctdb?
19:19 xoritor front office people
19:19 xoritor balancing of samba
19:19 xoritor if a node goes down, etc..
19:19 semiosis interesting
19:19 semiosis does that work with the ,,(samba vfs) connector?
19:19 glusterbot I do not know about 'samba vfs', but I do know about these similar topics: 'sambavfs'
19:19 semiosis eh
19:19 semiosis ,,(sambavfs)
19:19 glusterbot http://lalatendumohanty.wordpress.com/2014/02/11/using-glusterfs-with-samba-and-samba-vfs-plugin-for-glusterfs-on-fedora-20/
19:20 xoritor lol
19:20 xoritor supposed to
19:20 semiosis sweet
19:20 xoritor you give it a VIP and it fails over
19:20 xoritor its just like any other H/A but really easy to setup and made to work with samba/nfs easily
19:20 xoritor low overhead
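A hedged smb.conf sketch of the samba-vfs setup referenced in the link above; share name, volume name and log path are placeholders:

    # /etc/samba/smb.conf (fragment) - export a gluster volume via vfs_glusterfs
    [office]
        path = /
        vfs objects = glusterfs
        glusterfs:volume = myvol
        glusterfs:logfile = /var/log/samba/glusterfs-myvol.%M.log
        glusterfs:loglevel = 7
        kernel share modes = no
        read only = no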
19:22 xoritor i have had glusterfs running fine on these systems
19:22 xoritor i have run it with VMs
19:22 xoritor with bd-xlator
19:22 xoritor but never in the config i talked about
19:23 xoritor i only ran it in a 3 node replicated cluster
19:23 xoritor not a 2 node dist repl over 8 bricks
19:24 justyns joined #gluster
19:24 xoritor essentially removing lvm from the mix and giving the whole drive to the brick
19:25 xoritor and making one node a VM only host as they do not really need to be H/A honestly
19:26 xoritor what i need is 100% guarantee that files (data) is there and working
19:26 xoritor while 2 nodes is not ideal i can grow it by 2 more nodes soon
19:26 xoritor so that would be 16 4 TB drives over 4 nodes
19:27 xoritor and is 32 GB of ram enough?  right now i am having a hard time not using swap with that
19:29 semiosis hard to say what is enough.  that's capacity planning & I can't just come up with an answer for you
19:29 xoritor right
19:29 xoritor :-/
19:29 xoritor heh
19:30 semiosis you might want to consider replica 3 & quorum
19:30 semiosis esp if you're going to run samba on your glusterfs servers
19:30 xoritor im thinking to add some nodes and maybe up it
19:31 semiosis the risk with two servers is, if you're running samba on them, and there's a partition that splits the two servers, but samba clients can still reach both servers, then if the same file is written to both servers, it will be split brain
19:31 semiosis if you use quorum this can't happen
19:31 xoritor yea
19:32 semiosis because if one server gets split off it turns read only
19:32 xoritor that is why i use 3 or more usually
19:32 xoritor odd number
19:32 xoritor 3 5 7
19:32 xoritor etc...
19:32 semiosis 9 11 13
19:32 xoritor 13 is a lucky number
19:32 semiosis so is 7
19:33 xoritor especially on friday
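A hedged sketch of the quorum options semiosis is referring to above; the volume name is a placeholder and the option names assume the 3.5-era CLI:

    # client-side quorum: writes are refused unless a majority of the replica set is reachable
    gluster volume set myvol cluster.quorum-type auto

    # server-side quorum: bricks are taken down when glusterd loses quorum in the pool
    gluster volume set myvol cluster.server-quorum-type server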
19:34 B21956 joined #gluster
19:37 Pupeno joined #gluster
19:43 Pupeno joined #gluster
19:45 glusterbot New news from resolvedglusterbugs: [Bug 1049727] Dist-geo-rep : volume won't be able to stop untill the geo-rep session is deleted. <https://bugzilla.redhat.com/show_bug.cgi?id=1049727> || [Bug 910217] Implement a way for dynamically disabling eager-lock based on number of files opened on the inode <https://bugzilla.redhat.com/show_bug.cgi?id=910217> || [Bug 958118] Provide a way to specify the larger file to be the source
19:47 longshot902 joined #gluster
19:51 elico joined #gluster
19:53 B21956 joined #gluster
19:57 lpabon joined #gluster
20:02 gildub joined #gluster
20:04 tom[] joined #gluster
20:16 zerick joined #gluster
20:16 kkeithley1 joined #gluster
20:18 klaas joined #gluster
20:26 eshy joined #gluster
20:50 Maya_ joined #gluster
20:57 andreask joined #gluster
21:03 Pupeno_ joined #gluster
21:24 Maya_ joined #gluster
21:31 failshel_ joined #gluster
21:34 Pupeno joined #gluster
21:37 glusterbot New news from newglusterbugs: [Bug 1132766] ubuntu ppa: 3.5 missing hooks and files for new geo-replication <https://bugzilla.redhat.com/show_bug.cgi?id=1132766>
21:39 Pupeno_ joined #gluster
21:57 Maya_ Hi everyone- I’m experiencing the missing hook-scripts/geo-replication issue outlined here: https://bugzilla.redhat.com/show_bug.cgi?id=1132766 on Ubuntu 12.04. I'm a Gluster newbie, but wanted to know if there was a relatively easy way to fix this without having to wait for a new build?
21:57 glusterbot Bug 1132766: unspecified, unspecified, ---, gluster-bugs, NEW , ubuntu ppa: 3.5 missing hooks and files for new geo-replication
21:59 semiosis Maya_: switch to the new ppa: gluster/glusterfs-3.5
21:59 semiosis pretty sure i uploaded new packages there
21:59 semiosis sorry i haven't widely announced the change of PPA yet
21:59 semiosis @ppa
21:59 glusterbot semiosis: The official glusterfs packages for Ubuntu are available here: 3.4 stable: http://goo.gl/u33hy -- 3.5 stable: http://goo.gl/cVPqEH -- introducing QEMU with GlusterFS 3.4 support: http://goo.gl/7I8WN4
22:00 semiosis @forget ppa
22:00 glusterbot semiosis: The operation succeeded.
22:01 semiosis @learn ppa as The official glusterfs packages for Ubuntu are available here: 3.4 stable: http://goo.gl/M9CXF8 -- 3.5 stable: http://goo.gl/6HBwKh -- QEMU with GlusterFS support: http://goo.gl/e8IHnQ (3.4) & http://goo.gl/tIJziO (3.5)
22:01 glusterbot semiosis: The operation succeeded.
22:04 Maya_ @semiosis: I actually spun up some test VMs to test geo-replication and installed 3.5 but still got the "hook-script" not present error. Could I be doing something wrong? I ran the following command: "sudo add-apt-repository ppa:semiosis/ubuntu-glusterfs-3.5" and my version of gluster is accurately reported as being 3.5.2
22:05 semiosis you're using the old PPA
22:05 semiosis which doesnt have the latest packages
22:06 Maya_ Oh, I beg your pardon. I see what you mean. Thanks, I'll give it a go :)
22:06 semiosis good luck.  let me know how it goes :)
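A hedged sketch of the PPA switch being described, using the old and new PPA names from the conversation; it assumes add-apt-repository on this release supports --remove:

    # drop the old PPA, add the new official one, then upgrade the packages
    sudo add-apt-repository --remove ppa:semiosis/ubuntu-glusterfs-3.5
    sudo add-apt-repository ppa:gluster/glusterfs-3.5
    sudo apt-get update
    sudo apt-get install glusterfs-server glusterfs-client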
22:06 coredump joined #gluster
22:07 calum_ joined #gluster
22:19 tiglog joined #gluster
22:36 plarsen joined #gluster
22:38 tiglog joined #gluster
22:47 Maya_ @semiosis: Fantastic, geo-replication now works after switching the PPA. Thanks for all of your help!
22:47 semiosis so glad to hear it!  you're welcome
22:48 coredump joined #gluster
22:51 foster joined #gluster
22:59 jvdm joined #gluster
23:00 Pupeno joined #gluster
23:12 AaronGr joined #gluster
23:15 hchiramm__ joined #gluster
23:20 ttk joined #gluster
23:28 LHinson joined #gluster
23:44 coredump joined #gluster
23:45 diegows joined #gluster
