
IRC log for #gluster, 2014-10-20


All times shown according to UTC.

Time Nick Message
00:01 cjanbanan joined #gluster
00:02 calisto joined #gluster
00:33 msmith_ joined #gluster
00:38 bala joined #gluster
00:56 msmith_ joined #gluster
01:08 haomaiwa_ joined #gluster
01:27 haomaiwa_ joined #gluster
01:38 harish joined #gluster
02:01 haomaiwa_ joined #gluster
02:03 RameshN joined #gluster
02:04 haomai___ joined #gluster
02:14 msmith_ joined #gluster
02:17 suliba joined #gluster
02:22 bala joined #gluster
02:28 rjoseph joined #gluster
02:50 calisto joined #gluster
02:53 sputnik13 joined #gluster
02:56 haomaiwa_ joined #gluster
03:04 kshlm joined #gluster
03:11 haomai___ joined #gluster
03:20 overclk joined #gluster
03:20 haomaiwa_ joined #gluster
03:23 lalatenduM joined #gluster
03:45 nbalachandran joined #gluster
03:47 cjanbanan joined #gluster
03:50 shubhendu joined #gluster
03:57 bharata-rao joined #gluster
04:03 itisravi joined #gluster
04:12 glusterbot New news from newglusterbugs: [Bug 1154491] split-brain reported on files whose change-logs are all zeros <https://bugzilla.redhat.com/show_bug.cgi?id=1154491>
04:32 DV_ joined #gluster
04:40 kdhananjay joined #gluster
04:42 ppai joined #gluster
04:43 ndarshan joined #gluster
04:46 rafi1 joined #gluster
04:46 Rafi_kc joined #gluster
04:51 meghanam joined #gluster
04:51 meghanam_ joined #gluster
04:54 kanagaraj joined #gluster
04:54 deepakcs joined #gluster
04:55 nishanth joined #gluster
04:58 aulait joined #gluster
05:06 atinmu joined #gluster
05:11 prasanth_ joined #gluster
05:17 smohan joined #gluster
05:17 spandit joined #gluster
05:24 karnan joined #gluster
05:25 kshlm joined #gluster
05:26 sputnik13 joined #gluster
05:29 aravindavk joined #gluster
05:30 atinmu joined #gluster
05:32 msmith_ joined #gluster
05:34 sputnik13 joined #gluster
05:37 anoopcs joined #gluster
05:38 hagarth joined #gluster
05:38 dusmant joined #gluster
05:41 kaushal_ joined #gluster
05:51 sputnik13 joined #gluster
05:56 kanagaraj joined #gluster
06:01 cjanbanan joined #gluster
06:06 saurabh joined #gluster
06:10 jiffin joined #gluster
06:10 LebedevRI joined #gluster
06:11 ndarshan joined #gluster
06:11 bala joined #gluster
06:13 soumya joined #gluster
06:15 shubhendu joined #gluster
06:17 aravindavk joined #gluster
06:17 dusmant joined #gluster
06:24 atinmu joined #gluster
06:26 ivok_ joined #gluster
06:34 haomaiwang joined #gluster
06:34 Fen2 joined #gluster
06:35 Fen2 Hi all ! :)
06:36 dusmant joined #gluster
06:40 dusmant joined #gluster
06:43 ricky-ticky1 joined #gluster
06:44 bala joined #gluster
06:48 verdurin joined #gluster
06:53 nishanth joined #gluster
06:54 atalur joined #gluster
06:59 Philambdo joined #gluster
07:01 ctria joined #gluster
07:10 nshaikh joined #gluster
07:14 deepakcs joined #gluster
07:15 Fen2 joined #gluster
07:17 aravindavk joined #gluster
07:21 msmith_ joined #gluster
07:25 masterzen joined #gluster
07:31 kumar joined #gluster
07:33 fsimonce joined #gluster
07:37 kshlm joined #gluster
07:37 kshlm joined #gluster
07:38 cjanbanan joined #gluster
07:44 ivok joined #gluster
07:45 msmith_ joined #gluster
07:48 haomai___ joined #gluster
07:50 cjanbanan joined #gluster
07:57 rgustafs joined #gluster
07:59 anands joined #gluster
08:07 the-me joined #gluster
08:12 liquidat joined #gluster
08:19 andreask joined #gluster
08:21 harish joined #gluster
08:23 Norky joined #gluster
08:27 raghu joined #gluster
08:31 ivok joined #gluster
08:33 spandit_ joined #gluster
08:47 Norky joined #gluster
08:55 rjoseph joined #gluster
08:55 vimal joined #gluster
09:00 ctria joined #gluster
09:04 bala joined #gluster
09:16 dusmant joined #gluster
09:16 DV__ joined #gluster
09:23 bala joined #gluster
09:27 tryggvil joined #gluster
09:34 deepakcs joined #gluster
09:36 dusmant joined #gluster
09:40 cyberbootje joined #gluster
09:43 glusterbot New news from newglusterbugs: [Bug 1154599] Create a document on how "heal" commands work <https://bugzilla.redhat.com/show_bug.cgi?id=1154599>
09:44 aravindavk joined #gluster
09:46 tryggvil joined #gluster
09:48 Slashman joined #gluster
09:49 atinmu joined #gluster
09:53 edward1 joined #gluster
10:00 spandit_ joined #gluster
10:01 aulait joined #gluster
10:06 bala joined #gluster
10:20 kshlm joined #gluster
10:22 kanagaraj joined #gluster
10:29 ctria joined #gluster
10:29 tryggvil joined #gluster
10:30 RaSTar joined #gluster
10:31 RaSTar joined #gluster
10:34 social joined #gluster
10:35 harish joined #gluster
10:39 kanagaraj joined #gluster
10:39 aravindavk joined #gluster
10:40 atinmu joined #gluster
10:40 dusmant joined #gluster
10:41 calisto joined #gluster
10:46 abcd593 joined #gluster
10:52 abcd593 Hi, I'm new to gluster and have just installed it on 2 webservers that are about to deliver a lot of video/audio files (about 16000 files, ca. 1TB in total). 1) What is a good way to test my configuration? with `ab`? 2) a simple `ls` in the directory takes several seconds (there are about 615 files there) - is that OK? Thank you!
11:07 mojibake joined #gluster
11:14 kaushal_ joined #gluster
11:18 ppai joined #gluster
11:22 Ark joined #gluster
11:22 tryggvil joined #gluster
11:23 kanagaraj joined #gluster
11:23 harish joined #gluster
11:23 dusmant joined #gluster
11:29 ppai joined #gluster
11:30 virusuy joined #gluster
11:35 diegows joined #gluster
11:44 glusterbot New news from newglusterbugs: [Bug 1154635] glusterd: Gluster rebalance status returns failure <https://bugzilla.redhat.com/show_bug.cgi?id=1154635>
11:47 ekuric joined #gluster
11:47 soumya_ joined #gluster
11:49 meghanam joined #gluster
11:49 meghanam_ joined #gluster
11:50 nishanth joined #gluster
11:57 abcd593 left #gluster
12:04 anands joined #gluster
12:05 Fen1 joined #gluster
12:09 davidhadas joined #gluster
12:11 R0ok_|mkononi joined #gluster
12:13 R0ok_|mkononi joined #gluster
12:15 brycelane left #gluster
12:18 R0ok_|mkononi joined #gluster
12:18 ctria joined #gluster
12:19 _dist joined #gluster
12:26 R0ok_|mkononi joined #gluster
12:27 dguettes joined #gluster
12:30 dusmant joined #gluster
12:32 AdrianH joined #gluster
12:34 AdrianH Hello, I have a question: I have a Gluster system up and running, distributed and replicated. One of our servers is only reading from it and I want to improve its performance. I have Gluster mounted as "glusterfs". Does the read time improve if I mount it as NFS? Thanks for reading....
12:35 AdrianH Or as another FS?
12:36 AdrianH My bricks are XFS by the way...
12:37 _dist AdrianH: it depends, NFS is typically faster for many small files than glusterfs fuse. Depending on what you're doing with the mount you might consider a vfs mount instead.
12:41 calisto joined #gluster
12:43 AdrianH I am using it for an IIPimage server so it is reading jpeg2000 and serving tiles...
12:45 _dist AdrianH: sorry I don't know much about that. I'm sure someone else will be able to help more than myself, but I think it might be early in the day for this channel :)
12:48 R0ok_|mkononi joined #gluster
12:50 kkeithley serving "tiles" sounds like lots of small files. Not exactly Gluster's strong suit.  Mounting the gluster volume using NFS instead of native/fuse will probably be better because the NFS client caches more aggressively.  As for XFS vs other file systems for the bricks, you don't have too many choices. Red Hat recommends (or requires) XFS in RHS. You can use any file system that supports extended attributes.
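
For anyone comparing the two client types later, a rough sketch of the two mounts being discussed (the volume name "myvol" and host "server1" are hypothetical):

    # native/FUSE client: talks to all bricks directly, fails over automatically
    mount -t glusterfs server1:/myvol /mnt/myvol

    # Gluster's built-in NFS server (NFSv3): the kernel NFS client caches more
    # aggressively, which often helps read-heavy small-file workloads
    mount -t nfs -o vers=3,mountproto=tcp server1:/myvol /mnt/myvol
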
12:53 R0ok_|mkononi joined #gluster
13:01 R0ok_|mkononi joined #gluster
13:03 rwheeler joined #gluster
13:06 AdrianH kkeithley & _dist : ok thanks :)
13:06 R0ok_|mkononi joined #gluster
13:07 rgustafs joined #gluster
13:07 tdasilva joined #gluster
13:08 julim joined #gluster
13:11 marcoceppi joined #gluster
13:11 marcoceppi joined #gluster
13:12 theron joined #gluster
13:12 ekuric joined #gluster
13:12 UnwashedMeme joined #gluster
13:15 dusmant joined #gluster
13:16 rolfb joined #gluster
13:19 _dist AdrianH: as for FS I've used glusterfs on ext4, xfs & zfs. However if XFS meets your needs there is no reason to stray from it since RH uses it as their test base
13:21 AdrianH _dist : ok for the bricks. I am mounting the gluster volume with NFS and testing...
13:21 ctria joined #gluster
13:24 theron_ joined #gluster
13:24 harish joined #gluster
13:28 B21956 joined #gluster
13:31 calisto joined #gluster
13:31 kkeithley joined #gluster
13:37 plarsen joined #gluster
13:38 glusterbot New news from resolvedglusterbugs: [Bug 958781] KVM guest I/O errors with xfs backed gluster volumes <https://bugzilla.redhat.com/show_bug.cgi?id=958781>
13:44 dusmant joined #gluster
13:50 _dist joined #gluster
13:52 theron joined #gluster
13:56 bala joined #gluster
14:01 giannello joined #gluster
14:08 plarsen joined #gluster
14:08 mojibake joined #gluster
14:09 glusterbot New news from resolvedglusterbugs: [Bug 905933] GlusterFS 3.3.1: NFS Too many levels of symbolic links/duplicate cookie <https://bugzilla.redhat.com/show_bug.cgi?id=905933>
14:10 jbautista- joined #gluster
14:11 R0ok_|mkononi joined #gluster
14:11 Guest35459 joined #gluster
14:16 theron joined #gluster
14:17 theron_ joined #gluster
14:21 msmith_ joined #gluster
14:25 coredump joined #gluster
14:27 andreask joined #gluster
14:27 bchilds joined #gluster
14:28 bchilds where can i find more info on sharding translator?
14:40 Guest35459 joined #gluster
14:41 Philambdo joined #gluster
14:41 bennyturns joined #gluster
14:44 Arrfab already asked here, but let's try again : I have a gluster setup on 4 nodes, each exposing a brick to form a distributed-replicated vol. while perf seems ok at the disk/xfs level (so directly on each node), it's almost divided by 3 if the same operation is done through gluster. any pointer ?
14:45 R0ok_|mkononi joined #gluster
14:47 ramon_dl joined #gluster
14:50 ramon_dl Arrfab: please explain a little bit more about the net setup (1 or 10Gbps) and which client: fuse or nfs. Also, are you using mostly small files?
14:52 lmickh joined #gluster
14:52 R0ok_|mkononi joined #gluster
14:52 Arrfab ramon_dl: each node has two 1Gb ethernet, and gluster/storage is using second nic. client is either fuse (local backup with rsync) or libgfapi (as those nodes are also kvm hypervisors) and so a mix of small or big files (for qcow2 images, on the volume holding all the kvm/qemu images files)
14:52 anands joined #gluster
14:54 ramon_dl Arrfab: using proxmox, maybe?
14:54 Arrfab ramon_dl: no, opennebula, but it's slow for everything, so not even only for opennebula/kvm VMs
14:57 R0ok_|mkononi joined #gluster
14:57 ndevos Arrfab: not sure if you looked into this already, but maybe it helps: http://www.gluster.org/community/documentation/index.php/Libgfapi_with_qemu_libvirt#Tuning_the_volume_for_virt-store
14:58 Arrfab ndevos: yes, already done/applied for the volume used for virt-store
14:58 ramon_dl Arrfab: fuse/libgfapi in a replicated volume needs to send every block of a file twice. Gluster adds a lot of latency to every metadata operation. For large blocks (>128KB) the latency may not be noticeable, but sending blocks twice cuts bandwidth in half: you can expect at most 55/60MBps on a 1GbE
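
Back-of-the-envelope version of that ceiling (rough numbers, ignoring protocol overhead):

    1 GbE:  1000 Mbit/s / 8                              ~= 117 MB/s usable at best
    replica 2, client-side AFR writes each block twice:  117 / 2 ~= 58 MB/s
    -> which matches the 55/60MBps figure quoted above
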
14:58 Arrfab but perf is awful with a simple setup
14:58 bene joined #gluster
14:59 Arrfab ramon_dl: hmm, 55/60MBps is what I got at the moment
14:59 ramon_dl Arrfab: NO, sorry. I'll take a look at it. thanks!
15:00 ramon_dl Arrfab: that's max speed for a replicated volume (2 replicas) with fuse/libgfapi
15:00 haomaiwa_ joined #gluster
15:01 Arrfab ramon_dl: distributed/replicated
15:01 _dist I get about 300 megabytes /sec on a 3-way replica on libgfapi (sequential write test inside of VM) over a 10gbe backend
15:02 _dist but, that's while other VMs are running, it was a bit higher when testing on a blank volume
15:02 Arrfab ramon_dl: hmm, and if I switch to distributed instead of distributed-replicated
15:02 Arrfab ?
15:03 ramon_dl Arrfab: Yes, of course, distrib/replicated. Remember every file is replicated but it's kept only on a pair of bricks
15:03 ramon_dl Arrfab: without replica you can reach 100/110MBps but you lose redundancy
15:04 rwheeler joined #gluster
15:04 Arrfab ramon_dl: yeah, I know ... but so there is no way to get higher than ~55/60MBps with distributed/replicated on a 1Gb ethernet connection then ? let me try with just a distributed one
15:05 ramon_dl Arrfab: Maybe you can try NFS client?
15:07 Arrfab ramon_dl: I can for my backup vol, but not for my virt-store as it will use libgfapi
15:07 calisto joined #gluster
15:07 ramon_dl Arrfab: the gluster NFS server is responsible for doing the replicas, which means your client sends the data once through one 1Gbps interface and the NFS server sends the replica through the other one
15:08 ramon_dl Arrfab: with the gluster nfs server you lose automatic failover on the client.
15:09 glusterbot New news from resolvedglusterbugs: [Bug 1139999] Rename of a file from 2 clients racing and resulting in an error on both clients <https://bugzilla.redhat.com/show_bug.cgi?id=1139999>
15:10 Arrfab ramon_dl: so it's either replication and slow IO or distributed and losing one node makes the whole volume inconsistent
15:10 ramon_dl _dist: Reaching 300MBps on replica 3 volume over 10GbE for fuse client is maximum speed: 10GbE -->1GBps /3 (replicas)=333MBps
15:10 Arrfab ramon_dl: also, now I understand that everything happens on the client side? so a gluster client has to send the data twice, and it's not replicated *between* gluster nodes in the background ?
15:12 haomaiwang joined #gluster
15:13 ramon_dl Arrfab: that's right! These days AFR (replicated volumes) is done on the client side, except for gluster NFS mounts. New Style Replication, or NSR, is on the way, and it is server-side based!
15:14 Arrfab ramon_dl: ah, then it can explain the issue
15:15 Arrfab ramon_dl: accessing it through nfs would "solve" that issue ?
15:15 ramon_dl Arrfab: In the background, on the server side, the self-heal daemon works when needed, but that's asynchronous
15:16 _dist ramon_dl: yeap, not complaining, just giving perspective to anyone who might be making a setup :)
15:16 eryc joined #gluster
15:16 eryc joined #gluster
15:16 _dist ramon_dl: but, we aren't using fuse, we are using libgfapi
15:17 ramon_dl Arrfab: If you use one port to reach one node's gluster NFS server and another port for intra-gluster comms, I think YES!
15:17 Fen1 _dist: how can you use libgfapi ? What is the command ? and what is the difference with fuse ?
15:18 ramon_dl Arrfab: I know, it is difficult to take in at first...
15:18 neofob joined #gluster
15:19 ramon_dl Arrfab: libgfapi is a client without the fuse overhead. Replication, and sending data twice, is the client's responsibility.
15:19 XpineX joined #gluster
15:20 ramon_dl _dist: My last two answers were for you! Sorry!
15:20 ramon_dl Arrfab: Sorry too!
15:22 eryc joined #gluster
15:22 ramon_dl And I apologize to everybody for my poor English!
15:22 ndevos ramon_dl++ nicely explained!
15:22 glusterbot ndevos: ramon_dl's karma is now 1
15:23 ramon_dl ndevos: thank you!
15:24 Arrfab yeah, thank you ramon_dl : really appreciated
15:25 R0ok_|mkononi Interesting read on NSR & AFR: http://blog.gluster.org/2014/04/new-style-replication/
15:25 glusterbot Title: New Style Replication | Gluster Community Website (at blog.gluster.org)
15:26 Fen1 ramon_dl : is there a command to use libgfapi ?
15:27 eryc joined #gluster
15:27 eryc joined #gluster
15:27 semiosis Fen1: applications have to be programmed to use libgfapi, and compiled with the library
15:27 Fen1 semiosis: thx  :)
15:27 semiosis yw
15:28 ramon_dl Fen1: As its name says, libgfapi is a library that can be used by any app or service that is built to use it
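
As a concrete example of software that already links against libgfapi, qemu (when built with gluster support) can open images over gluster:// URLs with no FUSE mount in the path; the volume and image names below are hypothetical:

    # create a disk image directly on the volume
    qemu-img create -f qcow2 gluster://server1/myvol/vm1.qcow2 20G

    # and boot a guest straight from it
    qemu-system-x86_64 -m 2048 -drive file=gluster://server1/myvol/vm1.qcow2,if=virtio
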
15:28 bala joined #gluster
15:31 ramon_dl Arrfab: you're welcome!
15:33 ramon_dl semiosis: better answer, thank you!
15:33 eryc joined #gluster
15:40 charta joined #gluster
15:44 rwheeler joined #gluster
15:45 Arrfab ramon_dl: just reconfigured a test volume from distributed+replicated to just distributed and I have now 116 MB/s (with dd using bs=128k)
15:45 glusterbot New news from newglusterbugs: [Bug 1151397] DHT: Rebalance process crash after add-brick and `rebalance start' operation <https://bugzilla.redhat.com/show_bug.cgi?id=1151397> || [Bug 1151308] data loss when rebalance + renames are in progress and bricks from replica pairs goes down and comes back <https://bugzilla.redhat.com/show_bug.cgi?id=1151308>
15:46 ramon_dl Arrfab: that's max speed! congratulations!
15:48 Arrfab ramon_dl: yeah but no replication ... and I also now have to be sure that libgfapi is aligned on 128k blocks, otherwise perf decreases with smaller blocks
15:49 ramon_dl Arrfab: maybe you can jump to 10GbE...
15:50 Arrfab ramon_dl: not on those machines ... but anyway, I now need to be sure that qemu-img/libgfapi will use larger block size too
15:54 ramon_dl Arrfab: I can't help on qemu issues, sorry. Maybe you can try to use nfs from qemu? NFS is very convenient for small-file/small-block access because of the aggressive caching it does, but I don't know if qemu forces syncs. If that is the case nfs is useless...
15:55 Arrfab ramon_dl: yeah, doing now some dd tests and the smaller the bs, the smaller the speed
15:58 jobewan joined #gluster
16:04 calisto joined #gluster
16:08 hagarth joined #gluster
16:15 meghanam joined #gluster
16:15 meghanam_ joined #gluster
16:16 haomai___ joined #gluster
16:23 msmith_ joined #gluster
16:24 davidhadas_ joined #gluster
16:26 eryc joined #gluster
16:26 eryc joined #gluster
16:49 eryc joined #gluster
16:56 eryc joined #gluster
17:01 sputnik13 joined #gluster
17:01 theron joined #gluster
17:02 sputnik13 joined #gluster
17:08 quique joined #gluster
17:08 _dist Arrfab: I find on any storage/fs the smaller the bs (to a point) the lower the iops and throughput
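
A quick way to see that effect on a gluster mount is to sweep dd over a few block sizes; the mount point below is hypothetical, and conv=fsync is used so the local page cache doesn't flatter the numbers:

    for bs in 4k 64k 128k 1M; do
        dd if=/dev/zero of=/mnt/myvol/ddtest bs=$bs count=2048 conv=fsync 2>&1 | tail -1
    done
    rm -f /mnt/myvol/ddtest
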
17:11 R0ok_|mkononi joined #gluster
17:14 davidhadas joined #gluster
17:15 quique joined #gluster
17:17 tdasilva joined #gluster
17:17 PeterA joined #gluster
17:23 davidhadas_ joined #gluster
17:27 bennyturns joined #gluster
17:27 kumar joined #gluster
17:29 mattblaha joined #gluster
17:31 julim joined #gluster
17:32 marcoceppi_ joined #gluster
17:34 mattblaha hey all, I'm troubleshooting some intermittent problems with georep on 3.4.5, CentOS 6. I notice the system only has a version of rsync 3.0.6, can anyone tell me if I need to upgrade, and if so, tell me where you've gotten packages from that work well?
17:38 zerick joined #gluster
17:40 JustinClift mattblaha: Pretty sure 3.0.6 on CentOS is fine (even though there's a doc somewhere saying 3.0.7 min).
17:41 JustinClift mattblaha: As a thought, there aren't many ppl on the IRC channel at the moment.  Might be better to ask on the gluster-users mailing list to get a knowledgeable answer from someone more clueful (I'm not that technical). ;)
17:41 dtrainor joined #gluster
17:42 mattblaha thanks JustinClift, I sort of figured it was; so far as I can tell Red Hat uses the same package for Storage Server
17:42 mattblaha and I may just go there, we're seeing lots of unsynced files, new files sync fine, updated files regularly require trashing and rebuilding the indexes
17:48 MacWinner joined #gluster
17:49 longshot902 joined #gluster
17:49 charta joined #gluster
17:54 ira joined #gluster
17:55 msmith_ joined #gluster
17:57 msmith_ joined #gluster
18:00 JustinClift mattblaha: RH Storage Server _is_ a more hardened version of Gluster (without some of the more bleeding edge code too ;>).  So, if you've got the $$$ for it, by all means.
18:00 JustinClift mattblaha: Helps pay my salary too. :D
18:07 mattblaha JustinClift, we have considered going to it a few times, with the exception of georep, the open source version has been really solid, for the most part georep is really solid
18:16 B21956 joined #gluster
18:20 JustinClift mattblaha: Cool. :)
18:20 JustinClift mattblaha: Hmmm, you guys want to be a GlusterFS Case Study?
18:20 JustinClift We're looking for places to do some promo with. :
18:22 cfeller joined #gluster
18:23 mattblaha JustinClift, quite possibly, we've done that with some other open source projects recently that have worked out well
18:25 cfeller joined #gluster
18:28 daxatlas joined #gluster
18:31 theron joined #gluster
18:31 hawksfan joined #gluster
18:33 failshell joined #gluster
18:33 kkeithley RHEL 6.6 still has rsync-3.0.6.  RHS 3.0 (just released a couple of weeks ago) is built on top of RHEL 6.5 and has rsync-3.0.6.  Offhand I'd say rsync-3.0.6 should be fine.
18:33 hawksfan i removed a brick from a distributed volume because i thought it was toast
18:34 hawksfan turns out, we were able to recover the majority of the brick
18:34 hawksfan how do i re-add the brick?
18:34 ira joined #gluster
18:35 hawksfan when i use add-brick, i get this error:
18:35 hawksfan volume add-brick: failed: Staging failed on csricdca05. Error: /data/brick-dcarchive is already part of a volume
18:43 JoeJulian @path or prefix
18:43 glusterbot JoeJulian: http://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/
18:44 JoeJulian hawksfan: ^
18:44 JoeJulian (Go hawks!)
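
For the record, the usual fix from that article, sketched against the brick path in the error above (run it on the server that reported the error, and only if you're sure this brick should be reused):

    setfattr -x trusted.glusterfs.volume-id /data/brick-dcarchive
    setfattr -x trusted.gfid /data/brick-dcarchive
    rm -rf /data/brick-dcarchive/.glusterfs
    # restart glusterd on that server, then retry the add-brick
    service glusterd restart
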
18:53 quique i have 4 dual-homed servers server1-4, server1-3 are on the same lan and on a vpn, server4 is on the vpn in a different location.  when I probe server4 from server1 it gets the info for the other servers with their lan addresses, which is useless because it can't connect; it needs to use the vpn addresses. what's the best way to do that?
18:53 JoeJulian @lucky split horizon dns
18:53 glusterbot JoeJulian: http://en.wikipedia.org/wiki/Split-horizon_DNS
18:54 anands joined #gluster
18:54 JoeJulian and, of course, that means you should also be using ,,(hostnames)
18:54 glusterbot Hostnames can be used instead of IPs for server (peer) addresses. To update an existing peer's address from IP to hostname, just probe it by name from any other peer. When creating a new pool, probe all other servers by name from the first, then probe the first by name from just one of the others.
18:54 andreask joined #gluster
18:55 quique JoeJulian: i am using hostnames
18:55 JoeJulian excellent. Should be fairly easy then.
18:56 quique if i probe it i get this: peer probe: success. Host server1-vpn5 port 24007 already in peer list
18:57 quique and the hostname is not changed
18:57 quique it continues to say: State: Peer in Cluster (Disconnected)
18:59 B21956 joined #gluster
18:59 JoeJulian that's probably not what you want anyway unless you want 1-4 to use the vpn addresses too. Using split-horizon dns, you would have "server1". On your lan, that would resolve to (e.g.) "10.0.0.1". On your remote server it might be "10.1.0.1" where 10.1 is the vpn address. Same hostname, two different ip resolutions.
19:00 JoeJulian That's split-horizon dns, and it can be done simply by using /etc/hosts files.
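
A minimal /etc/hosts sketch of that idea (addresses made up; 10.0.x is the LAN, 10.1.x the VPN): every machine resolves the same peer names, but to whichever network it can actually reach.

    # on server1-3 (LAN side): local peers via the LAN, server4 via the VPN
    10.0.0.1  server1
    10.0.0.2  server2
    10.0.0.3  server3
    10.1.0.4  server4

    # on server4 (remote): everything via the VPN
    10.1.0.1  server1
    10.1.0.2  server2
    10.1.0.3  server3
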
19:01 JoeJulian quique: Taking this a step back, what I expect you're trying to do, based on your description, is replica 4 with one remote server. That's (likely) going to be awful.
19:03 quique JoeJulian: yes that is what i am trying to do, why is that going to be awful?
19:03 JoeJulian Each write will have to go to all 4 servers and won't be complete until it's been acknowledged by all 4. Further, any open or stat is going to check to make sure all the replicas are in sync before answering. That means waiting for the slowest connection.
19:03 quique hmm, ok what would be a better choice?
19:04 JoeJulian For remote replicas it's usually best to use geo-replication
19:04 theron joined #gluster
19:04 JoeJulian but that's unidirectional, if you can make that work for your needs.
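
For reference, setting up a geo-replication session looks roughly like this on 3.5 and later (volume and slave names are hypothetical; older releases such as 3.4 use a slightly different syntax):

    # run on a master node: create the session, push the ssh keys, start syncing
    gluster volume geo-replication mastervol slavehost::slavevol create push-pem
    gluster volume geo-replication mastervol slavehost::slavevol start
    gluster volume geo-replication mastervol slavehost::slavevol status
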
19:04 SpeeR joined #gluster
19:05 quique so things changed on the remote replica can't be updated on the others?
19:06 quique it's read-only on the remote replica?
19:06 JoeJulian @davemc What's the best way to communicate with you? Email is, apparently, not it. (cc @johnmark)
19:06 JoeJulian quique: It's not read-only, but it's not going to sync changes made on the remote back to the master. Only master->remote
19:06 semiosis JoeJulian: whats with the twitter handles?  you sound like an IRC n00b
19:07 JoeJulian hehe
19:07 JoeJulian I've been doing twitter and hipchat a lot this morning.
19:07 semiosis ahh
19:09 bfoster joined #gluster
19:14 SpeeR a few days ago, I started receiving this error : [2014-10-18 17:20:51.526425] I [afr-self-heald.c:1687:afr_dir_exclusive_crawl] 0-glusterHA-replicate-1: Another crawl is in progress for glusterHA-client-3
19:14 LebedevRI joined #gluster
19:14 SpeeR I've tried self healing, and when that didn't work, I removed this file that was always in the volume heal info output <gfid:5274bb3f-aae3-4a79-9387-fb2a3a7838c3>
19:14 tdasilva joined #gluster
19:14 SpeeR from one of the bricks
19:15 SpeeR anything else I can try to get it healing properly
19:15 quique JoeJulian: there is no gluster api for managing volumes and peers, right?
19:15 JoeJulian Just the CLI, right.
19:17 JoeJulian of course, the cli could be considered an api, and is used as such for some applications like oVirt for instance.
19:22 kkeithley We've talked about a RESTful API for the CLI. We might get it in 3.7.  The gluster->glusterd transport is RPC and uses the xlator stack, and as such is programmable; even if it's not very easy.
19:26 _dist joined #gluster
19:26 PeterA1 joined #gluster
19:47 theron joined #gluster
19:48 davemc JoeJulian, mea culpa.
19:48 davemc will find and reply to email. The new side of me is trying to get info on how to get you in
19:49 theron joined #gluster
19:49 JoeJulian davemc: no worries. Some people work better using specific communication avenues. Just trying to figure you out. :D
19:51 B21956 joined #gluster
19:52 davemc email is best, JoeJulian but tends to get buried in the onslaught of stuff I get these days
19:52 JoeJulian Figured as much
19:52 doekia joined #gluster
19:52 davemc might be worth noting my gmail, davemc
19:59 _dist joined #gluster
20:06 ctria joined #gluster
20:12 rshott joined #gluster
20:19 n-st joined #gluster
20:27 nshaikh joined #gluster
20:51 Pupeno joined #gluster
21:04 allgood joined #gluster
21:05 allgood hi folks
21:05 allgood i have a two node setup and one file on it is out of sync between the nodes
21:06 allgood i am ok with choosing either of the two, but I do not know how.
21:06 allgood is it sufficient to remove the file directly from one node?
21:12 nshaikh joined #gluster
21:13 tryggvil joined #gluster
21:15 JustinClift allgood: There aren't many people on IRC channel atm.  Might be better to ask on gluster-users mailing list.
21:16 allgood JustinClift: i am trying to circumvent the problem
21:16 JustinClift np
21:16 allgood copying the file from one node to the cluster with another name
21:16 * JustinClift isn't technically strong enough to assist :(
21:16 allgood no problem JustinClift
21:16 allgood thank you
21:17 JustinClift :)
21:18 allgood i was able to use the same name
21:18 allgood the file was giving read error
21:18 allgood but i could remove it
21:22 failshel_ joined #gluster
21:29 semiosis allgood: that's probably a split brain.  your client log will say for sure.  see ,,(split brain) for a solution
21:29 glusterbot allgood: I do not know about 'split brain', but I do know about these similar topics: 'split-brain'
21:30 semiosis ehhh
21:30 semiosis @alias split-brain as split brain
21:30 glusterbot semiosis: (alias [<channel>] <oldkey> <newkey> [<number>]) -- Adds a new key <newkey> for factoid associated with <oldkey>. <number> is only necessary if there's more than one factoid associated with <oldkey>. The same action can be accomplished by using the 'learn' function with a new key but an existing (verbatim) factoid content.
21:30 semiosis aghahghhg
21:30 semiosis allgood: ,,(split-brain)
21:30 glusterbot allgood: (#1) To heal split-brain, use splitmount. http://joejulian.name/blog/glusterfs-split-brain-recovery-made-easy/, or (#2) For additional information, see this older article http://joejulian.name/blog/fixing-split-brain-with-glusterfs-33/
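
In case those links ever go stale, the manual approach from the second article boils down to something like this (volume name, brick path, and file path are hypothetical; keep the good copy and discard the bad one):

    # list the files gluster considers split-brained
    gluster volume heal myvol info split-brain

    # on the brick whose copy you want to discard:
    getfattr -d -m . -e hex /export/brick1/path/to/file   # note the trusted.gfid value
    rm -f /export/brick1/path/to/file
    # also remove the matching gfid hard link under the brick's .glusterfs
    # directory (.glusterfs/<gfid[0:2]>/<gfid[2:4]>/<full gfid>), then trigger a heal:
    gluster volume heal myvol
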
21:30 semiosis @alias split-brain split brain
21:30 glusterbot semiosis: (alias [<channel>] <oldkey> <newkey> [<number>]) -- Adds a new key <newkey> for factoid associated with <oldkey>. <number> is only necessary if there's more than one factoid associated with <oldkey>. The same action can be accomplished by using the 'learn' function with a new key but an existing (verbatim) factoid content.
21:30 semiosis oh come on
21:30 semiosis @alias split-brain 'split brain'
21:30 glusterbot semiosis: (alias [<channel>] <oldkey> <newkey> [<number>]) -- Adds a new key <newkey> for factoid associated with <oldkey>. <number> is only necessary if there's more than one factoid associated with <oldkey>. The same action can be accomplished by using the 'learn' function with a new key but an existing (verbatim) factoid content.
21:31 allgood there was only one file that was out of sync
21:31 allgood already fixed
21:31 allgood thank you guys
21:31 semiosis allgood: one file can be split-brained
21:31 semiosis glad you sorted it out
21:33 allgood i don't know how it got split-brained, i am pretty sure only one node was changing it
21:37 allgood left #gluster
21:40 badone joined #gluster
22:05 calum_ joined #gluster
22:13 theron joined #gluster
22:28 badone joined #gluster
22:32 failshell joined #gluster
22:41 badone joined #gluster
22:54 JoeJulian @alias split-brain "split brain"
22:54 glusterbot JoeJulian: Error: This key has more than one factoid associated with it, but you have not provided a number.
22:54 JoeJulian @alias split-brain 1 "split brain"
22:54 glusterbot JoeJulian: (alias [<channel>] <oldkey> <newkey> [<number>]) -- Adds a new key <newkey> for factoid associated with <oldkey>. <number> is only necessary if there's more than one factoid associated with <oldkey>. The same action can be accomplished by using the 'learn' function with a new key but an existing (verbatim) factoid content.
22:54 JoeJulian @alias split-brain "split brain" 1
22:54 JoeJulian @alias split-brain "split brain" 2
22:54 JoeJulian @split brain
22:54 glusterbot JoeJulian: The operation succeeded.
22:55 glusterbot JoeJulian: An error has occurred and has been logged. Check the logs for more informations.
22:55 glusterbot JoeJulian: To heal split-brain, use splitmount. http://joejulian.name/blog/glusterfs-split-brain-recovery-made-easy/
22:55 JoeJulian @alias split-brain "split brain" 2
22:55 glusterbot JoeJulian: An error has occurred and has been logged. Check the logs for more informations.
22:55 * JoeJulian gives up too
23:06 David_H_Smith joined #gluster
23:06 anands joined #gluster
23:17 tryggvil joined #gluster
23:45 cjanbanan joined #gluster
