IRC log for #gluster, 2014-09-17


All times shown according to UTC.

Time Nick Message
00:00 LHinson joined #gluster
00:09 cliluw joined #gluster
00:12 cliluw What is Gluster licensed under?
00:24 bala joined #gluster
00:24 JoeJulian cliluw: https://forge.gluster.org/glusterfs-core/glusterfs/trees/master
00:24 glusterbot Title: Tree for glusterfs in GlusterFS Core - Gluster Community Forge (at forge.gluster.org)
00:24 JoeJulian GPLV2 or LGPLV3
00:25 cliluw JoeJulian: Oh, that's strange. On the website, it says it's licensed under the AGPL.
00:25 JoeJulian The web site is.
00:25 JoeJulian Oh
00:25 JoeJulian what?
00:25 cliluw http://www.gluster.org/documentation/community/GNU_Affero_General_Public_License/
00:25 JoeJulian I'll email Eco and see what the heck he's got going on there.
00:25 glusterbot Title: Gluster (at www.gluster.org)
00:26 JoeJulian It was a long time ago.
00:26 cliluw It says there that "GLUSTER, Inc. provides both the Client Software and Server Software to Client under version 3 of the GNU Affero General Public License."
00:27 JoeJulian email sent...
00:27 cliluw JoeJulian: Thanks. I really want to use GlusterFS but if it's licensed under the AGPL, it's a no-go.
00:28 JoeJulian No, hasn't been AGPL since 3.0, iirc.
00:28 JoeJulian Red Hat's allergic to AGPL also I believe.
01:21 Slashman joined #gluster
01:31 itisravi joined #gluster
01:51 ilbot3 joined #gluster
01:51 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:53 diegows joined #gluster
01:53 Slashman joined #gluster
01:53 ttk joined #gluster
01:53 klaas joined #gluster
01:53 ghenry joined #gluster
01:53 R0ok_ joined #gluster
01:53 side_control joined #gluster
01:54 Freman joined #gluster
01:54 ackjewt joined #gluster
01:54 frankS2 joined #gluster
01:54 gothos joined #gluster
01:54 lezo__ joined #gluster
01:54 dastar joined #gluster
01:54 samsaffron___ joined #gluster
01:54 Alex joined #gluster
01:54 morse joined #gluster
01:54 siXy joined #gluster
01:54 verdurin joined #gluster
01:54 Rydekull joined #gluster
01:54 pdrakewe_ joined #gluster
01:54 ws2k3 joined #gluster
01:54 ron-slc joined #gluster
01:54 skippy joined #gluster
01:54 and` joined #gluster
01:54 schrodinger joined #gluster
01:54 msp3k joined #gluster
01:54 Debolaz joined #gluster
01:54 siel joined #gluster
01:54 twx joined #gluster
01:54 jezier joined #gluster
01:54 masterzen joined #gluster
01:54 Andreas-IPO joined #gluster
01:57 guntha_ joined #gluster
01:59 johnmwilliams__ joined #gluster
02:09 harish joined #gluster
02:19 bharata-rao joined #gluster
02:19 gildub joined #gluster
02:19 nishanth joined #gluster
02:48 vu joined #gluster
03:08 kdhananjay joined #gluster
03:19 haomaiwang joined #gluster
03:32 nbalachandran joined #gluster
03:33 kdhananjay joined #gluster
03:34 haomaiw__ joined #gluster
03:41 hagarth joined #gluster
03:41 haomaiwa_ joined #gluster
03:43 nbalachandran joined #gluster
03:56 ndarshan joined #gluster
03:57 haomai___ joined #gluster
03:58 itisravi joined #gluster
04:01 kshlm joined #gluster
04:01 kanagaraj joined #gluster
04:03 spandit joined #gluster
04:04 shubhendu joined #gluster
04:13 kdhananjay joined #gluster
04:21 atinmu joined #gluster
04:21 nbalachandran joined #gluster
04:34 overclk joined #gluster
04:40 rafi1 joined #gluster
04:40 Rafi_kc joined #gluster
04:41 jiffin joined #gluster
04:41 anoopcs joined #gluster
04:49 ramteid joined #gluster
04:55 rejy joined #gluster
05:02 azar joined #gluster
05:02 azar Hi everyone, I need some help about glusterfs source code. I can not understand the variable "first_free" in "_fdtable structure". what is the use of it?
05:08 karnan joined #gluster
05:10 dtrainor joined #gluster
05:11 bala joined #gluster
05:12 KORG joined #gluster
05:13 overclk_ joined #gluster
05:15 atalur joined #gluster
05:16 andreask joined #gluster
05:19 JayJ joined #gluster
05:19 RameshN joined #gluster
05:21 prasanth_ joined #gluster
05:24 Humble joined #gluster
05:25 lalatenduM joined #gluster
05:32 haomaiwa_ joined #gluster
05:33 haomaiw__ joined #gluster
05:35 kdhananjay joined #gluster
05:36 soumya joined #gluster
05:36 aravindavk joined #gluster
05:38 ppai joined #gluster
05:50 Philambdo joined #gluster
05:51 meghanam joined #gluster
05:51 meghanam_ joined #gluster
06:00 dusmant joined #gluster
06:08 RaSTar joined #gluster
06:11 overclk joined #gluster
06:16 ekuric joined #gluster
06:16 shubhendu joined #gluster
06:22 dusmant joined #gluster
06:27 pkoro joined #gluster
06:28 nshaikh joined #gluster
06:30 LebedevRI joined #gluster
06:32 dusmant joined #gluster
06:32 ricky-ti1 joined #gluster
06:39 raghu joined #gluster
06:49 coredumb edong23_: ok so i did a few more tests...
06:49 coredumb for an HTTP service i get the failover instantly
06:49 coredumb not losing one ping
06:49 glusterbot coredumb: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
06:50 coredumb losing 1 or 2 on fallback when master of VIP reboot
06:50 coredumb s
06:51 coredumb but the nfs client share is unavailable right when failover happens and returns input/output error
06:51 coredumb no other solution than unmounting/remounting it
07:06 shubhendu joined #gluster
07:09 coredumb edong23_: oh yeah i get the share back alive after 3.30/4mn
07:12 dusmant joined #gluster
07:15 coredumb only when fallingback to master -_-
07:17 coredumb 6mn20 on failover actually
07:19 MickaTri joined #gluster
07:19 coredumb dunno which client side options could help :/
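    [The client-side knobs available here are the standard Linux NFS mount options rather than anything Gluster-specific; a minimal sketch, assuming NFSv3 over TCP against a floating VIP (hostname, volume and values are illustrative, and as discussed further down a TCP mount still has to re-establish its connection, so these cannot make failover instant):
        mount -t nfs -o vers=3,proto=tcp,timeo=30,retrans=2 vip.example.com:/myvol /mnt/myvol
        # timeo (tenths of a second) and retrans control how long the client waits
        # before reporting "server not responding" and retrying the request
    ]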
07:22 atinmu joined #gluster
07:22 aravindavk joined #gluster
07:26 spandit joined #gluster
07:29 spandit joined #gluster
07:32 anoopcs joined #gluster
07:40 MickaTri joined #gluster
07:42 MickaTri left #gluster
07:42 MickaTri2 joined #gluster
07:43 sputnik13 joined #gluster
07:44 soumya joined #gluster
07:45 lalatenduM joined #gluster
07:46 FenTri joined #gluster
07:50 social joined #gluster
07:58 soumya joined #gluster
08:02 liquidat joined #gluster
08:07 soumya joined #gluster
08:26 aravindavk joined #gluster
08:28 glu joined #gluster
08:29 glu Morning all, can anyone help me with mandatory locking in GlusterFS?
08:29 saurabh joined #gluster
08:29 Fen1 joined #gluster
08:29 Fen1 left #gluster
08:30 glu Anyone :(
08:31 Fen1 joined #gluster
08:32 rafi1 joined #gluster
08:32 harish joined #gluster
08:33 Fen1 Hi !
08:33 pkoro joined #gluster
08:39 rgustafs joined #gluster
08:42 pkoro_ joined #gluster
08:47 aravindavk joined #gluster
08:55 DV__ joined #gluster
08:57 hagarth joined #gluster
09:09 kumar joined #gluster
09:18 glusterbot New news from resolvedglusterbugs: [Bug 1142705] [RFE]Option to specify a keyfile needed for Geo-replication create push-pem command. <https://bugzilla.redhat.com/show_bug.cgi?id=1142705>
09:18 dusmant joined #gluster
09:19 vimal joined #gluster
09:19 pkoro_ joined #gluster
09:19 ndarshan joined #gluster
09:21 saurabh joined #gluster
09:22 nishanth joined #gluster
09:29 spandit joined #gluster
09:31 azar left #gluster
09:45 ProT-0-TypE joined #gluster
09:46 ndarshan joined #gluster
09:46 nishanth joined #gluster
09:48 nbalachandran joined #gluster
09:50 Fen1 Does Glusterfs support IPv6 ?
09:53 dusmant joined #gluster
10:06 ricky-ticky1 joined #gluster
10:06 kshlm joined #gluster
10:07 atinm joined #gluster
10:14 sac`away joined #gluster
10:14 pkoro joined #gluster
10:40 spandit joined #gluster
10:44 dusmant joined #gluster
10:57 giannello joined #gluster
11:11 ricky-ti1 joined #gluster
11:13 spandit joined #gluster
11:13 pkoro_ joined #gluster
11:16 kshlm joined #gluster
11:17 bfoster joined #gluster
11:19 Pupeno joined #gluster
11:19 hagarth joined #gluster
11:19 pkoro__ joined #gluster
11:21 julim joined #gluster
11:25 B21956 joined #gluster
11:26 recidive joined #gluster
11:28 atinm joined #gluster
11:31 ws2k3 what is the current status of glusterfs with freebsd support ?
11:32 edwardm61 joined #gluster
11:37 mojibake joined #gluster
11:42 mojibake joined #gluster
11:56 Fen1 glusterbot
12:01 JustinClift *** Weekly GlusterFS Community Meeting is starting now in #gluster-meeting on irc.freenode.net ***
12:03 edward1 joined #gluster
12:03 hagarth joined #gluster
12:03 Slashman joined #gluster
12:07 bene2 joined #gluster
12:11 spandit joined #gluster
12:15 itisravi_ joined #gluster
12:15 jdarcy joined #gluster
12:19 pkoro__ joined #gluster
12:22 LebedevRI joined #gluster
12:22 atalur joined #gluster
12:23 LHinson joined #gluster
12:23 coredumb edong23_: for the record, testing with nfs-ganesha gives me way better results
12:23 coredumb i lose the share for 4s when failover and 6s on fallback
12:24 coredumb could that be a bug in glusterfs 3.5.2 ?
12:24 chirino joined #gluster
12:25 LHinson1 joined #gluster
12:28 diegows joined #gluster
12:30 kanagaraj joined #gluster
12:32 julim joined #gluster
12:34 gothos coredumb: do you have any performance comparison with the fuse mount? I would be really interested in that
12:35 coredumb gothos: for now i'd like to make the failover transparent to the client
12:36 coredumb which is definitely not the case
12:36 coredumb :/
12:37 gothos coredumb: yeah, that's also something we would need here,which is why we went with fuse
12:37 coredumb gothos: also i have weird behaviour when listing directories where randomly it gives me i/o errors
12:37 coredumb gothos: i'll use the native client for everything EL6 and up
12:38 coredumb i'm driving myself mad with this redundant NFS for crappy old EL4/AIX
12:38 coredumb :(
12:39 _Bryan_ joined #gluster
12:40 siXy glusterfs uses TCP NFS, which is sadly pretty fundamentally incompatible with fast failover.
12:40 gothos most of our stuff is EL6/7 by now luckily, but we have a problem with the fuse client and small files: lots of very long stalls
12:40 siXy repeated requests to allow UDP NFS have been ignored.
12:41 coredumb siXy: isn't there any client setting one could set to speedup the switch ?
12:42 siXy not really, it's a kernel thing.
12:42 coredumb yeah that's what i thought
12:55 LHinson joined #gluster
13:00 bennyturns joined #gluster
13:03 asku left #gluster
13:04 dusmant joined #gluster
13:10 toti joined #gluster
13:13 ninkotech_ joined #gluster
13:21 hagarth joined #gluster
13:23 tdasilva joined #gluster
13:24 bene2 joined #gluster
13:24 ninkotech__ joined #gluster
13:30 kodapa joined #gluster
13:38 ninkotech__ joined #gluster
13:48 plarsen joined #gluster
13:50 coredumb for some reasons i had one node of my replica which lost its gluster partitions and wrote on /
13:50 coredumb i remove what was written, remounted the partitions
13:51 coredumb now gluster volume heal <vol> info gives me a _very_ long list of file ids from the node that didn't have issues
13:52 coredumb and 0 from this node
13:52 siXy good luck :(
13:52 coredumb is that normal ?
13:52 coredumb or ?
13:53 siXy for me gluster-heal eventually just dies and times out - but we're storing more files on glusterfs than it was really designed to hold
13:54 coredumb it's not a big deal as it's still in validation
13:54 coredumb but i wonder if that's how it's supposed to work :P
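    [For checking on heal state from the CLI, the self-heal commands in this era look roughly like the following sketch (volume name is illustrative):
        gluster volume heal myvol info               # entries still pending heal, per brick
        gluster volume heal myvol info split-brain   # entries that need manual resolution
        gluster volume heal myvol statistics         # counters from recent heal crawls
        gluster volume heal myvol full               # trigger a full heal crawl if needed
    ]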
13:54 justyns joined #gluster
13:55 justyns joined #gluster
13:56 justyns joined #gluster
13:56 ninkotech_ joined #gluster
13:56 giannello coredumb, next time configure the brick in a _subfolder_ inside the mountpoint
13:56 giannello so if the mount gets lost, the folder will not exist and you'll not fuck up your / partition
13:57 coredumb giannello: well that's the case
13:57 nshaikh joined #gluster
13:57 coredumb ...
13:58 coredumb how can i ensure that it's clean ?
13:58 coredumb should i issue a rebalance ?
13:58 plarsen joined #gluster
13:58 coredumb or just wait for the CPU to calm down to mean healing is finished?
13:58 lalatenduM joined #gluster
13:59 giannello can't really help you with that - I've had a similar problem loooong ago, can't remember how I fixed it
13:59 coredumb i feel a bit blind ...
13:59 giannello but yes, after that I moved every brick to a subfolder of the mountpoint :D
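    [The layout giannello is describing, with the brick one directory below the mountpoint, looks roughly like this (device, paths and hostnames are hypothetical):
        mkfs.xfs /dev/vg0/brick1
        mkdir -p /export/brick1
        mount /dev/vg0/brick1 /export/brick1
        mkdir /export/brick1/brick        # the brick directory lives inside the mount
        gluster volume create myvol replica 2 \
            server1:/export/brick1/brick server2:/export/brick1/brick
        # if /export/brick1 is ever not mounted, /export/brick1/brick does not exist,
        # so the brick process cannot silently write onto the root filesystem
    ]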
13:59 ninkotech_ joined #gluster
14:00 giannello oh, wait
14:01 giannello AFAIR -> created a temporary folder, created the FS, replaced the "broken" brick with the temporary partition, wipe the broken brick, replace again, heal
14:01 giannello given enough replicas, that should work
14:01 coredumb oh i just wiped and remount :D
14:02 coredumb two nodes replica :O
14:03 giannello the problem in this case is that the brick process is alive and working during the filesystem switch
14:03 giannello that's why I used a temporary one
14:03 shubhendu joined #gluster
14:03 giannello to make sure that _nothing_ was accessing my brick's mountpoint
14:03 coredumb yeah i saw it killing the processes when i remounted the fs
14:04 wushudoin| joined #gluster
14:04 giannello well, killing a process that way DURING a heal maybe it's not a good idea...
14:05 coredumb There are no active volume tasks
14:05 coredumb WTF is it doing then if there's no tasks ??
14:05 coredumb :P
14:05 giannello you never know
14:05 giannello clustering things is not an easy task
14:05 ninkotech_ joined #gluster
14:06 JoeJulian "coredumb> for some reasons i had one node of my replica which lost its gluster partitions and wrote on /" What?!?! With 3.4+ that should be impossible. Without the volume_id xattr glusterfsd should fail to serve that directory.
14:06 giannello (yeah, that's also true - I forgot about it)
14:07 coredumb JoeJulian: well i noticed my volume shared was 2GB instead of the supposed 450GB
14:07 coredumb got on both nodes and noticed fs were not mounted anymore and i had mountpoints dirs filled with .glusterfs/xxxx
14:08 coredumb on 3.5.0
14:08 JoeJulian They had to have been not mounted when you created the volume. That's the only possibility.
14:08 elico joined #gluster
14:09 JoeJulian Well, not the only possibility. The other possibility is that you created the xattrs.
14:09 coredumb JoeJulian: nope they were
14:09 coredumb nope i didn't
14:09 rwheeler joined #gluster
14:09 coredumb though i actually did some volume start force
14:09 JoeJulian Then it's magic and you'll need to hire a wizard.
14:09 coredumb and i bet they were already not mounted
14:12 coredumb anyway i'll take more care next time :O
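    [The volume_id safeguard JoeJulian mentions can be inspected directly on a brick directory; for example (brick path is hypothetical):
        getfattr -d -m . -e hex /export/brick1/brick
        # a brick that belongs to a volume carries trusted.glusterfs.volume-id;
        # without that xattr, glusterfsd should refuse to serve the directory
    ]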
14:12 coredumb how can i monitor what's happening and hammering my CPUs ?
14:13 coredumb glusterfsd taking 1 full core
14:16 xleo joined #gluster
14:18 failshell joined #gluster
14:20 Maya_ joined #gluster
14:32 JoeJulian coredumb: Check logs. If you really want to know and there's not enough information in the logs, you can increase the log level. If that's still not enough information, strace, gdb, and valgrind are the tools you can use to get that deep into it.
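    [Raising the log level per volume is usually the first step before strace/gdb/valgrind; a sketch (volume name is illustrative, and DEBUG is very verbose, so set it back to INFO afterwards):
        gluster volume set myvol diagnostics.brick-log-level DEBUG
        gluster volume set myvol diagnostics.client-log-level DEBUG
        # logs are written under /var/log/glusterfs/ on the bricks and clients
    ]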
14:32 elico JoeJulian: what's the deal with the FW issues..
14:32 xleo joined #gluster
14:33 coredumb JoeJulian: well i found heal statitics
14:33 coredumb which gives a bunch of informations
14:33 coredumb i just find weird that status tasks doesn't return anything when there's a full heal running
14:34 JoeJulian elico: I'm not entirely sure what his actual problem is. On the one hand he complains that there's "hanging" but then his example is just gluster commands.
14:34 JoeJulian elico: Which, of course there's delays with gluster commands. One of his management daemons is gone and requests to that daemon will have to expire.
14:34 JoeJulian elico: but that shouldn't have any effect on filesystem access.
14:35 kdhananjay joined #gluster
14:36 bala joined #gluster
14:36 elico a sec
14:41 glusterbot New news from newglusterbugs: [Bug 1130888] Renaming file while rebalance is in progress causes data loss <https://bugzilla.redhat.com/show_bug.cgi?id=1130888>
14:43 kshlm joined #gluster
14:47 samsaffron___ joined #gluster
14:47 fyxim__ joined #gluster
14:47 johnmwilliams__ joined #gluster
14:47 jobewan joined #gluster
14:48 lezo__ joined #gluster
14:48 frankS2 joined #gluster
14:53 elico back
14:54 elico JoeJulian: this what I was suspecting..
14:54 elico I think he is mounting it over some law latency link..
14:54 xleo joined #gluster
14:59 elico s/law/low
15:02 hagarth joined #gluster
15:03 daMaestro joined #gluster
15:07 LHinson joined #gluster
15:14 LHinson1 joined #gluster
15:16 sprachgenerator joined #gluster
15:16 gomikemike hi
15:16 glusterbot gomikemike: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
15:16 gomikemike im trying to configure geo-replication but keep getting "SSL support is NOT enabled"
15:17 gomikemike is there a good how-to out there to follow?
15:18 gomikemike im running gluster 3.5
15:25 JoeJulian https://github.com/gluster/glusterfs/blob/master/doc/admin-guide/en-US/markdown/admin_geo-replication.md
15:25 glusterbot Title: glusterfs/admin_geo-replication.md at master · gluster/glusterfs · GitHub (at github.com)
15:34 JoeJulian gomikemike: Also, that INFO message is benign. It just means that you're not using SSL for your volume. That has nothing to do with geo-rep.
15:41 xleo_ joined #gluster
15:41 gomikemike JoeJulian: but i dont see anything at all on the other host
15:41 tururum joined #gluster
15:41 xleo_ joined #gluster
15:42 gomikemike i got tcpdump running and i dont see anything getting to it...so my geo-replication start command is failing
15:43 tururum Hi, all. I'm currently investigating gluster, and have concern about performance with replicated mode. Do I understand correctly, that when using AFR translator, client is the one, who makes actual write operations on different bricks?
15:46 kkeithley tururum: that's correct
15:47 B219561 joined #gluster
15:47 tururum kkeithley, thanks . And does it mean, that client first writes to one server, waits for ack, then writes to second server, waits for ack, and only then returns ack to client?
15:48 tururum or, client does writes in parallel?
15:51 kkeithley pretty much. the more replicas the slower your throughput, in general.
15:52 kkeithley IIRC correctly, they're parallelized, but bandwidth is bandwidth, you're going to consume double, or triple, in the case of replica 3
15:53 B21956 joined #gluster
15:57 dtrainor joined #gluster
15:58 tururum thanks, at least it's much better than I expected :)
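    [As a rough worked example of the bandwidth point (assuming a client writing over a single 1 Gb/s link): with replica 2 every application byte is sent twice, so peak write throughput is bounded by about 125 MB/s / 2 ≈ 62 MB/s before protocol overhead, and by about 125 MB/s / 3 ≈ 42 MB/s for replica 3.]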
15:58 Pupeno joined #gluster
15:59 mariusp joined #gluster
16:08 dtrainor joined #gluster
16:09 dtrainor joined #gluster
16:12 PeterA joined #gluster
16:12 ekuric joined #gluster
16:23 zerick joined #gluster
16:29 xoritor hmm..... so basically with gluster, if my logic is right and my mind is not 100% gone, the more "nodes" the better
16:29 xoritor just in general
16:30 xoritor ie... better to have 8 nodes with 2 bricks than 4 nodes with 4 bricks
16:30 xoritor or maybe 16 nodes each with 1 brick
16:34 xoritor anyone ever used glusterfs on atom processors?
16:34 xoritor something like this....   http://www.supermicro.com/products/MicroBlade/module/MBI-6418A-T7H.cfm
16:34 glusterbot Title: Supermicro | Products | MicroBlade | MBI-6418A-T7H (at www.supermicro.com)
16:36 LHinson joined #gluster
16:36 xoritor if those can handle glusterfs that would be AWESOME
16:36 xoritor truely amazing
16:39 xoritor with 8  10Gig-E links and 28 blades and each blade having 4 nodes and 4 drives .... all at low(er) power use
16:40 semiosis better is highly subjective
16:40 xoritor yep
16:40 semiosis for me better to have less servers, because that would be wasteful
16:40 semiosis i mean, having more would be wasteful
16:41 xoritor by better i mean in general more fault tolerant, less prone to data loss, easier to deal with updates/upgrades, and higher throughput
16:42 semiosis higher throughput certainly
16:42 semiosis the rest depends on other things
16:42 mariusp joined #gluster
16:42 xoritor but i have HIGH cpu use on my quad core xeon systems and not sure if those 8 core atoms could do it
16:43 xoritor but dang... 112 nodes would make for some high throughput and could possibly spread load
16:44 semiosis clients would be maintaining 112 tcp connections :)
16:44 xoritor 28 blades at 4 nodes per blade is 112 blades
16:44 xoritor err... s/blades$/nodes/
16:44 semiosis your samba proxies that is
16:45 xoritor not necessarily ... maybe have the higher power systems outside of this do samba in ctdb... (systems i already have)
16:46 xoritor then they only have to do 2-3 (depending) and have the others do glusterfs over 10G
16:47 xoritor ie... spread load
16:47 xoritor if they can maintain it though.... i could have some of them say 10 of them do samba and then some of them do the rest
16:48 xoritor with that many nodes there is a lots of leeway
16:48 xoritor the question really is can that processor keep up with glusterfs
16:49 xoritor or is glusterfs too demanding for the little guy
16:49 xoritor heck that cpu even had vt-x and vt-d
16:49 xoritor http://ark.intel.com/products/77987
16:49 glusterbot Title: ARK | Intel Atom™ Processor C2750 (4M Cache, 2.40 GHz) (at ark.intel.com)
16:51 xoritor http://cpuboss.com/cpus/Intel-Xeon-E3-1220V2-vs-Intel-Atom-C2750
16:56 xoritor basically i am going over all of this as i am fitting together an infrastructure
16:56 xoritor trying to build one i want and price it all out
16:57 xoritor my current setup is not going to last and my testbed shows me i need to add more more more more more nodes
16:57 xoritor i need more nodes and more ram
16:57 xoritor ;-)
16:58 xoritor obviously i am looking to keep costs as low as possible, but mostly i want to keep power use and heat generation as low as possible as all of this is in an office with ME
16:59 xoritor i have been looking into the dell vrtx setups ... anyone using those?
17:00 eshy joined #gluster
17:00 gomikemike cant get the geo-rep going => gluster volume geo-replication fnrw-vol awslxglstutwq02.qa.aws:/fnrw-vol create push-pem
17:00 gomikemike [glusterd-geo-rep.c:4083:glusterd_get_slave_info] 0-: Invalid slave name
17:00 LHinson joined #gluster
17:00 toti joined #gluster
17:00 edward1 joined #gluster
17:00 mojibake joined #gluster
17:00 Humble joined #gluster
17:00 foster joined #gluster
17:00 bjornar joined #gluster
17:00 kkeithley joined #gluster
17:00 semiosis joined #gluster
17:00 sauce joined #gluster
17:01 gomikemike but the name slave name IS resolvable
17:02 Pupeno joined #gluster
17:02 bene2 joined #gluster
17:02 DJClean joined #gluster
17:02 Intensity joined #gluster
17:02 pradeepto joined #gluster
17:03 pradeepto joined #gluster
17:03 Intensity joined #gluster
17:03 DJClean joined #gluster
17:03 bene2 joined #gluster
17:03 Pupeno joined #gluster
17:03 sman joined #gluster
17:03 samppah joined #gluster
17:03 fim joined #gluster
17:03 Slasheri joined #gluster
17:03 tomased joined #gluster
17:03 lanning joined #gluster
17:03 drajen joined #gluster
17:03 partner joined #gluster
17:03 purpleidea joined #gluster
17:03 neoice joined #gluster
17:03 xavih joined #gluster
17:03 eightyeight joined #gluster
17:03 Moe-sama joined #gluster
17:03 Chr1s1an_ joined #gluster
17:03 codex joined #gluster
17:03 doekia joined #gluster
17:03 mjrosenb joined #gluster
17:03 T0aD joined #gluster
17:03 Guest44047 joined #gluster
17:03 gomikemike joined #gluster
17:03 Nowaker joined #gluster
17:03 atrius` joined #gluster
17:03 prasanth|brb joined #gluster
17:03 lava joined #gluster
17:03 tobias- joined #gluster
17:03 the-me joined #gluster
17:03 radez_g0n3 joined #gluster
17:03 churnd joined #gluster
17:03 cicero joined #gluster
17:03 gehaxelt joined #gluster
17:03 l0uis joined #gluster
17:03 ccha2 joined #gluster
17:03 eclectic joined #gluster
17:03 SteveCooling joined #gluster
17:03 mibby joined #gluster
17:03 marcoceppi joined #gluster
17:03 abyss^^_ joined #gluster
17:03 weykent joined #gluster
17:03 Ramereth joined #gluster
17:03 cultavix joined #gluster
17:03 JamesG joined #gluster
17:03 lkoranda joined #gluster
17:03 edong23_ joined #gluster
17:03 m0zes joined #gluster
17:03 swc|666 joined #gluster
17:03 Gugge joined #gluster
17:03 charta joined #gluster
17:03 portante joined #gluster
17:03 primeministerp joined #gluster
17:03 Lee- joined #gluster
17:03 oxidane_ joined #gluster
17:03 VerboEse joined #gluster
17:03 sickness joined #gluster
17:03 d4nku_ joined #gluster
17:03 apscomp joined #gluster
17:03 SmithyUK joined #gluster
17:03 Diddi joined #gluster
17:03 Dave2 joined #gluster
17:03 johnmark joined #gluster
17:03 JoeJulian joined #gluster
17:03 RobertLaptop joined #gluster
17:03 stickyboy joined #gluster
17:03 huleboer joined #gluster
17:03 toordog_wrk joined #gluster
17:03 nated joined #gluster
17:03 fubada joined #gluster
17:03 necrogami joined #gluster
17:03 RioS2 joined #gluster
17:03 hflai joined #gluster
17:03 tg2 joined #gluster
17:03 sage joined #gluster
17:03 d-fence joined #gluster
17:03 JonathanD joined #gluster
17:03 SpComb joined #gluster
17:03 hybrid5121 joined #gluster
17:03 crashmag joined #gluster
17:03 al joined #gluster
17:03 kalzz joined #gluster
17:03 Norky joined #gluster
17:03 tty00 joined #gluster
17:03 rturk|afk joined #gluster
17:03 JordanHackworth_ joined #gluster
17:03 johnnytran joined #gluster
17:03 ultrabizweb joined #gluster
17:03 sadbox joined #gluster
17:03 kke joined #gluster
17:03 cyberbootje joined #gluster
17:03 yosafbridge joined #gluster
17:03 ninjabox1 joined #gluster
17:03 osiekhan1 joined #gluster
17:03 vincent_vdk joined #gluster
17:03 Zordrak joined #gluster
17:03 txbowhun1er joined #gluster
17:03 eryc joined #gluster
17:03 VeggieMeat joined #gluster
17:03 Kins joined #gluster
17:03 msvbhat joined #gluster
17:03 stigchristian joined #gluster
17:03 torbjorn__ joined #gluster
17:03 saltsa_ joined #gluster
17:03 nixpanic_ joined #gluster
17:03 ndevos joined #gluster
17:03 toordog joined #gluster
17:03 coredumb joined #gluster
17:03 fsimonce joined #gluster
17:03 Bardack joined #gluster
17:03 mikedep333 joined #gluster
17:03 jvandewege joined #gluster
17:03 JustinClift joined #gluster
17:03 misuzu joined #gluster
17:03 cfeller joined #gluster
17:03 Nuxr0 joined #gluster
17:03 khanku joined #gluster
17:03 sijis joined #gluster
17:03 glusterbot joined #gluster
17:03 [o__o] joined #gluster
17:03 dblack joined #gluster
17:03 _NiC joined #gluster
17:03 jbrooks joined #gluster
17:03 nage joined #gluster
17:03 ThatGraemeGuy joined #gluster
17:03 georgeh joined #gluster
17:03 capri joined #gluster
17:03 mkzero_ joined #gluster
17:03 Peanut joined #gluster
17:03 uebera|| joined #gluster
17:03 clutchk joined #gluster
17:03 delhage joined #gluster
17:03 XpineX__ joined #gluster
17:03 tru_tru joined #gluster
17:03 wgao joined #gluster
17:03 atrius joined #gluster
17:03 xoritor joined #gluster
17:03 AaronGr joined #gluster
17:03 hchiramm__ joined #gluster
17:03 coredump joined #gluster
17:03 juhaj_ joined #gluster
17:03 ttk joined #gluster
17:03 klaas joined #gluster
17:03 ghenry joined #gluster
17:03 R0ok_ joined #gluster
17:03 side_control joined #gluster
17:03 Freman joined #gluster
17:03 ackjewt joined #gluster
17:03 gothos joined #gluster
17:03 dastar joined #gluster
17:03 Alex joined #gluster
17:03 morse joined #gluster
17:03 siXy joined #gluster
17:03 verdurin joined #gluster
17:03 Rydekull joined #gluster
17:03 pdrakewe_ joined #gluster
17:03 ws2k3 joined #gluster
17:03 ron-slc joined #gluster
17:03 skippy joined #gluster
17:03 and` joined #gluster
17:03 schrodinger joined #gluster
17:03 msp3k joined #gluster
17:03 Debolaz joined #gluster
17:03 siel joined #gluster
17:03 twx joined #gluster
17:03 jezier joined #gluster
17:03 masterzen joined #gluster
17:03 Andreas-IPO joined #gluster
17:03 guntha_ joined #gluster
17:03 andreask joined #gluster
17:03 haomaiw__ joined #gluster
17:03 Philambdo joined #gluster
17:03 social joined #gluster
17:03 harish joined #gluster
17:03 DV__ joined #gluster
17:03 vimal|brb joined #gluster
17:03 sac`away joined #gluster
17:03 bfoster joined #gluster
17:03 recidive joined #gluster
17:03 Slashman joined #gluster
17:03 LebedevRI joined #gluster
17:03 bennyturns joined #gluster
17:03 tdasilva joined #gluster
17:03 kodapa joined #gluster
17:03 plarsen joined #gluster
17:03 lalatenduM joined #gluster
17:03 wushudoin| joined #gluster
17:03 ninkotech_ joined #gluster
17:03 kdhananjay joined #gluster
17:03 kshlm joined #gluster
17:03 samsaffron___ joined #gluster
17:03 fyxim__ joined #gluster
17:03 johnmwilliams__ joined #gluster
17:03 jobewan joined #gluster
17:03 lezo__ joined #gluster
17:03 frankS2 joined #gluster
17:03 daMaestro joined #gluster
17:03 sprachgenerator joined #gluster
17:03 B21956 joined #gluster
17:03 dtrainor joined #gluster
17:03 ekuric joined #gluster
17:03 zerick joined #gluster
17:03 eshy joined #gluster
17:03 LHinson joined #gluster
17:03 toti joined #gluster
17:03 edward1 joined #gluster
17:03 mojibake joined #gluster
17:03 Humble joined #gluster
17:03 foster joined #gluster
17:03 bjornar joined #gluster
17:03 kkeithley joined #gluster
17:03 semiosis joined #gluster
17:03 sauce joined #gluster
17:03 xoritor gomikemike, you can ping it?
17:03 xoritor gomikemike, gluster peer info
17:03 xoritor gomikemike, i am not familiar with geo replication
17:03 mariusp joined #gluster
17:03 xoritor gomikemike, you still here?
17:04 xoritor gomikemike, get lost in the split?
17:08 gomikemike yes
17:09 gomikemike they are in diff locations
17:09 gomikemike not part of the cluster
17:10 gomikemike i have 1 (2 node) cluster on the (AWS)East and i need it to replicate to my other (2 node) cluster on (AWS)West
17:10 gomikemike brb
17:10 gomikemike food run
17:10 xoritor k
17:11 xoritor im looking at a few options obviously... any advice or ideas here would be welcome.
17:16 dgandhi joined #gluster
17:16 ramon_dl joined #gluster
17:23 m0rph joined #gluster
17:23 m0rph hello
17:23 glusterbot m0rph: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
17:25 m0rph Ok, so I got GlusterFS 3.4 on CentOS with a distributed replica setup. I've noticed that some peers are disconnected from each other. Some are connected one way but not the other. When I did "gluster peer probe <othermachine>" from one of the machines missing peers, it probes with success but doesn't reconnect. When I did "gluster peer status" there are still a few that are Disconnected.
17:25 m0rph I'm wondering if it's safe to detach those peers and re-probe
17:26 m0rph or if I should do something else.
17:26 gomikemike back
17:27 m0rph (Oh and if it makes a difference, I'm on Google Cloud Compute)
17:27 xoritor m0rph, make sure your host names resolve every direction if using hostnames
17:27 msmith__ joined #gluster
17:28 xoritor m0rph, also check time and make sure you are close in time (ntp setup helps)
17:28 JoeJulian Also check iptables
17:28 xoritor m0rph, make sure iptables is not in the way
17:28 xoritor exactly
17:28 xoritor or selinux
17:28 JoeJulian consult the star charts, make sure the planets are in alignment...
17:28 m0rph They shouldn't be. Nothing has really changed. Google had a minor hiccup last night where some machines rebooted, but that was it.
17:29 JoeJulian Did their ip addresses change?
17:29 m0rph I think so, for some
17:29 m0rph when I probe I used hostnames though
17:29 m0rph I think
17:29 JoeJulian That goes back to making sure the hostnames resolve correctly everywhere.
17:29 m0rph Does it matter which direction you probe from?
17:29 JoeJulian No
17:30 m0rph If I probe node2 from node1, did node2 grab "node1" or node1's IP?
17:30 JoeJulian ... but.... when you're first creating your pool, you have to probe the server you started with by name from some other server or it won't know its name.
17:31 JoeJulian Which kind-of answers the question you just asked... :D
17:31 JoeJulian ip
17:31 m0rph Aha!
17:31 JoeJulian It's in the docs, but it gets missed all the time.
17:31 m0rph And, so... if I re-probe the same hostname, is it smart enough to realise the IP has changed?
17:31 m0rph I mean, a hostname again.
17:31 m0rph whose IP changed
17:31 m0rph I don't have to detach, right?
17:32 JoeJulian yes because the uuid will match
17:32 m0rph ok
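    [A minimal sketch of the probe-back step JoeJulian describes, with hypothetical hostnames node1 and node2:
        # on node1
        gluster peer probe node2
        # on node2: probe back by name, so node1 is recorded by hostname rather than IP
        gluster peer probe node1
        # verify from either side
        gluster peer status
    ]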
17:32 xoritor its best just to have aliens probe your brain when you wake up
17:32 xoritor then you are good to go
17:32 xoritor hey JoeJulian
17:33 m0rph Ok, I think I got it now. Thanks guys :)
17:33 m0rph I got one or two more quick dumb questions
17:33 m0rph How safe is it to resize a mountpoint that a brick is served from?
17:34 xoritor you have any thoughts on whether or not something like this could handle glusterfs .... http://www.supermicro.com/products/MicroBlade/module/MBI-6418A-T7H.cfm
17:34 glusterbot Title: Supermicro | Products | MicroBlade | MBI-6418A-T7H (at www.supermicro.com)
17:34 JoeJulian m0rph: We do it all the time.
17:34 xoritor m0rph, yea do it lots
17:34 m0rph What happens with a replica count of 2 with brick1 and brick2, and I resize brick1's mountpoint up?
17:34 JoeJulian xoritor: I'll give that even annoying answer: depends on your use case.
17:34 JoeJulian s/even/ever/
17:34 glusterbot What JoeJulian meant to say was: xoritor: I'll give that ever annoying answer: depends on your use case.
17:34 m0rph Does it know what to do with the new space? Is it important I do both simultaneously?
17:34 xoritor lol
17:35 xoritor what i mean is do you think glusterfs will be "too much" for those atom cpus to handle
17:35 JoeJulian m0rph: The bricks just get bigger. Nothing changes in the way gluster manages file placements or anything.
17:36 xoritor m0rph, you SHOULD resize brick2 also... but nothing happens
17:36 JoeJulian xoritor: I started out running it on 32bit xeons.
17:36 sputnik13 joined #gluster
17:36 kkeithley xoritor: there are people running GlusterFS on raspberry pi. It works.
17:36 JoeJulian xoritor: There are people running gluster on Raspberry Pi.
17:36 xoritor man... thinking of getting a new setup and going with some of those blades
17:36 JoeJulian jinx
17:37 kkeithley snap
17:37 xoritor lol
17:37 xoritor thats a lot of nodes able to be in a small space
17:37 semiosis that's uncanny
17:37 xoritor it is uncanny
17:39 dtrainor joined #gluster
17:43 gomikemike so, my 2 clusters dont need to be awayre of each other as peers for geo-replication to work, right?
17:45 m0rph xoritor: Ok, great! I will just resize both mountpoints around the same time. Glad to hear it doesn't freak out hardcore.
17:46 failshell joined #gluster
17:46 m0rph Another dumb question: Does detaching a peer imply the removal of its bricks?
17:46 m0rph Not that I plan to anytime soon, but I'm curious
17:47 xoritor gomikemike, no idea... you have to get the brains to answer that one
17:47 m0rph I know the remove-brick and replace-brick procedure but I wonder what happens if a peer gets removed. Does the brick disappear from the system?
17:47 JoeJulian gomikemike: no
17:47 m0rph sorry, if a peer gets detached, not removed
17:48 JoeJulian m0rph: You cannot detach a peer if it has bricks in use.
17:48 xoritor m0rph, i dont think so... not sure though... if i am going to remove it i remove the bricks first then the node
17:48 xoritor more to the point as JoeJulian said... you cant
17:49 gomikemike so, im trying to replicate a volume from cluster1 to cluster2, i have opened ssh and done the ssh-configuration.
17:51 m0rph Ok, good to know. Thanks.
17:52 hagarth joined #gluster
17:52 pradeepto joined #gluster
17:52 gomikemike and i know the hosts can resolve the gluster hosts
17:53 JoeJulian ~pasteinfo | gomikemike
17:53 glusterbot gomikemike: Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
17:54 mariusp joined #gluster
17:58 _Bryan_ joined #gluster
18:00 gomikemike k
18:01 jmarley joined #gluster
18:02 gomikemike https://gist.github.com/Tokynet/55e5235da2859a7432be
18:02 glusterbot Title: gluster geo (at gist.github.com)
18:04 JoeJulian Odd... there's nothing in there about geo-replication.
18:07 gomikemike exactly
18:07 gomikemike the command fails immediatly
18:07 gomikemike [glusterd-geo-rep.c:4083:glusterd_get_slave_info] 0-: Invalid slave name
18:07 gomikemike but the nodes can resolve all the names
18:08 JoeJulian What's the command you're trying?
18:09 * JoeJulian remembers seeing that in a previous life...
18:14 JoeJulian gomikemike: https://botbot.me/freenode/gluster/2014-06-27/?msg=17051840&page=3
18:14 glusterbot Title: Logs for #gluster | BotBot.me [o__o] (at botbot.me)
18:14 gomikemike gluster volume geo-replication fnrw-vol awslxglstutwq02.qa.aws:/mnt/bricks/lv-FNRW/ start
18:16 JoeJulian Ah, yes. There's your problem. It should be "gluster volume geo-replication fnrw-vol awslxglstutwq02.qa.aws:/fnrw-ct-vol start"
18:16 gomikemike oh wait, wrong buffer
18:16 gomikemike i know i saw about the trailing /
18:16 JoeJulian You're replicating from the source volume to the destination volume. As with every other case, you should never access a brick without going through GlusterFS.
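    [A hedged sketch of the shape of those commands, following the admin guide linked earlier, where the slave side is addressed as a volume (slavehost::slavevol) rather than a brick path; the slave volume name here is assumed to match the master's:
        gluster volume geo-replication fnrw-vol awslxglstutwq02.qa.aws::fnrw-vol create push-pem
        gluster volume geo-replication fnrw-vol awslxglstutwq02.qa.aws::fnrw-vol start
        gluster volume geo-replication fnrw-vol awslxglstutwq02.qa.aws::fnrw-vol status
    ]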
18:17 gomikemike JoeJulian: looking up latest command that im trying
18:21 m0rph Random question.. When doing a rebalance, the "skipped" count... Why would it skip any files?
18:27 gomikemike JoeJulian: updated gist https://gist.github.com/Tokynet/55e5235da2859a7432be
18:27 glusterbot Title: gluster geo (at gist.github.com)
18:31 recidive left #gluster
18:36 tom[] joined #gluster
18:38 JoeJulian m0rph: Because the destination for which the hash value of the filename was supposed to be mapped to was more full than the brick that it currently resides on. You can override that behavior with "force".
18:42 glusterbot New news from newglusterbugs: [Bug 1077406] Striped volume does not work with VMware esxi v4.1, 5.1 or 5.5 <https://bugzilla.redhat.com/show_bug.cgi?id=1077406>
18:42 m0rph Ah, ok. Sounds reasonable.
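    [The override JoeJulian refers to is the force flag on the rebalance command itself (volume name is illustrative):
        gluster volume rebalance myvol start force   # migrate files even onto fuller bricks
        gluster volume rebalance myvol status        # shows rebalanced, skipped and failed counts
    ]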
18:42 gomikemike JoeJulian: got a chance to look at the new gist?
18:45 jmarley joined #gluster
18:47 LHinson joined #gluster
18:47 JoeJulian gomikemike: That's a weird error: "Unable to store slave volume name"
18:49 JoeJulian though I suppose it's just a result of the invalid slave name to begin with.
18:50 gomikemike so, the slave is just the "target" to where the sync will happen, right?
18:51 gomikemike there is no special configuration to set it as "slave"?
18:57 chirino joined #gluster
18:57 JoeJulian right
19:05 gomikemike trying to find a way post a drawing i created for this setup, should help understand my issue
19:06 gomikemike so far i can only get to gist, and cant put an image there
19:06 theron joined #gluster
19:12 semiosis imgur?
19:14 gomikemike wow
19:14 gomikemike that is not blocked
19:14 gomikemike semiosis: +10
19:15 gomikemike crap, image background is clear and background is black...just like fonts
19:15 gomikemike http://imgur.com/Wz0TYIr
19:17 gomikemike i put the commands that im using to start the geo-replication on the image
19:17 gomikemike JoeJulian: http://imgur.com/Wz0TYIr
19:17 glusterbot Title: imgur: the simple image sharer (at imgur.com)
19:20 gomikemike i really need to finish this today
19:20 gomikemike so all and any help is much appreciated
19:22 mariusp joined #gluster
19:27 JoeJulian wait a minute... You're trying to geo-replicate the volume to itself.
19:28 semiosis two-way geo-replication?
19:29 gomikemike no, there is a volume on the west called the same, im trying to replicate the contenc from fnrw-vol (on east) to fnrw-vol on West
19:29 gomikemike well, afaik there is no 2 way geo-rep
19:29 semiosis ok right
19:29 gomikemike so im trying to replicate from gluster1-e to gluster1-w
19:30 gomikemike East => West
19:30 gomikemike then im trying to do gluster2-w to gluster2-e
19:30 gomikemike West => East
19:31 JoeJulian Ok, but your volume shows your bricks named "awslxglstutwq01.qa.aws" and "awslxglstutwq02.qa.aws", ergo that's also the names of your servers. Your geo-rep slave listed awslxglstutwq02.qa.aws which is also one of your bricks.
19:32 JoeJulian I suspect you're going to need fqdn brick names if you want to address east and west from either domain.
19:37 gomikemike so, i thought when i was creating the volume i needed to provide the server:volume
19:37 gomikemike is there a diff way to name the bricks?
19:38 JoeJulian From any one server, you'll want to be able to ping any other server by hostname.
19:38 gomikemike yes, i can do that now (well sortof) ping is disabled in AWS
19:39 gomikemike i can telnet to the brick ports via hostname
19:39 gomikemike so, what would be an example of fqdn brick name?
19:41 gomikemike awslxglstutwq02.qa.aws:/fnrw-vol
19:41 JoeJulian imho... awslxglstutwq01.qa.east.aws, awslxglstutwq01.qa.west.aws
19:41 jmarley joined #gluster
19:42 gomikemike our zone is aws.
19:42 glusterbot New news from newglusterbugs: [Bug 1143039] Memory leak in posix xattrop <https://bugzilla.redhat.com/show_bug.cgi?id=1143039>
19:42 gomikemike should i have created the bricks with just the hostname?
19:43 JoeJulian Let's say I'm on awslxglstutwq01.qa.aws in west. If I ping awslxglstutwq01.qa.aws I'm going to be pinging myself. How can I ping awslxglstutwq01.qa.aws in east?
19:43 JoeJulian Even if using shortnames: awslxglstutwq01.west and awslxglstutwq01.east maybe?
19:43 gomikemike i know the names suck but if you notice, there is W on the hostname
19:43 gomikemike or an E for east
19:44 JoeJulian Ah, I did miss that.
19:44 gomikemike awslxglstutwq01 and awslxglstuteq01
19:44 semiosis traditionally such a thing would go in the domain part, rather than the host part, of the name
19:44 JoeJulian I thought the volume info and the geo-rep command were from the same server.
19:44 semiosis as JoeJulian pointed out already
19:44 semiosis ...
19:44 * semiosis gbtw
19:44 JoeJulian hehe
19:45 gomikemike no, it looks that way cause im doing my best to keep them identical
19:45 gomikemike gbtw?
19:45 * semiosis goes back to work
19:47 gomikemike ahhh
19:47 JoeJulian Crap. I have no idea and I need to get this salt config finished by 2:00 (MST). Depending on what your definition of "Today" is, I can probably help later.
19:53 vu joined #gluster
19:55 gomikemike salt stack?
19:55 gomikemike nice
19:55 JoeJulian yep
19:55 gomikemike we are puppet shop :)
20:01 failshel_ joined #gluster
20:03 andreask joined #gluster
20:13 jdarcy joined #gluster
20:15 gomikemike JoeJulian: i've updated the gist with the command used to create the replicated volume
20:15 gomikemike JoeJulian: https://gist.github.com/Tokynet/55e5235da2859a7432be
20:15 glusterbot Title: gluster geo (at gist.github.com)
20:31 jmarley joined #gluster
20:37 side_control joined #gluster
20:38 AaronGr joined #gluster
20:51 gomikemike still hoping...
20:51 Maya_ joined #gluster
20:52 ScottR joined #gluster
20:53 Maya_ Can anyone confirm that “replica” refers to how many copies exist of brick in a volume, and not how many copies of files exist in a volume?
20:55 ScottR Question: We use a program called Autodesk Revit. It allows live worksharing (multiple people in the same file at the same time) on a standard file server (reads/writes/locking). I'm curious if anyone has used Gluster for this? Is Gluster a good substitute for Globalscape WAFS where other servers in different domains can link up without VPNs?
20:58 skippy Maya_: "replica" is how many copies of a brick.  Whatever files are on that brick are replicated X number of times.
20:58 jmarley joined #gluster
21:02 Maya_ @skippy: Thanks for clarifying. I completely misunderstood the documentation and attempted a "replica 2" on a 3-brick volume...
21:03 pkoro__ joined #gluster
21:04 semiosis Maya_: copies of bricks is the same thing as copies of files, because a brick is just a bunch of files
21:08 Maya_ semiosis: Right, except that while it may make sense to have a copy of 2 files on a 3-brick volume, it isn't possible since the bricks as a whole are being replicated, not individual files.
21:09 semiosis ok
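    [A sketch of how the brick list relates to the replica count (hostnames and paths are hypothetical): the number of bricks must be a multiple of the replica count, which is why replica 2 over three bricks is rejected.
        # four bricks with replica 2 -> a distributed-replicated volume of two pairs
        gluster volume create myvol replica 2 \
            server1:/export/a/brick server2:/export/a/brick \
            server1:/export/b/brick server2:/export/b/brick
    ]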
21:12 semiosis ScottR: never heard of WAFS before but it looks like a very different thing from glusterfs
21:13 semiosis ScottR: glusterfs does support locking & multiple writers
21:14 semiosis ScottR: but currently doesnt support multi-master geo replication, which seems to be a big selling point for WAFS
21:15 semiosis ScottR: you could try setting up a glusterfs volume & mounting it with NFS clients over a VPN and see if the nfs locking works well enough for revit collaboration
21:15 semiosis iirc gluster nfs supports locking since version 3.3
21:15 semiosis you'd probably want to disable attribute caching in your nfs client
21:15 semiosis as is common practice for multiple writers
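    [A hedged example of that kind of mount (server, volume and mountpoint are illustrative); Gluster's built-in NFS server speaks NFSv3 over TCP, and noac disables client attribute caching:
        mount -t nfs -o vers=3,proto=tcp,noac server.example.com:/revit-vol /mnt/revit
    ]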
21:15 ScottR semiosis: Alright... so "multi-master geo replication" is a new term I havent used in my search for a Open solution to replace Globalscape
21:16 semiosis ScottR: it's a hard problem
21:16 semiosis especially for a general purpose filesystem
21:17 ScottR semiosis: I see. I was also looking at OpenAFS.  Not sure how they compare.  Have you heard of / have thoughts?
21:17 semiosis nope
21:18 semiosis so the way WAFS handles multiple writers across WAN is to give an exclusive lock to one writer & block all the rest
21:20 semiosis according to http://www.globalscape.com/wafs/faq.aspx
21:20 glusterbot Title: FAQ's for WAFS (at www.globalscape.com)
21:21 ScottR semiosis: I thought that was normal for WAFS applications?
21:22 semiosis perhaps.  i've never really looked into it
21:28 jdarcy joined #gluster
21:33 dtrainor joined #gluster
21:33 gomikemike dying
21:34 semiosis gomikemike: wish i could help but i really dont know much about setting up geo-rep
21:35 gomikemike i keep getting ppl standing over my shoulder asking about this...
21:36 gomikemike i wonder if the gluster script that starts/sets geo-rep is borked
21:36 semiosis what gluster version & linux distro is this?
21:37 gomikemike this => (-->/usr/lib64/libglusterfs.so.0(dict_set_str+0x1c) [0x7ff4ba24845c]))) 0-dict: value is NULL
21:37 glusterbot gomikemike: ('s karma is now -29
21:37 gomikemike rhel 6.5glusterfs-server-3.5.0-2.el6
21:42 glusterbot New news from newglusterbugs: [Bug 1073763] network.compression fails simple '--ioengine=sync' fio test <https://bugzilla.redhat.com/show_bug.cgi?id=1073763> || [Bug 1086493] [RFE] - Add a default snapshot name when creating a snap <https://bugzilla.redhat.com/show_bug.cgi?id=1086493> || [Bug 1086497] [RFE] - Upon snaprestore, immediately take a snapshot to provide recovery point <https://bugzilla.redhat.com/show_bug.cgi?i
21:50 glusterbot New news from resolvedglusterbugs: [Bug 1092620] Upgraded from 3.4.3 to 3.5.0, and ownership on mount point changed. <https://bugzilla.redhat.com/show_bug.cgi?id=1092620> || [Bug 1095971] Stopping/Starting a Gluster volume resets ownership <https://bugzilla.redhat.com/show_bug.cgi?id=1095971>
21:51 LHinson joined #gluster
22:02 elico joined #gluster
23:00 PeterA http://pastie.org/9566240
23:00 glusterbot Title: #9566240 - Pastie (at pastie.org)
23:01 PeterA what does the stale NFS means?
