
IRC log for #gluster, 2015-01-08


All times shown according to UTC.

Time Nick Message
00:12 jaank joined #gluster
00:27 bpap joined #gluster
00:41 sputnik13 joined #gluster
00:44 SOLDIERz joined #gluster
00:45 calisto joined #gluster
00:48 bpap This blog post has a link to a HOWTO article, but it 404s
00:48 bpap http://redhatstorage.redhat.com/2011/09/14/glusterfs-howto-nfs-performance-with-fuse-client-redundancy/
00:49 bpap anyone know where to find the full article?
00:49 bpap JoeJulian especially^
00:50 JoeJulian Basically make your client a peer. It will then run the nfs service and you can nfs mount from localhost. There may be kernel memory deadlocks using that method though.
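A minimal sketch of the setup described above, assuming a volume named myvol and a client machine called client1 (both placeholders); the kernel memory-deadlock caveat still applies when mounting from localhost:

    # on any existing server in the trusted pool: add the client as a peer
    gluster peer probe client1
    # glusterd on the client will then run the built-in NFS server for started
    # volumes, so the client can mount its own volume over NFSv3 from localhost
    mount -t nfs -o vers=3,nolock localhost:/myvol /mnt/myvol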
00:53 bpap ok. so i'm trying to see if Gluster can handle millions of ~16KB files efficiently. I see that NFS client is recommended for small files. that's why i asked.
00:54 JoeJulian "efficiently" depends on the use case.
00:54 JoeJulian If you're writing them, it'll be no more efficient.
00:55 bpap i think it's mostly write once, read many. very read-heavy.
00:55 JoeJulian If you're reading enough of them to exhaust memory, no. If there are popular reads, perhaps. But if that's the case, why aren't you caching at the front end instead of waiting 'till you hit storage?
00:58 bpap read-heavy across 100% of the volume, i should say. i don't think there are any hotspots in the data set.
00:59 JoeJulian Well then the only thing that NFS gains for some users is FSCache. That won't help your use case, it seems.
01:00 lkthomas sup all
01:00 JoeJulian Tree your directory structure to keep directory listings to a relatively small size.
01:01 JoeJulian If your app is a java app, use libgfapi through the vfs.
01:01 bpap yes, that's what we're doing in our filer right now.
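As a rough illustration of the directory-treeing advice (not anything bpap or JoeJulian posted), one common pattern is to fan files out under a short hash prefix so no single directory grows into the millions of entries; the mount point and file name below are made up:

    # e.g. avatar 12345.jpg -> /mnt/myvol/ab/cd/12345.jpg
    name="12345.jpg"
    h=$(printf '%s' "$name" | md5sum | cut -c1-4)   # first 4 hex chars of the md5
    mkdir -p "/mnt/myvol/${h:0:2}/${h:2:2}"
    cp "$name" "/mnt/myvol/${h:0:2}/${h:2:2}/$name"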
01:04 lkthomas guys, I am trying to start geo-replication, start didn't fail but status showing it's not running, any idea ? http://pastebin.com/Qrx4E8Gd
01:04 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
01:05 JoeJulian bpap: Out of curiosity, what kind of data is this: configuration? billing? media? ...?
01:05 bpap images. avatars.
01:05 bpap for a contact list
01:05 bpap when a user receives a phone call, we pull up the thumbnail for that contact.
01:05 JoeJulian Ah, cool.
01:06 bpap could be any user at any time, of course.
01:07 lkthomas anyone ?
01:07 JoeJulian I imagine those avatars are cached client-side so an eventually consistent client-side cache *could* be sufficient. That would reduce the load on storage enough that the eventuality could be kept to a reasonable time.
01:08 JoeJulian lkthomas: Did you start the volume?
01:08 lkthomas JoeJulian, start replication volume or the brick volume on both side ?
01:09 JoeJulian gluster volume start testvol1 and on the other end, testvol2.
01:09 lkthomas JoeJulian, yes, both started
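For reference, a geo-replication session like the one being debugged here is typically created and started along these lines on 3.5 (master volume testvol1, slave volume testvol2 on a host called slavehost; the slave hostname is a placeholder):

    # run on the master side once both volumes exist and are started
    gluster system:: execute gsec_create                               # generate the common pem key
    gluster volume geo-replication testvol1 slavehost::testvol2 create push-pem
    gluster volume geo-replication testvol1 slavehost::testvol2 start
    gluster volume geo-replication testvol1 slavehost::testvol2 status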
01:10 JoeJulian lkthomas: ,,(pasteinfo) from both servers, please.
01:10 glusterbot lkthomas: Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
01:11 bpap I'm not terribly well-versed on what the device does. i can tell you we have a single terabyte filer handling the requests. we need to replicate that to a second datacenter for redundancy.
01:11 bpap so i threw out the idea of using Gluster
01:11 bpap proposed* the idea
01:12 lkthomas JoeJulian, http://ur1.ca/jd7co
01:14 JoeJulian bpap: How in the world do you guys make money? :D
01:14 JoeJulian Looks too good to be true.
01:15 lkthomas JoeJulian, hmm?
01:16 bpap I can tell you but then I'd have to get you to sign an NDA and then kill you.
01:16 JoeJulian hehe
01:16 iPancreas haha
01:16 JoeJulian Oh, of course... the CIA.
01:16 JoeJulian They fund all communication technology.
01:16 lkthomas JoeJulian, any comment on my replication problem ?
01:16 JoeJulian lkthomas: that was "status" not "info" :P
01:17 bpap does anyone want to recommend some other distributed file system/database besides Gluster for 1 million+ 16KB files?
01:17 lkthomas JoeJulian, http://ur1.ca/jd7ep
01:17 JoeJulian bpap: swift
01:19 iPancreas swift is object storage
01:20 JoeJulian yep, and it can be configured to replicate to your multiple datacenters. Your dns server could be configured to direct the lookup to the DC closest to them, minimizing latency.
01:21 JoeJulian If you're trying to build a national, or even better an international system with consistent worldwide data points, swift is the way to do it.
01:21 bpap rad
01:21 JoeJulian imho
01:23 lkthomas JoeJulian, sorry, any idea ?
01:24 JoeJulian I'm not seeing anything obvious.
01:25 JoeJulian Check your logs on both servers. See if there's any clues there.
01:29 lkthomas JoeJulian, nothing special
01:29 JoeJulian Does status still say it's not started?
01:30 JoeJulian what version of gluster is that?
01:33 lkthomas still not started. glusterfs 3.5.3 built on Nov 18 2014 03:53:25
01:34 JoeJulian let me spin up a couple VMs and see if I can duplicate your issue.
01:35 lkthomas JoeJulian, please and thanks. Are you a developer in Gluster community ?
01:44 JoeJulian not a developer, just a know-it-all.
01:45 bala joined #gluster
01:45 lkthomas JoeJulian, I see
01:55 lkthomas JoeJulian, sup
01:56 JoeJulian working, please wait...
01:58 lkthomas JoeJulian, okay, thanks :)
02:08 haomaiwa_ joined #gluster
02:11 msciciel joined #gluster
02:16 julim joined #gluster
02:19 JoeJulian lkthomas: Well, it didn't fail the same way for me.
02:19 lkthomas sigh
02:19 lkthomas I recreate volume 3 times already
02:19 lkthomas still same
02:20 JoeJulian [2015-01-08 02:19:50.859739] E [resource(/data/testvol1/brick):207:logerr] Popen: ssh> bash: /nonexistent/gsyncd: No such file or directory
02:20 JoeJulian That's probably the same problem. Looks like a packaging issue.
02:22 harish joined #gluster
02:28 JoeJulian hmm, now that I'm using the pem, I do get "Not Started"
02:31 JoeJulian [2015-01-08 02:28:15.933542] E [resource(monitor):207:logerr] Popen: ssh> bash: /usr/libexec/glusterfs/gsyncd: No such file or directory
02:34 JoeJulian lkthomas: mkdir -p /usr/libexec/glusterfs/ && ln -s /usr/lib/x86_64-linux-gnu/glusterfs/gsyncd /usr/libexec/glusterfs/gsyncd
02:34 JoeJulian on both servers
02:34 JoeJulian hmm, well close anyway.
02:35 lkthomas JoeJulian, how did you get that message ?
02:35 JoeJulian Oh, it worked.
02:35 JoeJulian :D
02:35 JoeJulian /var/log/glusterfs/geo-replication/testvol1#
02:35 JoeJulian s/#//
02:35 glusterbot What JoeJulian meant to say was: An error has occurred and has been logged. Check the logs for more informations.
02:36 JoeJulian shut up glusterbot.
02:36 lkthomas JoeJulian, did you turn on some sort of debug log ?
02:36 JoeJulian I've got to go do dinner. I'll either fix the packaging, or bug semiosis later.
02:37 nangthang joined #gluster
02:37 JoeJulian Nope, that's the standard log location
02:37 JoeJulian well, for a testvol1 volume anyway.
02:38 JoeJulian That's off a rackspace trusty image with 3.5.3 installed from the official ppa.
02:40 lkthomas LOL
02:40 lkthomas same error here, thanks
02:40 lkthomas that command should be done on all servers ?
02:42 theron joined #gluster
02:42 ira joined #gluster
02:45 lkthomas fuck! faulty, LOL
02:45 JoeJulian all servers, yes.
02:45 JoeJulian It's faulty for a minute, then Active.
02:45 lkthomas OH
02:45 lkthomas well, speechless, LOL
02:45 lkthomas JoeJulian, interesting, new folder didn't get sync
02:46 JoeJulian It's not instantaneous.
02:46 JoeJulian It runs on a scheduler.
02:46 lkthomas 2 minutes already
02:46 lkthomas what's the interval ?
02:46 JoeJulian I forget.
02:48 JoeJulian You created the folder through a fuse mount, right? Not directly on the brick.
02:48 lkthomas opppss
02:48 lkthomas I was creating directly on brick, shit
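A minimal sketch of the point being made: changes have to go through a client mount so gluster can see and replicate them, never straight into the brick directory; the mount point is a placeholder:

    # mount the master volume with the fuse client, then create files there
    mount -t glusterfs localhost:/testvol1 /mnt/testvol1
    mkdir /mnt/testvol1/newdir      # this will be picked up and geo-replicated
    # writing into /data/testvol1/brick directly bypasses gluster entirely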
02:50 lkthomas I see
02:50 lkthomas now it's all good
02:51 lkthomas how is gluster server and client performance over WAN ?
02:52 JoeJulian It amplifies latency pretty well. If that suits your use case though...
02:53 lkthomas amplify ?!
02:53 lkthomas is it a good thing or bad  thing ?
02:54 JoeJulian Well, if you have 10ns latency, you'll have 30 (ish, that's not a firm number). If you have 500ms, it'll be at least a second and a half.
02:55 JoeJulian For operations establishing an FD. Once it's open it'll be closer to RTT.
02:55 lkthomas Wow
02:55 lkthomas JoeJulian, actually I am thinking to use btsync instead of Gluster
02:56 JoeJulian If that suits your use case better, then I'm all for it.
02:56 lkthomas thanks
02:56 lkthomas I see gluster is complicating things over here
02:56 JoeJulian Use whatever fits the need, I always say.
02:56 lkthomas :)
02:57 eka joined #gluster
02:59 DougBishop joined #gluster
03:00 bharata-rao joined #gluster
03:01 suman_d joined #gluster
03:01 bharata_ joined #gluster
03:02 eryc joined #gluster
03:02 eryc joined #gluster
03:07 rightonbro joined #gluster
03:07 B21956 left #gluster
03:08 nrcpts joined #gluster
03:25 kanagaraj joined #gluster
03:25 iPancreas joined #gluster
03:25 calisto joined #gluster
03:51 hagarth joined #gluster
03:52 itisravi joined #gluster
04:01 atinmu joined #gluster
04:03 T3 joined #gluster
04:03 MattJ_EC joined #gluster
04:06 ppai joined #gluster
04:07 side_control joined #gluster
04:08 shubhendu joined #gluster
04:16 dusmant joined #gluster
04:18 calisto joined #gluster
04:19 nbalacha joined #gluster
04:22 fandi joined #gluster
04:22 rafi1 joined #gluster
04:24 fandi joined #gluster
04:26 fandi joined #gluster
04:26 iPancreas joined #gluster
04:37 jiffin joined #gluster
04:37 anoopcs joined #gluster
04:41 ndarshan joined #gluster
04:44 jaank joined #gluster
04:50 soumya joined #gluster
04:57 glusterbot News from newglusterbugs: [Bug 1180015] reboot node with some glusterd glusterfsd glusterfs services. <https://bugzilla.redhat.com/show_bug.cgi?id=1180015>
05:00 sakshi joined #gluster
05:02 bala joined #gluster
05:03 gem joined #gluster
05:03 Manikandan joined #gluster
05:04 anrao joined #gluster
05:04 spandit joined #gluster
05:04 smohan joined #gluster
05:09 kshlm joined #gluster
05:18 prasanth_ joined #gluster
05:18 gem joined #gluster
05:20 hagarth joined #gluster
05:21 kumar joined #gluster
05:22 RameshN joined #gluster
05:27 iPancreas joined #gluster
05:27 kdhananjay joined #gluster
05:32 soumya joined #gluster
05:36 Manikandan joined #gluster
05:36 nshaikh joined #gluster
05:45 anil joined #gluster
05:50 ramteid joined #gluster
05:51 atalur joined #gluster
05:52 kdhananjay joined #gluster
05:58 atalur joined #gluster
06:03 lalatenduM joined #gluster
06:09 raghu joined #gluster
06:12 ppai joined #gluster
06:27 iPancreas joined #gluster
06:32 spandit joined #gluster
06:32 hagarth joined #gluster
06:40 smohan joined #gluster
06:46 kdhananjay joined #gluster
06:47 saurabh joined #gluster
06:49 meghanam joined #gluster
06:50 nbalacha joined #gluster
06:51 gem joined #gluster
06:55 harish joined #gluster
06:57 SOLDIERz joined #gluster
06:57 basso_ joined #gluster
07:03 ndarshan joined #gluster
07:05 rgustafs joined #gluster
07:07 basso joined #gluster
07:12 ctria joined #gluster
07:15 atrius joined #gluster
07:20 jtux joined #gluster
07:27 side_control joined #gluster
07:28 iPancreas joined #gluster
07:31 ghenry joined #gluster
07:32 nbalacha joined #gluster
07:35 ppai joined #gluster
07:41 hagarth joined #gluster
07:45 ndarshan joined #gluster
07:46 rgustafs joined #gluster
07:51 nshaikh joined #gluster
07:54 deniszh joined #gluster
08:03 atrius joined #gluster
08:05 suman_d joined #gluster
08:16 hagarth joined #gluster
08:21 pdrakewe_ joined #gluster
08:22 SOLDIERz joined #gluster
08:24 Fen2 joined #gluster
08:28 iPancreas joined #gluster
08:32 atalur joined #gluster
08:32 hybrid512 joined #gluster
08:35 anoopcs joined #gluster
08:37 ppai joined #gluster
08:43 gothos morning :) I was just looking through my .glusterfs directory looking for the cause of a warning and an `ls' in at least one of the .glusterfs/XX/YY gives me: ls: cannot access /data/brick0/brick/.glusterfs/2d/e2/2de2052d-a5e8-4410-b322-fde88d94bb67: Too many levels of symbolic links
08:43 gothos can that savely me ignored or might that me a problem for gluster?
08:43 gothos s/me/be/
08:43 glusterbot What gothos meant to say was: An error has occurred and has been logged. Please contact this bot's administrator for more information.
08:45 [Enrico] joined #gluster
08:46 fsimonce joined #gluster
08:57 overclk joined #gluster
08:59 jvandewege joined #gluster
09:02 Slashman joined #gluster
09:21 ndarshan joined #gluster
09:21 aravindavk joined #gluster
09:27 Pupeno joined #gluster
09:27 LebedevRI joined #gluster
09:28 glusterbot News from newglusterbugs: [Bug 1180060] Missing repository metadata for Fedora/i686 systems <https://bugzilla.redhat.com/show_bug.cgi?id=1180060>
09:28 glusterbot News from newglusterbugs: [Bug 1180056] Implement AUTH_SHORT to improve credential/group caching on the bricks <https://bugzilla.redhat.com/show_bug.cgi?id=1180056>
09:29 iPancreas joined #gluster
09:36 T0aD joined #gluster
09:43 soumya joined #gluster
09:44 meghanam joined #gluster
09:58 ManD joined #gluster
09:58 glusterbot News from newglusterbugs: [Bug 1180070] [AFR] getfattr on fuse mount gives error : Software caused connection abort <https://bugzilla.redhat.com/show_bug.cgi?id=1180070>
09:58 glusterbot News from newglusterbugs: [Bug 1180073] [AFR] getfattr on fuse mount gives error : Software caused connection abort <https://bugzilla.redhat.com/show_bug.cgi?id=1180073>
09:59 ManD Hello
09:59 glusterbot ManD: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
09:59 badone joined #gluster
09:59 ManD Can anyone please help me with gluster configuration?
10:00 ManD I've launched the server from a working AMI already configured
10:00 ManD On AWS
10:01 ManD Can't start the gluster daemon
10:01 pdrakeweb joined #gluster
10:02 ManD gluster --debug has this output
10:02 ManD :glusterd_friend_find_by_hostname] 0-management: Unable to find friend:
10:02 ManD Its referring to an old host, one used by the server used to create the AMI
10:03 ManD I don't know where I have to make the config changes.
10:20 purpleidea joined #gluster
10:20 nueces joined #gluster
10:23 shaunm joined #gluster
10:23 spandit joined #gluster
10:28 glusterbot News from resolvedglusterbugs: [Bug 1180073] [AFR] getfattr on fuse mount gives error : Software caused connection abort <https://bugzilla.redhat.com/show_bug.cgi?id=1180073>
10:29 iPancreas joined #gluster
10:33 soumya_ joined #gluster
10:45 Elico joined #gluster
10:46 ninkotech_ joined #gluster
10:48 m0zes joined #gluster
10:59 iPancreas joined #gluster
11:02 eljrax joined #gluster
11:03 eljrax Hey, is it possible to flush the performance.cache on-demand? Say I wanted to set performance.cache-refresh-timeout really high, but be able to flush the cache when I know I've made changes to files?
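For context, the timeout eljrax refers to is an ordinary volume option set like this (volume name is a placeholder); this only shows how the option is tuned, not an on-demand flush:

    gluster volume set myvol performance.cache-refresh-timeout 60   # seconds; the default is 1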
11:03 [Enrico] joined #gluster
11:09 shubhendu joined #gluster
11:10 deepakcs joined #gluster
11:10 meghanam joined #gluster
11:19 _shaps_ left #gluster
11:20 samsaffron___ joined #gluster
11:20 samsaffron___ joined #gluster
11:24 SOLDIERz joined #gluster
11:34 bala joined #gluster
11:37 soumya joined #gluster
11:37 kkeithley1 joined #gluster
11:37 kkeithley1 left #gluster
11:38 kkeithley1 joined #gluster
11:38 monotek joined #gluster
11:39 shubhendu_ joined #gluster
11:48 Norky joined #gluster
11:49 social joined #gluster
11:52 purpleidea joined #gluster
11:53 SOLDIERz joined #gluster
11:53 meghanam joined #gluster
11:54 RameshN joined #gluster
11:59 chirino joined #gluster
12:03 bala joined #gluster
12:03 purpleidea joined #gluster
12:03 purpleidea joined #gluster
12:06 T3 joined #gluster
12:08 suman_d joined #gluster
12:08 iPancreas joined #gluster
12:09 hagarth joined #gluster
12:15 itisravi_ joined #gluster
12:16 itisravi__ joined #gluster
12:18 nueces joined #gluster
12:18 iPancreas joined #gluster
12:20 SOLDIERz joined #gluster
12:25 purpleidea joined #gluster
12:32 bala joined #gluster
12:32 soumya joined #gluster
12:33 ppai joined #gluster
12:48 ira joined #gluster
12:49 nangthang joined #gluster
12:50 MattJ_EC joined #gluster
12:51 nangthang joined #gluster
13:10 rafi1 joined #gluster
13:13 suman_d joined #gluster
13:19 iPancreas joined #gluster
13:27 SOLDIERz joined #gluster
13:28 Fen1 joined #gluster
13:28 rjoseph joined #gluster
13:29 glusterbot News from resolvedglusterbugs: [Bug 1075182] Despite glusterd init script now starting before netfs, netfs fails to mount localhost glusterfs shares in RHS 2.1 <https://bugzilla.redhat.com/show_bug.cgi?id=1075182>
13:34 SOLDIERz joined #gluster
13:36 tdasilva joined #gluster
13:41 _Bryan_ joined #gluster
13:46 B21956 joined #gluster
13:47 B21956 joined #gluster
13:49 bene joined #gluster
13:50 plarsen joined #gluster
14:02 virusuy joined #gluster
14:02 suman_d joined #gluster
14:03 shubhendu_ joined #gluster
14:07 nbalacha joined #gluster
14:08 nbalacha joined #gluster
14:15 c0m0 joined #gluster
14:19 iPancreas joined #gluster
14:21 theron joined #gluster
14:26 bala joined #gluster
14:39 kshlm joined #gluster
14:46 T3 joined #gluster
14:50 Elico left #gluster
14:59 rgustafs joined #gluster
15:01 dusmant joined #gluster
15:06 suman_d joined #gluster
15:08 krullie joined #gluster
15:14 lpabon joined #gluster
15:14 bennyturns joined #gluster
15:15 dgandhi joined #gluster
15:17 jmarley joined #gluster
15:20 iPancreas joined #gluster
15:21 hybrid5121 joined #gluster
15:21 bala1 joined #gluster
15:25 _Bryan_ joined #gluster
15:25 lmickh joined #gluster
15:25 suman_d joined #gluster
15:26 neofob joined #gluster
15:27 jobewan joined #gluster
15:30 nishanth joined #gluster
15:34 ninkotech joined #gluster
15:37 krullie joined #gluster
15:43 kanagaraj joined #gluster
15:44 smohan_ joined #gluster
15:48 nishanth joined #gluster
15:48 sputnik13 joined #gluster
15:48 virusuy mmm this is weird
15:49 virusuy mount -t glusterfs it's not working now
15:49 virusuy and was working yesterday
15:53 virusuy i was aware of $LC_NUMERIC bug, but in my case thats correctly set
15:55 monotek joined #gluster
15:59 ildefonso joined #gluster
16:02 calisto joined #gluster
16:11 T3 joined #gluster
16:16 nshaikh joined #gluster
16:20 lpabon_ joined #gluster
16:20 iPancreas joined #gluster
16:23 roost joined #gluster
16:23 theron joined #gluster
16:23 roost joined #gluster
16:24 anoopcs joined #gluster
16:36 jiffin joined #gluster
16:38 hagarth joined #gluster
16:39 soumya joined #gluster
16:52 plarsen joined #gluster
16:52 jmarley joined #gluster
17:00 glusterbot News from newglusterbugs: [Bug 1180231] glusterfs-fuse: Crash due to race in FUSE notify when multiple epoll threads invoke the routine <https://bugzilla.redhat.com/show_bug.cgi?id=1180231>
17:06 iPancreas joined #gluster
17:11 chirino joined #gluster
17:14 fubada purpleidea: i give up  :)
17:15 iPancreas joined #gluster
17:15 enear joined #gluster
17:22 partner ManD: so umm if some peer is gone maybe detach that? though i am not at all sure what exactly the problem is or what kind of volume we are talking about
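A hedged sketch of that suggestion, since the exact setup isn't clear from the log; the hostname is a placeholder, and the detach command only works once glusterd is running:

    gluster peer status                        # list the peers glusterd knows about
    gluster peer detach old-host.example.com   # drop a peer that no longer exists
    # if glusterd refuses to start, the stale peer entries it is tripping over
    # live on disk under /var/lib/glusterd/peers/
    ls /var/lib/glusterd/peers/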
17:32 virusuy hi guys i'm using glusterfs-3.6.1 in a 2-node replica volume
17:32 virusuy but when i shut down one node, the mount ping (using gluster fuse) doesn't work anymore in the other
17:33 virusuy even though every node mounts locally (127.0.0.1:/share)
17:39 lkoranda joined #gluster
17:46 neofob left #gluster
17:54 meghanam joined #gluster
17:55 lkoranda joined #gluster
17:56 partner virusuy: i'm sorry, i cannot understand your problem
17:56 Mexolotl joined #gluster
17:57 partner what is "mount ping" ?
17:58 partner the network.ping-timeout?
17:58 partner errors in logs?
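For context, the timeout partner is asking about can be inspected and tuned like this, assuming the volume is the 'share' mentioned above; with the 42-second default a fuse mount will hang roughly that long after the other replica node disappears before it resumes:

    gluster volume info share | grep ping-timeout      # listed only if changed from the default
    gluster volume set share network.ping-timeout 10   # shorter hang on node failure, at the
                                                        # cost of more frequent reconnect overhead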
17:58 Mexolotl Hi all, I want to have two servers' storage replicated over the internet (two Banana Pis), something like master-master geo-replication. Is that possible with glusterfs, and if yes, how? :)
18:00 lalatenduM joined #gluster
18:02 fandi joined #gluster
18:03 5EXAAA3N6 joined #gluster
18:03 CyrilPeponnet Mexolotl http://joejulian.name/blog/glusterfs-replication-dos-and-donts/
18:04 CyrilPeponnet or from 3.2 http://www.gluster.org/community/documentation/index.php/Gluster_3.2:_Replicated_Volumes_vs_Geo-replication
18:07 Mexolotl CyrilPeponnet: So what would you recommend? I want to build two ownCloud servers running in two locations and synchronized with each other, so I need to sync the ownCloud data folder and the DB
18:08 CyrilPeponnet it depends on latency and bandwidth
18:08 CyrilPeponnet not sure that's a good choice for the db
18:08 morsik use mysql replication, or mysql cluster.
18:08 CyrilPeponnet you should implement db mirroring/clustering
18:09 morsik (but mysql cluster needs at least 3 nodes)
18:10 CyrilPeponnet for files, replica could do the trick, but you have to know that once replicated, your client can fetch data from both bricks.
18:11 CyrilPeponnet maybe you are looking for a replicated fs like DRBD instead
18:11 elico joined #gluster
18:12 SOLDIERz joined #gluster
18:14 chirino joined #gluster
18:14 CyrilPeponnet so to sum up, LB -> 2 instances of ownCloud -> cluster db | dedicated replicated storage.
18:15 CyrilPeponnet it depends if you want to do active/passive or RR/geo balancing
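If the file side of that layout were done with a plain gluster replica rather than DRBD or geo-replication, volume creation would look roughly like this; the hostnames, brick paths and data directory are placeholders, and a two-node replica over a WAN inherits the latency caveats discussed earlier in this log:

    gluster peer probe pi2                   # run on pi1
    gluster volume create ocdata replica 2 pi1:/bricks/ocdata pi2:/bricks/ocdata
    gluster volume start ocdata
    # each ownCloud host then mounts the volume for its data directory
    mount -t glusterfs localhost:/ocdata /var/www/owncloud/data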
18:19 elico1 joined #gluster
18:19 sputnik13 joined #gluster
18:28 lmickh joined #gluster
18:33 squizzi joined #gluster
18:33 nishanth joined #gluster
18:34 jaank joined #gluster
19:03 Pupeno_ joined #gluster
19:05 John_HPC joined #gluster
19:06 John_HPC I've been trying to do a fix layout. Gluster01 (the first server, containing 6 bricks) takes about 20 days to complete. The other servers, Gluster02-Gluster06, die pretty quickly with this error (informational lines omitted)
19:06 John_HPC http://paste.ubuntu.com/9694342/
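For reference, the fix-layout run John_HPC describes is started and monitored per volume with something like the following (the volume name is assumed from the mount point mentioned later):

    gluster volume rebalance glustervol01 fix-layout start
    gluster volume rebalance glustervol01 status      # per-node progress and failure counts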
19:10 enear joined #gluster
19:29 plarsen joined #gluster
19:30 squizzi joined #gluster
19:39 lalatenduM joined #gluster
19:43 partner just finished my fix-layout on 25-brick system, took 33 days
19:44 partner though i didn't have any errors on the way
19:44 John_HPC mine completed but didn't fix the duplicated directories
19:44 neofob joined #gluster
19:44 John_HPC mine is a total of 18 bricks, with replication 18x2=36 layout
19:46 partner whats up with those couple of lookup failed lines on the log, is the dir present?
19:47 partner maybe its some split-brain situation that is just visible on the rebalance operation?
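A quick way to check that split-brain theory (volume name assumed from the mount point below):

    gluster volume heal glustervol01 info              # entries still pending heal
    gluster volume heal glustervol01 info split-brain  # entries actually in split-brain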
19:47 John_HPC ll /mnt/glustervol01/archive/bindata/Ft/Schu4/Chemicals/ConfirmationScreen-old/20121001a/ total 12 drwxr-xr-x 2 insights insights 12288 Jun  6  2013 201105813
19:47 John_HPC the directory does exist
19:48 partner anything on the brick logs that hold that directory/content of it?
19:49 partner oh, damn, i need to background, will leave you to the hands of the experts
19:51 John_HPC Actually yes... gluster05:SU:/mnt/raidvol03/archive/bindata/Ft/Schu4/Chemicals/ConfirmationScreen-old/ ll total 12 drwxr-xr-x  5 insights insights 4096 May 28  2013 20111031a drwxr-xr-x  6 insights insights 4096 May 28  2013 20120806a drwxr-xr-x 14 insights insights 4096 Dec 31 10:17 20121001a
19:51 John_HPC it has more sub directories than what the mount point is serving
19:53 John_HPC the mount point is now showing more directories
20:12 coredump joined #gluster
20:16 lalatenduM joined #gluster
20:16 theron joined #gluster
20:23 squizzi joined #gluster
20:24 virusuy joined #gluster
20:24 virusuy joined #gluster
20:28 DV joined #gluster
20:28 squizzi_ joined #gluster
20:58 lpabon joined #gluster
21:11 theron joined #gluster
21:13 badone joined #gluster
21:16 sputnik13 joined #gluster
21:33 iPancreas joined #gluster
21:50 daddmac joined #gluster
21:50 jackdpeterson joined #gluster
21:56 jackdpeterson http://pastebin.com/Ac4sCMdr -- Looking for some performance thoughts. Currently rsyncing a large number of directories/files across two pairs of gluster servers. None of the servers are CPU bound, IO bound, or using near their capacity networkwise as far as I can tell. Yet somehow things just seem to be particularly sluggish for writes.
21:56 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
21:57 calisto joined #gluster
21:58 Staples84 joined #gluster
22:03 guizarea joined #gluster
22:07 guizarea Good afternoon, we are running Gluster 3.6.1. Since we upgraded to version 3.6.1 about three weeks ago, the network utilization between the two gluster servers has increased. Both servers are on different sites, using different subnets
22:08 guizarea we have georeplication running between the sites.
22:09 purpleidea joined #gluster
22:11 JoeJulian jackdpeterson: With fuse, the context switches are probably the bottleneck. If you could portion out your copy using multiple clients that may help. Also, be sure to use --inplace for rsync, otherwise you're creating filenames that will hash to one brick, then renaming them. The new filename will hash to a different brick and produce inefficiencies.
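A hedged sketch of the rsync invocation being suggested, with made-up source and destination paths; splitting the source tree across several client machines, each with its own fuse mount, is the other half of the advice:

    # --inplace writes to the final file name, avoiding the temp-name + rename
    # pattern that hashes the file to one brick and then leaves it on the wrong one
    rsync -a --inplace /data/source/ /mnt/glustervol/dest/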
22:13 guizarea I'll check that...Thank you
22:14 jackdpeterson @joeJulian -- thanks, I'll give the inplace part a shot here. I've got one more interesting thing I'm noticing on the new (gluster 3/4 boxes -- CentOS 7): E [marker.c:2542:marker_removexattr_cbk] -  ... No data available occurred while creating symlinks ...
22:17 JoeJulian guizarea: I have no idea about extra network utilization with 3.6.1. Have you looked in logs to see if there's any clues there?
22:20 purpleidea joined #gluster
22:21 siel joined #gluster
22:23 guizarea I'm new to Gluster, our Gluster expert left the company last week, we (those who have hardly touched Gluster) are now managing the system
22:24 JoeJulian Ah yes, I'm aware of the change.
22:24 siel joined #gluster
22:24 siel joined #gluster
22:25 JoeJulian I believe the beer account has been passed on to you. Were you aware you now owe me several? ;)
22:25 guizarea :)
22:26 guizarea If you can help us become the expert that the other person was, several cases could be coming your way :)
22:26 purpleidea joined #gluster
22:27 guizarea We really are newbies at this
22:27 JoeJulian Step 1: http://joejulian.name/
22:28 n-st joined #gluster
22:28 JoeJulian Well... http://joejulian.name/blog/category/glusterfs/
22:31 sputnik13 joined #gluster
22:31 gildub joined #gluster
22:32 purpleidea joined #gluster
22:33 guizarea We have a lot of reading to do.  Thank you for the assist.
22:35 iPancreas joined #gluster
22:37 purpleidea joined #gluster
22:37 purpleidea joined #gluster
22:54 purpleidea joined #gluster
23:15 purpleidea joined #gluster
23:16 Pupeno joined #gluster
23:26 purpleidea joined #gluster
23:30 ildefonso joined #gluster
23:31 purpleidea joined #gluster
23:33 Pupeno_ joined #gluster
23:35 guizarea left #gluster
23:36 iPancreas joined #gluster
23:42 purpleidea joined #gluster
23:45 Staples84_ joined #gluster
23:47 purpleidea joined #gluster
23:58 purpleidea joined #gluster
