IRC log for #gluster, 2013-07-25

All times shown according to UTC.

Time Nick Message
00:00 markus_gl you want a replicate?
00:00 markus_gl over 2 servers?
00:00 markus_gl data is still there?
00:00 markus_gl we are fine
00:00 markus_gl you have to pick one brick as trusted
00:01 markus_gl on 01 you have to apt-get purge
00:02 markus_gl upgrade 02 to final 3.4
00:02 markus_gl do not use beta
00:03 markus_gl still there?
00:03 markus_gl i need to sleep
00:03 chlunde joined #gluster
00:05 markus_gl 2AM?
00:05 markus_gl german?
00:09 ipalaus joined #gluster
00:09 ipalaus joined #gluster
00:25 hagarth joined #gluster
00:47 fidevo joined #gluster
00:52 yinyin joined #gluster
01:13 MugginsM joined #gluster
01:16 markus__ joined #gluster
01:20 bala joined #gluster
01:30 fidevo joined #gluster
01:35 harish joined #gluster
01:49 yinyin joined #gluster
01:50 lpabon joined #gluster
01:50 lpabon joined #gluster
01:53 recidive joined #gluster
02:02 ipalaus joined #gluster
02:02 ipalaus joined #gluster
02:05 nickw joined #gluster
02:09 nickw hello. I found the rebalance function in 3.4 was not running as expected
02:09 Humble joined #gluster
02:11 nickw after rebalancing, it generated some zero-length files in the newly added bricks,
02:12 nickw what's more, some files can't be deleted in the volume mountpoint while others can
02:13 nickw but if I umount & mount the volume again, these files can be deleted.. quite strange
02:16 zaitcev joined #gluster
02:23 recidive joined #gluster
02:24 harish joined #gluster
02:48 lalatenduM joined #gluster
02:59 jag3773 joined #gluster
03:00 raghug joined #gluster
03:02 kkeithley joined #gluster
03:09 kshlm joined #gluster
03:19 MugginsM has anyone played with Linux-AIO on gluster on AWS?
03:22 * JoeJulian is back
03:25 saurabh joined #gluster
03:30 hagarth joined #gluster
03:32 JoeJulian "[08:03] <bstr> i *cannot* loose the data on the brick" It would take some very strange occurrences to make the data come loose from a hard drive. VIP-ire was right, though, in that you restore the uuid to /var/lib/glusterd/glusterd.info etc.
03:32 JoeJulian semiosis: PDX was busier than Summit! Holy cow.
03:34 JoeJulian VIP-ire: I wouldn't expect the libgfapi support in distro channels before 7.0.
03:39 bulde joined #gluster
03:39 JoeJulian semiosis: regarding bug 987624, that was color coded in someone's slides and I said to Eco, "That should be a feature request!" so he opened bugzilla. :D
03:39 glusterbot Bug http://goo.gl/rBmJ1W low, unspecified, ---, kaushal, NEW , Feature Request: Color code replica or distribute pairings output from gluster volume info
03:42 JoeJulian " <basicer> How does gluster determin if a peer is up?" From the client perspective, if the tcp connection is open and a communication hasn't timed out. Between peers, I'm not really sure.
03:43 JoeJulian Cry: "when I try to run automake, it blocks on a lock in the autom4ke file" I assume you're building something that resides on glusterfs. I'd check the client log and see if there are any clues there.
03:45 samppah JoeJulian: do you have more information about libgfapi support? i think there is a saw a bug report which mentioned 6.5 although it promises nothing...
03:45 bharata-rao joined #gluster
03:48 mmalesa joined #gluster
03:51 samppah err
03:51 samppah there is not saw...
03:52 samppah i think i saw... if that makes any more sense :)
03:53 samppah damn mobile lag :)
03:55 JoeJulian samppah: No specific information, just the politics behind changing versions in EPEL. Since gluster will not be in upstream RHEL, it's not supposed to be in CentOS.
03:55 JoeJulian nickw: How do you know the zero-length files are not supposed to be there?
03:56 JoeJulian nickw: The need to remount, though, is sub-optimal. Please file a bug report on that. Include client and brick logs.
03:56 glusterbot http://goo.gl/UUuCq
04:01 mohankumar joined #gluster
04:05 krink joined #gluster
04:09 Technicool joined #gluster
04:14 nickw JoeJulian, I don't know that actually. but the rebalancing behavior between 3.3.x and 3.4 is different. In 3.3.2, after rebalancing, the newly added brick just contains the files that need to be there
04:15 nickw while 3.4 has some zero-length files there
04:15 JoeJulian Check those files ,,(extended attributes). There's probably some clue there.
04:15 glusterbot (#1) To read the extended attributes on the server: getfattr -m .  -d -e hex {filename}, or (#2) For more information on how GlusterFS uses extended attributes, see this article: http://goo.gl/Bf9Er
04:16 sgowda joined #gluster
04:16 nickw let me check
04:18 nickw i got a linkto attr
04:19 nickw trusted.glusterfs.dht.linkto=xxx
04:19 JoeJulian That's what I expected.
04:19 nickw so that's a normal behavior?
04:19 JoeJulian Looks that way.
04:20 JoeJulian That linkto is telling you where the file actually is. Apparently something calculated the hash as pointing to the new brick(s) but since the file wasn't actually there, it created the linkto.
04:20 nickw yes I can understand that
04:20 JoeJulian Since hashing is supposed to be consistent, it does seem a little odd.
04:21 nickw let me check the rm thing
04:21 JoeJulian I'm guessing it happened during transition.
04:21 nickw the status shows complete
04:22 nickw just a few files there
04:24 recidive joined #gluster
04:24 nickw after deleting all the files in the volume mountpoint, the zero-length link files are still there in the new brick
04:25 shylesh joined #gluster
04:25 JoeJulian hmm, that does sound wrong.
04:26 nickw but when I issue a 'stat FILENAME_THAT_IS_REMOVED' within the mountpoint, the link file will go away
04:26 JoeJulian Oh, ok. Then I wouldn't worry about it.
04:26 nickw what's wrong here
04:27 sunilva84 joined #gluster
04:32 nickw thank you JoeJulian
04:32 JoeJulian You're welcome.
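For reference, a sketch of the check glusterbot pointed to, run against one of the zero-length files on the new brick (the path and the output value are hypothetical); a zero-size file with only the sticky bit set plus this xattr is a DHT link file, and the value names the subvolume that holds the real data:

    getfattr -m . -d -e hex /export/newbrick/some/file
    # trusted.glusterfs.dht.linkto=0x6d79766f6c2d636c69656e742d3100   ("myvol-client-1")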
04:35 kkeithley_ FWIW, gluster in EPEL is 3.2.7 because we shipped HekaFS in EPEL and that's the version of glusterfs that it requires. And it's not a secret, per se, anyone can see the bugzillas: glusterfs will be added to RHEL 6.5 for qemu. This will be the RHS version and will be the client-side-only RPM; glusterfs will be withdrawn from EPEL when that happens.
04:36 JoeJulian Oh, cool.
04:36 kkeithley_ And the community RPMs for glusterfs for RHEL, CentOS, etc., will continue to be available from download.gluster.org.
04:36 JoeJulian Hey, did you see the earlier bug about glusterd.service wanting glusterfsd.service?
04:36 kkeithley_ yes
04:37 kkeithley_ builds to fix that are happening now
04:37 CheRi_ joined #gluster
04:37 JoeJulian cool. I've been mostly disconnected for the last couple days.
04:38 satheesh joined #gluster
04:40 sunilva84 joined #gluster
04:40 [1]corni joined #gluster
04:42 shubhendu joined #gluster
04:50 ipalaus joined #gluster
04:51 Guest76148 joined #gluster
04:51 samppah kkeithley_: sounds good, thanks for the information (libgfapi)
04:56 dusmant joined #gluster
04:57 Humble joined #gluster
04:58 dusmant joined #gluster
04:59 raghu joined #gluster
05:05 tg2 joined #gluster
05:07 vpshastry joined #gluster
05:20 bala joined #gluster
05:33 Humble joined #gluster
05:33 fidevo joined #gluster
05:42 rjoseph joined #gluster
05:43 psharma joined #gluster
05:44 lalatenduM joined #gluster
05:45 lalatenduM joined #gluster
05:49 vigia joined #gluster
05:53 glusterbot New news from resolvedglusterbugs: [Bug 884280] distributed volume - rebalance doesn't finish - getdents stuck in loop <http://goo.gl/s4xvj>
06:22 StarBeast joined #gluster
06:22 SynchroM joined #gluster
06:27 bulde joined #gluster
06:27 Recruiter joined #gluster
06:30 bulde1 joined #gluster
06:38 shubhendu joined #gluster
06:41 guigui3 joined #gluster
06:45 dusmant joined #gluster
06:49 Jaap joined #gluster
06:49 Jaap left #gluster
06:50 ekuric joined #gluster
06:51 JaapHaagmans joined #gluster
06:56 asias joined #gluster
06:57 vshankar joined #gluster
07:01 ctria joined #gluster
07:02 satheesh joined #gluster
07:04 ricky-ticky joined #gluster
07:11 asias joined #gluster
07:13 VIP-ire hi
07:13 glusterbot VIP-ire: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
07:13 VIP-ire Looks like I have another problem since I've upgraded to Gluster 3.4
07:14 hybrid5122 joined #gluster
07:14 VIP-ire I had a simple setup running 3.3. Two nodes were both gluster server and client. 3 replicated volumes between the two nodes
07:14 VIP-ire I've upgraded to 3.4, everything went fine
07:15 VIP-ire and, I've decided to re-install both nodes, one at a time, to see how gluster reacts (I don't care about the data, this is a test setup, I just want to understand)
07:15 ngoswami joined #gluster
07:15 VIP-ire so, I've reinstalled one node. Re-created the bricks at the same place
07:16 Humble joined #gluster
07:16 VIP-ire done a gluster volume sync node1 all (node1 being the gluster node still alive)
07:16 dobber_ joined #gluster
07:16 VIP-ire I've rsync'd with --xattrs to get the correct extended attributes on the bricks directory
07:17 VIP-ire and everything seems to be fine
07:17 VIP-ire except that now, when I reboot this reinstalled node, gluster volumes are not started automatically
07:17 VIP-ire no glusterfsd process running
07:17 VIP-ire I have to manually run gluster volume start <volname> force
07:18 piotrektt joined #gluster
07:18 piotrektt joined #gluster
07:19 VIP-ire maybe I've done something stupid, but I haven't found a doc for this simple scenario: one node died, I want to reinstall it (it's not a replace-bricks operation, because the old brick isn't available anymore. I just want to reinstall one node, and re-configure everything as it was before)
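A condensed sketch of the rebuild sequence VIP-ire describes (hostnames, volume name and brick paths are placeholders, and re-joining the trusted pool, by a peer probe from the surviving node or by restoring the old uuid, is assumed to have happened first); the last command is the manual step he has to repeat after each reboot:

    # on the reinstalled node: pull the volume definitions from the surviving node
    gluster volume sync node1 all
    # copy the brick root's extended attributes (volume-id) so the brick is accepted
    rsync -a --xattrs node1:/data/brick1/ /data/brick1/
    # bricks did not come up on boot in his case, so force-start them once
    gluster volume start myvol force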
07:21 lubko hello everyone. I'm wondering whether there's any activity focused on improving GlusterFS documentation, widening its coverage of architectural topics to the point where it would be complete enough to be readable like a book?
07:22 lubko i've had a hard time looking for documentation on the protocol, management interface, actual use of xattrs, etc. though I've found some excellent information around in hekafs blogs or presentation slides by red-hatters
07:24 lubko therefore I'm wondering what would be a desirable thing to do if I wished things were easier for newcomers to gluster; such as improving the wiki or maybe starting a collaborative project on producing a book, or something
07:26 harish joined #gluster
07:28 lalatenduM joined #gluster
07:36 hybrid512 joined #gluster
07:40 mmalesa joined #gluster
07:41 hybrid5121 joined #gluster
07:54 bulde joined #gluster
07:54 hagarth lubko: sounds like a great idea
07:55 hagarth lubko: we have started moving all documentation to markdown as a first step.
07:59 chirino joined #gluster
07:59 glusterbot New news from resolvedglusterbugs: [Bug 763046] portmapper functionality <http://goo.gl/oBaXT>
08:01 lubko hagarth: in the doc/ tree of the gluster source code, I presume. How does it relate to the wiki; which documentation is more appropriate in the source tree and which in the wiki?
08:06 skyw joined #gluster
08:09 mooperd joined #gluster
08:11 _pol joined #gluster
08:11 harish joined #gluster
08:13 StarBeast joined #gluster
08:16 hagarth lubko: the plan is to roll out the doc/ tree in the source code as html in gluster.org.
08:16 hagarth lubko: going forward the source tree would be the master copy.
08:18 lubko hagarth: that makes sense. thank you!
08:19 ujjain joined #gluster
08:19 hagarth lubko: you are welcome, feel free to send out your contributions there.
08:23 JaapHaagmans joined #gluster
08:23 bulde joined #gluster
08:29 mmalesa joined #gluster
08:30 JaapHaagmans Does anyone have experience auto scaling Gluster server instances in AWS? I wouldn't necessarily be looking for a way to scale out (although it would be mighty nice), but more for a way to replace unhealthy instances. Our main target will be reliability, so I'm thinking about launching a server instance in two AZs, but I would like to know if there's
08:30 JaapHaagmans a way to automatically replace a server instance. I could automatically install and run gluster, but how to add the new instance to the pool if you don't know the IP address or hostname of the other servers (because they might as well have been replaced in the past)?
08:35 hybrid5121 joined #gluster
08:40 vpshastry joined #gluster
08:40 dusmant joined #gluster
08:42 shubhendu joined #gluster
08:42 lubko d
08:42 lubko oops
08:47 mick271 joined #gluster
08:47 ndevos JaapHaagmans: some people in here do (not me), and I think most use something like dyndns.org to get fixed hostnames which solves at least a part of the problem
08:50 mick272 joined #gluster
08:50 hybrid5121 joined #gluster
08:51 JaapHaagmans ndevos, that might be a way to go, thanks!
08:53 JaapHaagmans Automatic upscaling would be very hard though, right?
08:54 ndevos upscaling (adding bricks to a volume) is pretty easy, it's downscaling that I would see more as a problem
08:55 JaapHaagmans I meant upscaling in the auto scaling sense of the word (adding instances)
08:56 ndevos well, a new server can only get added to the trusted pool from within the pool, it can not join by itself, that needs some solving too
08:57 ndevos other than that, I do not see much of a difficulty
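A sketch of the "only from within the pool" constraint ndevos describes, as a bootstrap script on an existing member might run it once the new instance has registered a stable hostname (hostnames, volume name and brick path are placeholders):

    # must be run on a server that is already in the trusted pool
    gluster peer probe gluster-node3.example.com
    gluster peer status
    # then grow the volume with a brick on the new node
    gluster volume add-brick myvol gluster-node3.example.com:/export/brick1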
09:01 shubhendu joined #gluster
09:04 toad joined #gluster
09:05 dusmant joined #gluster
09:13 hybrid512 joined #gluster
09:15 Humble joined #gluster
09:17 hybrid5121 joined #gluster
09:18 JaapHaagmans joined #gluster
09:23 vpshastry joined #gluster
09:24 bala joined #gluster
09:24 ccha semiosis: My friend used your src package and recompiled it for lucid... he removed "include /usr/share/cdbs/1/class/python-module.mk" from debian/rules and removed the versioned build depends for cdbs and python-all-dev
09:24 ccha would that be ok ?
09:25 ccha compilation succeeded and it's being tested right now
09:26 jag3773 joined #gluster
09:27 rjoseph joined #gluster
09:31 mgebbe_ joined #gluster
09:55 bulde joined #gluster
09:55 hybrid5121 joined #gluster
10:06 hybrid5123 joined #gluster
10:08 sgowda joined #gluster
10:20 StarBeast joined #gluster
10:23 sunilva84 joined #gluster
10:25 Humble joined #gluster
10:39 shubhendu joined #gluster
10:43 sunilva84 joined #gluster
10:45 plarsen joined #gluster
10:50 spider_fingers joined #gluster
10:52 sgowda joined #gluster
10:57 Humble joined #gluster
11:01 failshell joined #gluster
11:13 raghug joined #gluster
11:15 sasundar joined #gluster
11:15 dobber joined #gluster
11:34 eightyeight joined #gluster
11:36 eightyeight joined #gluster
11:37 markus_gl I have a lot of issues with having /home glustered via replica
11:37 markus_gl 32bit Applications do not start because fstat detects wrong datatypes
11:38 hagarth markus_gl: can you please open a bug for that one?
11:39 markus_gl setting that fusemount to inode32 still causes issues, with firefox finding a corrupted places.sqlite
11:39 markus_gl i am collecting information at the moment
11:39 markus_gl running a newly installed centos 5.9
11:40 markus_gl running as a login-server with apps like mozilla ones, libreoffice and open office
11:41 markus_gl also nx is there and sadly has problems if using 32bit inodes
11:41 markus_gl nx=nomachine
11:42 markus_gl hagarth i am running 3.4 final
11:42 markus_gl connecting a centos5.9 and a centos6.4 server
11:42 edward1 joined #gluster
11:43 hagarth markus_gl: ok
11:43 markus_gl I think a replica of homes with 120 Users over a 1Gbit network is overkill, right?
11:44 vpshastry1 joined #gluster
11:44 hagarth that does seem like.
11:44 markus_gl but anyway, i am willing to file a bugreport
11:44 glusterbot http://goo.gl/UUuCq
11:44 markus_gl can you guide me to file a useful one?
11:46 skyw joined #gluster
11:46 samppah hagarth: are there any recommendations for setting the background-qlen mount option?
11:47 bala joined #gluster
11:47 hagarth markus_gl: presenting a description of the problem and providing client log files from /var/log/glusterfs will help.
11:47 hagarth samppah: setting to 256 has been found to be useful in some internal tests.
11:48 samppah hagarth: hmm.. any downsides if it's much higher than that?
11:48 hagarth samppah: i have seen that being set to 512 also.
11:48 Norky joined #gluster
11:49 samppah umm well.. i have it currently set to 65535...
11:49 samppah just wondering if that could cause problems for rebalance
11:50 shubhendu joined #gluster
11:50 hagarth samppah: that seems to be too high but rebalance should not be affected.
11:50 samppah i'm also seeing other weird stuff happening i haven't seen before and thinking about lowering it
11:50 samppah although i'm  not completely sure if other stuff is caused by glusterfs or something else
11:50 samppah hagarth: okay
11:51 hybrid5121 joined #gluster
11:51 hagarth samppah: could you report your problems in that bug report?
11:55 samppah hagarth: i think this is a different issue and i'm not sure if it's related to glusterfs at all.. just a thought that setting background-qlen too high might cause both issues
11:57 hagarth samppah: can you reduce it and check?
11:57 markus_gl @hagarth: I also have problems with the nfs mount, big file transfers hang after some time and the server load rises linearly
12:00 samppah hagarth: i'll do that
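For context, background-qlen is an option of the native FUSE mount; a minimal fstab-style sketch with the 256 value hagarth suggests (server, volume and mount point are placeholders, and the option name is as spelled in the 3.4 mount script):

    server1:/myvol  /mnt/myvol  glusterfs  defaults,_netdev,background-qlen=256  0 0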
12:00 hagarth markus_gl: do you have write-behind enabled in the gluster nfs server volume file?
12:00 markus_gl I changed nothing
12:00 markus_gl outofthebox setup
12:01 markus_gl tell me more please
12:03 manik joined #gluster
12:06 hagarth markus_gl: volume set <volname> performance.nfs.write-behind on
12:08 markus_gl ok, thanks, and how do I mount? (maybe you can tell me more news ;-)
12:10 hagarth markus_gl: no changes in mount options, it is only a server side change.
12:11 markus_gl localhost:/centos5_pmaster on /glusternfs type nfs (rw,addr=127.0.0.1)
12:12 markus_gl gonna start a big transfer now, so pray for me or wish me luck
12:14 hagarth markus_gl: good luck.
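hagarth's one-liner written out with the volume name visible in markus_gl's mount output (centos5_pmaster); it is a server-side change only, so the client mount stays as it is:

    gluster volume set centos5_pmaster performance.nfs.write-behind on
    # the new setting should then show up under "Options Reconfigured"
    gluster volume info centos5_pmaster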
12:16 markus_gl https://bugzilla.redhat.com/show_bug.cgi?id=988367
12:16 glusterbot <http://goo.gl/dBAVht> (at bugzilla.redhat.com)
12:16 glusterbot Bug 988367: high, unspecified, ---, csaba, NEW , 32 bit Applications not working proberly despite mount option enable-ino32
12:21 ricky-ticky joined #gluster
12:22 ccha I have a replicated volume across 2 servers. I added 1 new server with add-brick replica 3. now I have "Number of Bricks: 1 x 3 = 3"
12:22 ccha what should I do now ? gluster volume rebalance ?
12:22 markus_gl one brick per server = 3
12:23 ccha I got this error if I rebalance
12:23 ccha Volume VOL_REPL1 is not a distribute volume or contains only 1 brick.
12:24 markus_gl with 2 bricks per server = 2 x 3 = 6
12:24 ccha I can't do replication on 3 servers ?
12:24 ccha to have 3 servers with the same data
12:25 markus_gl you have 3 identical bricks through replica 3
12:25 ccha yes
12:26 ccha oh maybe self heal will copy the data to my new replica server ?
12:26 markus_gl what's your problem?
12:26 markus_gl is iftop showing traffic?
12:27 markus_gl gluster volume heal VOL_REPL1 info
12:27 markus_gl what output?
12:28 ccha I had some data in the volume before I added the new replica
12:28 ccha I want the new replica to get that data too
12:28 ccha I just added a new file in the volume. this file is on the new replica
12:29 ccha but the old files are not there
12:29 markus_gl you added the new brick with data?
12:30 ccha new brick was empty
12:30 markus_gl good
12:30 ccha but the volume wasn't
12:30 markus_gl please invoke: gluster volume heal VOL_REPL1 info
12:30 ccha Number of entries: 0
12:30 ccha for the 3 replica
12:30 markus_gl then you are fine
12:30 markus_gl is the data replicated?
12:31 ccha new data yes
12:31 ccha but not old ones
12:31 ccha oh maybe I should read these files
12:31 markus_gl looks like a bug
12:31 ccha to trigger the replication to the new replica
12:32 markus_gl find $gluster-mount -noleaf -print0 | xargs --null stat >/dev/null
12:32 hybrid512 joined #gluster
12:32 markus_gl maybe it is a layer 8 problem?
12:33 markus_gl lately I was looking for data with a user account which was not permitted
12:33 dusmant joined #gluster
12:34 ccha yes that is
12:34 markus_gl yippee! nfs is working at 75MB/s and is stable! thank you so much!
12:34 ccha when a client accesses these old files
12:34 ccha the new replica heals them
12:35 ccha so rebalance is only for distributed ?
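The find/stat walk markus_gl gives above triggers healing by forcing a lookup on every file; on 3.4 the self-heal daemon can be asked to do the same crawl directly, a sketch using ccha's volume name:

    gluster volume heal VOL_REPL1 full
    # check what is still pending on each replica
    gluster volume heal VOL_REPL1 info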
12:35 markus_gl hagarth: your turn!
12:37 hybrid512 joined #gluster
12:38 partner is a stopped rebalance supposed to continue from the place it got to, or does it start from the beginning (analyzing files)? i recall reading the first one but seeing the stats i'm not sure..
12:39 partner due to the known open-files issue i have to start and stop it semi-frequently..
12:40 hybrid5121 joined #gluster
12:40 alexey__ joined #gluster
12:40 markus_gl I need more info on performance.nfs.write-behind.
12:40 manik joined #gluster
12:40 markus_gl It is not well documented
12:49 mick271 joined #gluster
12:49 StarBeast joined #gluster
12:49 Humble joined #gluster
12:52 deepakcs joined #gluster
12:53 shylesh joined #gluster
12:53 mooperd joined #gluster
12:54 hybrid5121 joined #gluster
13:18 sasundar joined #gluster
13:19 dewey joined #gluster
13:23 Humble joined #gluster
13:32 mooperd joined #gluster
13:45 jag3773 joined #gluster
13:49 recidive joined #gluster
13:51 rwheeler joined #gluster
13:52 rjoseph joined #gluster
13:53 bala joined #gluster
13:54 jebba joined #gluster
13:54 Humble joined #gluster
13:56 skyw joined #gluster
14:03 wushudoin joined #gluster
14:04 mhallock joined #gluster
14:06 dbruhn anyone ever used the cisco SFS-HCA-320-A1 cards for their IB setup?
14:08 TuxedoMan joined #gluster
14:10 rwheeler joined #gluster
14:13 Humble joined #gluster
14:16 krink joined #gluster
14:18 aliguori joined #gluster
14:20 bugs_ joined #gluster
14:21 skyw joined #gluster
14:31 daMaestro joined #gluster
14:33 kaptk2 joined #gluster
14:40 shylesh joined #gluster
14:47 sprachgenerator joined #gluster
14:51 spider_fingers left #gluster
14:57 bdperkin joined #gluster
14:58 bdperkin joined #gluster
15:04 mmalesa joined #gluster
15:06 hagarth joined #gluster
15:07 lalatenduM joined #gluster
15:08 dhsmith joined #gluster
15:09 dbruhn anyone seen jclift this morning?
15:16 puebele joined #gluster
15:21 bala joined #gluster
15:22 Humble joined #gluster
15:26 puebele2 joined #gluster
15:39 semiosis @seen jclift
15:39 glusterbot semiosis: jclift was last seen in #gluster 1 week, 2 days, 19 hours, and 39 seconds ago: <jclift> dbruhn: Ahhh, this is one killer server: http://goo.gl/boCLW
15:40 jag3773 joined #gluster
15:42 dbruhn thanks semiosis
15:42 semiosis yw
15:42 hagarth @seen me
15:42 glusterbot hagarth: I have not seen me.
15:57 wushudoin left #gluster
16:11 TuxedoMan joined #gluster
16:15 ipalaus joined #gluster
16:16 ipalaus joined #gluster
16:19 sprachgenerator joined #gluster
16:19 zaitcev joined #gluster
16:28 hybrid5122 joined #gluster
16:30 avati left #gluster
16:30 avati joined #gluster
16:30 avati :O
16:31 TuxedoMan joined #gluster
16:32 semiosis :O
16:33 avati hey semiosis !
16:33 semiosis how's it going?
16:34 avati too many meetings, bad for health
16:34 semiosis i know the feeling
16:38 Humble joined #gluster
16:40 cyberbootje joined #gluster
16:43 lpabon joined #gluster
17:20 zhuchkov joined #gluster
17:25 vpshastry joined #gluster
17:29 DWSR joined #gluster
17:31 lalatenduM joined #gluster
17:36 Humble joined #gluster
17:38 \_pol joined #gluster
17:49 vpshastry left #gluster
17:50 neofob left #gluster
17:51 \_pol joined #gluster
17:54 xdexter joined #gluster
17:54 Humble joined #gluster
17:54 xdexter If I do a rsync between two servers, it takes about 30 seconds. If I create a GlusterFS volume replicated between these 2 servers, mount the shared volume on a third server and rsync to that volume, it takes 5 minutes. Is that expected?
18:05 semiosis you might do better with --inplace and --whole-file on the rsync
18:09 xdexter semiosis, rsync -atprolv --inplace --whole-file src dst
18:09 xdexter this?
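A sketch of the flags semiosis suggests, written against a FUSE mount on the third server (paths are placeholders); --whole-file skips rsync's delta algorithm, which would otherwise read every existing destination file back over the network, and --inplace avoids writing to a temporary dotfile name that DHT may hash to a different brick:

    rsync -av --inplace --whole-file /data/src/ /mnt/glustervol/dst/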
18:28 Recruiter joined #gluster
18:29 _pol joined #gluster
18:54 rcheleguini joined #gluster
18:54 TuxedoMan joined #gluster
18:57 ipalaus joined #gluster
18:57 ipalaus joined #gluster
18:59 duerF joined #gluster
19:00 Humble joined #gluster
19:09 Avatar[01] joined #gluster
19:15 Technicool joined #gluster
19:18 fidevo joined #gluster
19:18 soukihei joined #gluster
19:18 baoboa joined #gluster
19:18 MediaSmurf joined #gluster
19:18 NuxRo joined #gluster
19:18 yosafbridge` joined #gluster
19:18 lanning joined #gluster
19:18 haidz joined #gluster
19:18 Bluefoxicy joined #gluster
19:18 a2 joined #gluster
19:18 tru_tru joined #gluster
19:18 Ramereth joined #gluster
19:18 atrius` joined #gluster
19:18 stopbit joined #gluster
19:18 ccha joined #gluster
19:18 sysconfi- joined #gluster
19:18 Gugge joined #gluster
19:18 Goatbert joined #gluster
19:18 irk joined #gluster
19:21 sprachgenerator joined #gluster
19:21 hagarth joined #gluster
19:21 mhallock joined #gluster
19:21 eightyeight joined #gluster
19:21 chlunde joined #gluster
19:21 nightwalk joined #gluster
19:21 Kins joined #gluster
19:21 mtanner_ joined #gluster
19:21 atrius joined #gluster
19:21 phox joined #gluster
19:21 mibby| joined #gluster
19:22 Cry joined #gluster
19:22 _pol joined #gluster
19:23 _pol joined #gluster
19:24 _pol joined #gluster
19:24 Goatbert joined #gluster
19:25 tqrst let's say I setup rrdns such that the gluster hostname resolves to all my gluster servers. I "mount gluster:/myvol -t glusterfs /mnt/myvol". By bad luck, this new client ends up picking the wrong ip from this list (e.g. a server that's down for maintenance). Will the client a) try the next ip in the list for the hostname or b) fail and die?
19:27 tqrst all signs point to b), which makes me wonder why I would even bother setting rrdns up
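One hedged workaround for the mount-time lookup tqrst describes is naming a fallback volfile server explicitly rather than relying on rrdns alone; a sketch with placeholder hostnames (the option is spelled backupvolfile-server in the 3.4 mount script, and it only matters for fetching the volfile, not for later I/O):

    mount -t glusterfs -o backupvolfile-server=gluster2 gluster1:/myvol /mnt/myvol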
19:29 _pol_ joined #gluster
19:29 Cenbe joined #gluster
19:31 puebele1 joined #gluster
19:34 codex joined #gluster
19:35 rcheleguini joined #gluster
19:35 jag3773 joined #gluster
19:35 jmeeuwen joined #gluster
19:35 basicer joined #gluster
19:35 georgeh|workstat joined #gluster
19:35 ultrabizweb joined #gluster
19:35 JusHal joined #gluster
19:39 Humble joined #gluster
19:41 recidive joined #gluster
19:47 Dave2 joined #gluster
19:55 Technicool joined #gluster
20:00 rwheeler joined #gluster
20:16 xdexter joined #gluster
20:16 xdexter I added my servers by hostname, and now one of them has changed its ip. I reconfigured /etc/hosts, but Gluster continues using the old ip. how do I fix it?
20:22 xdexter someone help me ?
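Two read-only checks that show how the pool actually recorded the renamed peer before anything is changed; whether it stored the hostname or the literal IP determines the fix:

    gluster peer status
    # glusterd's on-disk record, one small file per peer (uuid=, state=, hostname1= lines)
    cat /var/lib/glusterd/peers/*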
20:25 Technicool joined #gluster
20:36 mick271 joined #gluster
20:38 mick271 hello guys
20:38 mick271 as of today, what is best for small smiles, the gluster client or a nfs connection
20:38 mick271 small files*
20:45 TuxedoMan Both handle them well... Just depends what you're trying to accomplish, really.
20:46 dhsmith_ joined #gluster
20:56 badone joined #gluster
21:11 chirino joined #gluster
21:32 StarBeas_ joined #gluster
21:37 mmalesa joined #gluster
21:46 kedmison joined #gluster
22:20 dhsmith joined #gluster
22:24 recidive_ joined #gluster
22:35 _ilbot joined #gluster
22:35 Topic for #gluster is now Gluster Community - http://gluster.org | Q&A - http://community.gluster.org/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
22:35 penglish joined #gluster
22:36 jiffe98 joined #gluster
22:37 _pol joined #gluster
22:38 zykure joined #gluster
22:45 jiqiren joined #gluster
22:46 jiku joined #gluster
22:57 mmalesa joined #gluster
23:02 asias joined #gluster
23:08 jiku joined #gluster
23:42 dhsmith joined #gluster
23:45 markus joined #gluster
23:54 badone joined #gluster
