
IRC log for #gluster, 2013-07-31


All times shown according to UTC.

Time Nick Message
00:27 jmeeuwen joined #gluster
00:44 duerF joined #gluster
00:54 vpshastry joined #gluster
00:58 bala joined #gluster
01:02 chirino joined #gluster
01:14 lpabon_ joined #gluster
01:14 asias joined #gluster
01:24 _pol joined #gluster
01:29 kevein joined #gluster
01:36 bala joined #gluster
01:50 jebba joined #gluster
01:50 jmeeuwen joined #gluster
01:55 badone_ joined #gluster
01:56 recidive joined #gluster
02:09 yongtaof joined #gluster
02:12 yongtaof Hi when I run gluster volume quota limit-usage command the glusterd process crashes.
02:12 yongtaof Anyone know this issue?
02:12 yongtaof #4  0x0000003182214365 in data_destroy (data=0x7f26dd329b24) at dict.c:135
02:25 yongtaof p *unref_data
02:25 yongtaof $3 = {is_static = 0 '\000', is_const = 0 '\000', is_stdalloc = 0 '\000', len = 7, data = 0x30c98a0 "/lib64/libgcc_s.so.1", refcount = 0, lock = 1}
02:26 yongtaof seems the memory info of the dict is not correct len = 7 but data =  "/lib64/libgcc_s.so.1"?
02:26 hagarth joined #gluster
02:32 bharata joined #gluster
02:33 chirino joined #gluster
02:33 kevein joined #gluster
02:39 recidive joined #gluster
02:48 GabrieleV joined #gluster
02:51 harish joined #gluster
02:54 raghug joined #gluster
03:05 sgowda joined #gluster
03:09 kshlm joined #gluster
03:09 jag3773 joined #gluster
03:28 shubhendu joined #gluster
03:32 edong23 joined #gluster
03:48 shylesh joined #gluster
03:52 kevein joined #gluster
03:55 dusmant joined #gluster
03:57 badone_ joined #gluster
04:00 harish joined #gluster
04:01 raghug joined #gluster
04:02 itisravi joined #gluster
04:03 mohankumar joined #gluster
04:07 ndarshan joined #gluster
04:15 vpshastry joined #gluster
04:24 CheRi_ joined #gluster
04:32 hagarth joined #gluster
04:35 dhsmith joined #gluster
04:38 shruti joined #gluster
04:38 kevein joined #gluster
04:49 vpshastry2 joined #gluster
05:11 lalatenduM joined #gluster
05:16 vimal joined #gluster
05:26 deepakcs joined #gluster
05:26 yongtaof ?
05:27 toad yongtaof, lot of quotas ?
05:27 raghu joined #gluster
05:28 yongtaof p *this->members_list->next->next->next->value
05:28 yongtaof $6 = {is_static = 1 '\001', is_const = 0 '\000', is_stdalloc = 0 '\000', len = 8, data = 0x2fc7400 "/:300TB", refcount = 1, lock = 1}
05:29 aravindavk joined #gluster
05:29 toad jeez id love to have a quota of 300TB somewhere :D
05:29 yongtaof _dict_set unref previous quota in dict and crashes
05:30 toad well no idea anyway
05:30 yongtaof seems the data to be unreferenced is not valid
05:30 yongtaof $3 = {is_static = 0 '\000', is_const = 0 '\000', is_stdalloc = 0 '\000', len = 7, data = 0x30c98a0 "/lib64/libgcc_s.so.1", refcount = 0, lock = 1}
05:30 toad even though i need to continue my work on the quotas, they seem to suck presently :)
05:30 yongtaof len = 7
05:30 yongtaof but data is "/lib64/libgcc_s.so.1"
05:30 toad yongtaof, weird
05:31 yongtaof hard to reproduce in test env
05:31 toad whats the content of your quota.limit-set or whatever its called ?
05:31 dbruhn joined #gluster
05:31 yongtaof quota limit-usage / 300TB
05:32 yongtaof so the data is "/:300TB"
05:32 yongtaof p *this->members_list->next->next->next->value
05:32 yongtaof $6 = {is_static = 1 '\001', is_const = 0 '\000', is_stdalloc = 0 '\000', len = 8, data = 0x2fc7400 "/:300TB", refcount = 1, lock = 1}
05:33 yongtaof the crash happens because the _dict_set unref previous quota info
05:33 yongtaof that info is not valid
05:33 bulde joined #gluster
05:33 yongtaof p *unref_data
05:33 yongtaof $7 = {is_static = 0 '\000', is_const = 0 '\000', is_stdalloc = 0 '\000', len = 7, data = 0x30c98a0 "/lib64/libgcc_s.so.1", refcount = 0, lock = 1}
05:34 yongtaof len should be 21 not len = 7 ?
05:34 vijaykumar joined #gluster
05:36 bulde1 joined #gluster
05:36 vijaykumar joined #gluster
05:37 lalatenduM joined #gluster
05:40 vpshastry joined #gluster
05:46 bala joined #gluster
05:47 ndarshan joined #gluster
05:49 yongtaof thank you toad
05:50 yongtaof do you have any suggestions?
05:50 toad nope
05:50 toad im myself struggling to understand parts of quota system
05:51 yongtaof p *key@20
05:51 yongtaof $9 = "features.limit-usage"
05:52 yongtaof before set new quota the dict unref previous value(which is not valid)
05:52 yongtaof more info about this issue
05:52 yongtaof is
05:52 yongtaof the crash happens on the newly added bricks
05:54 yongtaof p *unref_data
05:54 yongtaof $10 = {is_static = 0 '\000', is_const = 0 '\000', is_stdalloc = 0 '\000', len = 7, data = 0x30c98a0 "/lib64/libgcc_s.so.1", refcount = 0, lock = 1}
05:54 yongtaof p *unref_data->data@21
05:54 yongtaof $12 = "/lib64/libgcc_s.so.1"
05:55 yongtaof why "/lib64/libgcc_s.so.1" is set to features.limit-usage previously?
05:56 yongtaof len = 7 is also incorrect
05:56 rastar joined #gluster
05:56 toad no idea dude
05:56 toad maybe its using a part of memory not memset()ed to \0
05:56 yongtaof maybe
05:57 yongtaof but the data is always  "/lib64/libgcc_s.so.1" not a random value every time crash happens
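
For anyone hitting the same crash, the fields yongtaof prints above can be pulled out of a glusterd core dump non-interactively; a minimal sketch, assuming a core file at /var/core/core.glusterd (the path is hypothetical, and the frame number is taken from the backtrace quoted earlier, so it may differ):

    gdb -q -batch \
        -ex 'bt' \
        -ex 'frame 4' \
        -ex 'print *data' \
        -ex 'x/24bx data->data' \
        /usr/sbin/glusterd /var/core/core.glusterd
    # frame 4 is data_destroy() per the backtrace above; comparing the printed
    # len against the raw bytes shows whether the value was overwritten before
    # the unref, which is what the len=7 vs. "/lib64/libgcc_s.so.1" mismatch suggests
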
05:58 yongtaof the good thing is only glusterd process crashes not glusterfsd
05:58 aravindavk joined #gluster
05:59 yongtaof so every time it crashes I just start it again
06:03 toad you have to check where its set in the memory before then
06:03 toad maybe its just another entry in the volume file
06:03 rjoseph joined #gluster
06:04 yongtaof ok I'll check the volume file now
06:15 yongtaof not sure what's the problem but it not happen again on my production server too
06:15 yongtaof thank you toad
06:15 toad for what ? i didnt solve your issue :)
06:16 yongtaof for your help there's nobody reply
06:16 yongtaof :)
06:16 _BuBU I just migrated from 3.3.2 to 3.4
06:17 _BuBU and I can no longer mount nfs mounts :(
06:18 _BuBU mount commands hangs until timeout
06:23 kanagaraj joined #gluster
06:29 Recruiter joined #gluster
06:30 dobber_ joined #gluster
06:33 ipalaus joined #gluster
06:33 ipalaus joined #gluster
06:38 ngoswami joined #gluster
06:38 _BuBU also seems I've some more issues :(
06:38 _BuBU ~> ls -l /data/svn/rslant/db/revprops/0/
06:38 _BuBU ls: cannot access /data/svn/rslant/db/revprops/0/0: Input/output error
06:38 _BuBU total 2
06:38 _BuBU -????????? ? ?     ?       ?            ? 0
06:38 _BuBU -r--r--r-- 1 10000 10000 115 Aug  7  2012 1
06:38 _BuBU this is a glusterfs mount point
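
The "?????????" entry with an Input/output error is the classic symptom of the replicas disagreeing about a file; a minimal diagnostic sketch, assuming a replica-2 volume (the brick-side path and volume name below are hypothetical):

    # run on each brick server against the same file:
    getfattr -m . -d -e hex /export/brick1/svn/rslant/db/revprops/0/0
    # mismatched trusted.gfid values between the bricks, or non-zero
    # trusted.afr.<volume>-client-* pending counters on both copies,
    # would explain the EIO seen through the mount
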
06:40 glusterbot New news from resolvedglusterbugs: [Bug 953887] [RHEV-RHS]: VM moved to paused status due to unknown storage error while self heal and rebalance was in progress <http://goo.gl/tw8oW>
06:44 vikumar joined #gluster
06:46 psharma joined #gluster
06:49 ctria joined #gluster
06:54 ricky-ticky joined #gluster
06:55 SynchroM joined #gluster
06:55 ekuric joined #gluster
06:56 vshankar joined #gluster
06:56 mooperd joined #gluster
06:56 raghug joined #gluster
07:03 sas joined #gluster
07:05 raghug joined #gluster
07:10 glusterbot New news from resolvedglusterbugs: [Bug 927146] AFR changelog vs data ordering is not durable <http://goo.gl/jfrtO>
07:10 hybrid5121 joined #gluster
07:11 saurabh joined #gluster
07:20 bulde joined #gluster
07:24 msvbhat joined #gluster
07:39 chirino joined #gluster
07:55 anush1 joined #gluster
07:57 puebele joined #gluster
07:58 asias joined #gluster
08:02 mrEriksson Hello
08:02 glusterbot mrEriksson: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
08:04 mrEriksson I think I've messed up my cluster volume somehow. Currently, it consist of only one single brick. Using du from the client says that about 55GB is in use, (which is correct) but looking at the space used on the actual brick, it is about 113GB. Anyone got any pointers on what to look for?
08:04 mrEriksson (The brick partition is dedicated to being a brick, so there is nothing else stored there)
08:06 mrEriksson Also, when looking at the brick, there are three files with a total of 55GB stored there, but nothing else except for the .glusterfs-directory, which in turn du says is 113GB
08:09 _BuBU I have a lot of small files using about 3TB of disk space, with a replica of 2 glusterfs nodes, what should be best values for cluster.background-self-heal-count ? atm this is 20 but I find that there is too much delta between both replicas :(
08:11 ndevos mrEriksson: the .glusterfs directory contains hard-links to the actual files on the bricks, that could maybe cause 'du' to count the disk-usage twice
08:12 mrEriksson ndevos: No, when looking at the brick, I just did ls and calculated the entire size by hand
08:13 mrEriksson Also, du seems to figure out that there are hardlinks too :) du for the whole brick also gives 113G
08:15 mrEriksson Hmm, I recently reinstalled this server and had a backup of the brick which I used to restore the cluster volume, worked just fine. But perhaps this has resulted in hardlinks in .glusterfs that does nog have an actual reference in the brick root?
08:15 ndevos mrEriksson: it is pretty easy to check, 'getfattr -m. -ehex -d /path/to/file/on/brick' will give you the gfid for that file, the .glusterfs directory contains hard-links for each gfid
08:15 vpshastry1 joined #gluster
08:15 puebele joined #gluster
08:16 ndevos mrEriksson: maybe your backup/restore does not take hard-links in account?
08:16 Norky joined #gluster
08:17 mrEriksson Hmm, I don't have getfattr on this machine it seems :-)
08:17 ndevos mrEriksson: so, each hard-link under the .glusterfs directory, should have the same inode# (ls -li) as the usable filename on the brick
08:17 mrEriksson ndevos: Perhaps.. Is there any way to get gluster to sort this out?
08:18 ndevos not sure, it is the other way around of how healing works (create a hardlink from the .glusterfs directory to the brick)
08:19 ndevos mrEriksson: you could do that manually though, for 3 files thats doable :)
08:19 mrEriksson Well, that would probably work too, since that would give me valid entries on the gluster client to delete
08:19 mibby joined #gluster
08:20 mrEriksson I've been looking in the .glusterfs directory, seems to be lots of stuff there, do I dare to touch it without destroying everything? :)
08:21 ndevos not really, you only want to delete the stale hard-links
08:21 ndevos and you can find those based on the gfid with the getfattr command
08:23 mrEriksson So the .glusterfs directory contains nothing but hardlinks to the actual files stored on the brick?
08:23 ndevos there are some other files as well, but that depends on the type and features of the volume
08:24 ndevos and, directories on the brick have a symlink named after their gfid, pretty similar to the hardlink for files
08:27 mrEriksson Hmm, my glusterfs-directory contains 13 other directories, which in turn contains other directories, how would I know what to remove?
08:28 ndevos mrEriksson: that is correct, the gfid used as symlink/hardlink results in a directory <1st-gfid-part>/<2nd-gfid-part>/<full-gfid>
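
Putting ndevos's explanation together, the gfid xattr of a brick file translates directly into its .glusterfs hard-link path; a minimal sketch (brick path and file name are hypothetical):

    BRICK=/data/brick1
    F="$BRICK/some/file"
    GFID=$(getfattr -n trusted.gfid -e hex --only-values "$F" | sed 's/^0x//')
    # re-insert the dashes of the canonical uuid form: 8-4-4-4-12
    UUID=$(echo "$GFID" | sed -E 's/(.{8})(.{4})(.{4})(.{4})(.{12})/\1-\2-\3-\4-\5/')
    ls -li "$F" "$BRICK/.glusterfs/${UUID:0:2}/${UUID:2:2}/$UUID"
    # both entries should report the same inode number, since they are hard links
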
08:29 mrEriksson Ah, got it :)
08:31 mrEriksson So, really, I would just figure out which of those directories contains files, look for files that doesn't have a counterpart in the brick root, and remove them?
08:33 ndevos yes, after you calculate the size of them and are sure it matches all up
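
One way to spot the stale entries without walking everything by hand: a gfid file whose brick-side counterpart was lost keeps only the one link under .glusterfs, so its link count drops to 1. A sketch, assuming the brick root is /data/brick1 (hypothetical):

    # only look inside the two-level hex directories, to skip gluster's own
    # bookkeeping files; verify each hit before deleting anything
    find /data/brick1/.glusterfs/??/?? -type f -links 1 -print
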
08:33 mrEriksson :P
08:33 mrEriksson I'll give it a try, please be prepared for some crying in a couple of minutes :-)
08:34 ahatfiel joined #gluster
08:35 toad http://www.bpaste.net/show/56QQU29PQ0vj2kKWG0uI/ jesus christ the number of loops
08:35 glusterbot <http://goo.gl/y1CuOt> (at www.bpaste.net)
08:35 toad maybe we should replace linked lists by a dict or something thats more like an hash table
08:37 shireesh joined #gluster
08:37 satheesh joined #gluster
08:39 bala1 joined #gluster
08:40 ahatfiel joined #gluster
08:40 ahatfiel left #gluster
08:44 mrEriksson ndevos: This actually worked quite well
08:44 vpshastry1 joined #gluster
08:45 ndevos mrEriksson: cool!
08:45 mrEriksson Got one strange thing though
08:45 mrEriksson On the gluster client, du still says 55G
08:46 mrEriksson but du for the .glusterfs-dir says 62G
08:46 kanagaraj joined #gluster
08:46 ndevos maybe you can figure out which file(s) take up that 7GB?
08:46 mrEriksson Already done
08:47 mrEriksson That's the strange part
08:47 zetheroo joined #gluster
08:47 shubhendu joined #gluster
08:47 zetheroo I am trying to probe a host and am getting the following:
08:47 mrEriksson Only one file in that directory, du says 51G, ls says 43G
08:47 zetheroo Probe unsuccessful
08:47 zetheroo Probe returned with unknown errno 107
08:48 dusmant joined #gluster
08:48 zetheroo both hosts can ping each other by IP and by hostname
08:48 sas zetheroo, check for iptables or firewall
08:49 ndevos mrEriksson: 1000 bytes per kb vs 1024 bytes?
08:49 zetheroo sas: this is the second setup I am doing in identical conditions ... and with the first setup I did not touch the iptables nor the firewall
08:49 * ndevos did not try to calculate those GBs back and forth though, just guessing
08:50 mrEriksson ndevos: No, should get the same effect on the other files too if that was the case. I already went down that road :-)
08:50 zetheroo is there such a thing as unprobing or de-probing a peer?
08:51 sas zetheroo, gluster peer detach
08:51 zetheroo ok
08:51 mrEriksson Ohwell, I guess there is a perfectly valid reason for this somewhere :-)
08:51 aravindavk joined #gluster
08:51 mrEriksson ndevos: Thanks alot for your help! I appreciate it!
08:52 mrEriksson (Always fun to learn a bit more about the inner works of stuff too)
08:54 shruti joined #gluster
08:56 ndevos mrEriksson: you're welcome, glad at least the main question got resolved :)
08:56 mohankumar joined #gluster
09:00 zetheroo when I do "gluster peer status" I get "No peers present" ... so it's not part of any gluster atm ....
09:01 sas zetheroo, yes
09:01 zetheroo does it make a difference from which of the 2 hosts I am probing from?
09:02 sas zetheroo, no
09:04 zetheroo ok so I have these 2 hosts, Host A and Host B .... I have created a gluster replica 2 on Host A ... now I am trying to get Host A to peer up with Host B ... Host B already has a brick ready to roll ....
09:04 zetheroo Both Hosts A and B have each others IP address and hostname in the /etc/hosts file
09:05 zetheroo From Host A I do "gluster peer probe host2" and get the "Probe returned with unknown errno 107" message
09:06 zetheroo from Host B I do "gluster peer probe host1" and it just sits there .....
09:10 zetheroo if I cancel with CTRL+C and then do "gluster peer probe host1" again it says: Probe on host neptune port 24007 already in peer list
09:10 zetheroo so then I perform "gluster peer status" which shows me this:
09:10 zetheroo Number of Peers: 1
09:10 zetheroo Hostname: neptune
09:10 zetheroo Uuid: 00000000-0000-0000-0000-000000000000
09:10 zetheroo State: Establishing Connection (Disconnected)
09:11 vpshastry joined #gluster
09:15 lalatenduM joined #gluster
09:17 vimal joined #gluster
09:19 hagarth joined #gluster
09:21 kanagaraj joined #gluster
09:25 guigui1 joined #gluster
09:25 lalatenduM joined #gluster
09:34 zetheroo so if both hosts can communicate to each other why can't they peer up?
09:43 Norky compare /var/lib/glusterd/glusterd.info on both machines
09:45 zetheroo Norky: you talking to me?
09:45 Norky yes
09:46 toad is it hard to ask gluster to replicate a new  config file from one server to another ?
09:48 zetheroo Norky: host B has no glusterd directory in /var/lib ....
09:48 Norky that's odd...
09:49 zetheroo agree
09:49 Norky what distro/version/gluster version?
09:49 zetheroo Ubuntu 12.04 LTS ...
09:49 zetheroo glusterfs 3.2.5
09:50 Norky on both machines?
09:51 zetheroo that is on host 2
09:51 dusmant joined #gluster
09:51 zetheroo both hosts have Ubuntu 12.04
09:51 zetheroo for some reason when I do gluster -V on host 1 it doesn't work
09:51 zetheroo unrecognized word: -V (position 0)
09:52 Norky query the version with apt-get/dpkg
09:52 Norky I think you  are running glusterfs 3.3 on host A
09:53 zetheroo yes, I am sure I am ...
09:54 Norky you are sure you are running glusterfs 3.3 on host A?
09:54 Norky that wont work. The servers shoudl be at the same version.
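
A quick way to verify the mismatch Norky suspects, using the hostnames from this exchange (the exact package name may differ per PPA):

    for h in host1 host2; do
        ssh "$h" 'glusterfs --version | head -1; dpkg -l "glusterfs*" | grep ^ii'
    done
    # peer probe needs matching major versions; a 3.2 glusterd cannot join a
    # 3.3 one, which is what surfaces here as errno 107 / a stuck probe
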
09:57 Norky http://vbellur.wordpress.com/2012/05/31/upgrading-to-glusterfs-3-3/
09:57 glusterbot <http://goo.gl/qOiO7> (at vbellur.wordpress.com)
09:59 Norky 3.2 and 3.3 are quite different
09:59 kanagaraj joined #gluster
10:01 bala joined #gluster
10:01 zetheroo ok ... updating host B now
10:02 Norky note that 3.4 is out now. IT's possible to mismatch 3.3 and 3.4 - not recommended though
10:02 shubhendu joined #gluster
10:02 aravindavk joined #gluster
10:09 zetheroo ok with glusterfs 3.3.2 on both hosts probing works with no issue
10:09 zetheroo thanks
10:10 Norky no worries
10:10 Norky is this a new glusterfs setup?
10:11 chirino joined #gluster
10:11 Norky if you're setting up glusterfs for the first time, do consider v3.4
10:13 CheRi joined #gluster
10:20 toad anyone has any idea on the quota size limit is passed onto the client ?
10:20 toad on your client you do a lgetxattr() on trusted.limit.list but the value is different than the one returned on the server
10:21 toad well i have no idea how it works client side, its probably using the same long configuration string features.limit-usage
10:21 hagarth toad: on the client, it is the aggregated value from all bricks/servers.
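
What toad describes can be reproduced from a shell rather than calling lgetxattr() directly; a sketch, assuming the xattr name he mentions is queryable through a FUSE mount at /mnt/vol (hypothetical path):

    getfattr -n trusted.limit.list -e hex /mnt/vol
    # per hagarth, the value returned here is the aggregate the client-side
    # quota translator builds from all bricks, so it need not match any single
    # server's copy byte for byte
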
10:22 zetheroo Norky: it's our second gluster setup ... the PPA we are using does not seem to have 3.4 yet ...
10:23 Norky zetheroo, you could try using: http://download.gluster.org/pub/gluster/glusterfs/LATEST/Ubuntu/Ubuntu.README
10:23 glusterbot <http://goo.gl/UOCkBA> (at download.gluster.org)
10:23 zetheroo ok thanks
10:23 hagarth toad: as I told you a few days earlier, quota is being re-vamped. You can find more activity on this gerrit project: http://review.gluster.org/#/q/status:open+project:glusterfs-quota+branch:quota-improvements,n,z
10:23 glusterbot <http://goo.gl/L6vtJZ> (at review.gluster.org)
10:25 toad http://www.bpaste.net/show/ictxFcho59LQrRjOEODX/
10:25 glusterbot <http://goo.gl/cEhMaF> (at www.bpaste.net)
10:25 toad hagarth, yeah i read it
10:26 toad but its not sufficient, not answering some issues and i cant wait for it before implementing a working patch
10:26 hagarth toad: would you be interested in early testing of this feature?
10:26 toad and no i dont want to work on that branch
10:26 toad well not since it doesnt answer my problems
10:26 hagarth toad: what are your problems?
10:26 toad i need an efficient quota system that can handle at least 20,000 entries without a sweat
10:27 zetheroo I have an interesting issue here ... one of the hosts in our other gluster is experiencing very slow read/write speed on it's gluster brick .... while the other host has no performance delays on it's brick
10:27 toad but the mechanics are quite horrible (looping several times on linked list at multiple occasions, reloading the entire configuration quite often)
10:27 toad and it doesnt answer my current question anyway which is : how the client get the quota limit value :)
10:28 toad i only patched the server so far, maybe i need to do the same with client (since it still uses limit-usage value)
10:28 toad but its weird, im wondering how / where its being passed to the client
10:29 ctria joined #gluster
10:29 toad i thought the lgetxattr() would be intercepted by fuse, sent to the server for an answer (a loop on quota_limits typedef if local->limit aint set) and sent back to the client
10:30 zetheroo I would reboot the host which is having the issue but I dread doing that as it seems that when on brick in the gluster goes down the other brick becomes read-only and then we have a whole new set of issues
10:32 toad hagarth, but yeah i would be interested if some additional things are added
10:32 toad (interested in early testing)
10:38 manik joined #gluster
10:38 manik joined #gluster
10:41 mohankumar joined #gluster
10:54 toad no answer hm ? :)
10:54 hagarth joined #gluster
10:54 toad needs some crazy developer
10:59 kkeithley joined #gluster
11:05 partner is there anything i can do to speed up fix-layout? i added fourth server and brick from it to a distributed volume and fix-layout will probably take some 200 hours or so for the 65k + 65k directory structure?
11:06 mohankumar joined #gluster
11:06 partner it did took quite a time already with third server, let me check how long..
11:08 partner hmm some 4,5 days
11:08 toad nice
11:08 toad i sure hope you can do it manually
11:09 partner 131588 directories on that volume..
11:09 toad question to developers: whats the best use of api in glusterd-volgen.c to write its own configuration file (i wanna store my quotas in a different file)
11:10 partner and it takes some maybe 5 sec per directory on average, maybe its limited a bit so that it would not load too much the live system..
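
fix-layout runs through the rebalance machinery, so its progress (directories scanned per node) can at least be watched while it grinds through those 131k directories; a sketch with a hypothetical volume name:

    gluster volume rebalance myvol fix-layout start
    gluster volume rebalance myvol status     # shows per-node scanned counts
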
11:10 toad i miss the days where devs were around, maybe i should get on the ML
11:11 partner aren't they a bit on different timezone?
11:11 toad true
11:11 toad 5 am :D
11:12 sgowda joined #gluster
11:12 hagarth toad: what time zone are you in?
11:13 toad GMT+1 baby !
11:13 partner GMT+3 here
11:13 partner i recall.. :)
11:13 partner brains still on holiday mode
11:14 partner i couldn't care less about the time then :)
11:14 hagarth toad: here are some of the changes in new quota
11:14 hagarth a) no quota directories mentioned in volfiles (it will be stored separately)
11:14 toad cant you just answer about the current way of how quotas work ?
11:14 toad ah really ?
11:14 toad didnt see that part
11:15 hagarth toad: hear me out, will get back to your questions :D
11:15 toad alright alright :)
11:15 hagarth toad: b) all enforcement happening on the server
11:15 hagarth c) changes to handle nfs clients better
11:15 manik joined #gluster
11:16 hagarth d) no runtime / in memory list of configured quotas
11:16 toad oh
11:16 toad so no more stupid rereading config file
11:16 hagarth e) better mechanism to list quotas from CLI
11:16 toad i see a quotad ?
11:17 toad ah its brand new stuff
11:17 hagarth quotad aggregates values on demand
11:17 hagarth it doesn't store/parse any configuration
11:17 toad hm ?
11:18 hagarth this mechanism should enable us to scale better wrt quotas
11:18 toad sounds exciting
11:18 toad memory + differnet config file for quota
11:18 hagarth toad: once an early release is available, i will notify you
11:18 toad oh cool
11:19 toad there is no code yet for quotad ?
11:19 toad did you improve the search through linked lists as well ?
11:19 hagarth toad: there is .. the code is being re-worked on
11:19 toad oh ok
11:19 hagarth toad: there is no linked list at all in the new scheme ;)
11:19 toad cant wait to try it out then
11:19 toad ah THANK GOD
11:19 toad so hash tables all the way I take it ?
11:20 toad i was wondering why you guys wrote a dict type to use a damn linked list right after :)
11:20 toad its weird, those changes werent there 1 week ago, am I wrong ? :)
11:20 hagarth toad: no hash tables too .. since there's no in memory representation (we might consider a in-memory key value store later)
11:21 toad no ? so how you do retrieve quotas ?
11:21 toad the current implementation: http://www.bpaste.net/show/jtBgUzKABmNrWbM5YKJg/
11:21 glusterbot <http://goo.gl/RCMcB> (at www.bpaste.net)
11:21 hagarth toad: yeah, we are talking about 3 different implementations of quota .. what you have is v1, the one we talked about a week is v2, this is v3.
11:21 harish_ joined #gluster
11:21 toad oh yeah
11:21 toad did I motivate it ? :P
11:22 hagarth toad: yes, we heard you :D
11:22 toad really ? haha :D
11:22 toad i should receive a freakin medal
11:23 toad i gave the replicate name to the mirroring mechanism I remember too :P
11:23 hagarth toad: yeah, gluster-quota-scalability-motivater or something like that :D
11:23 toad well in that case you rock
11:23 toad but better be warned
11:23 toad im gonna test against 200K quotas
11:23 toad then 1M just for fun
11:24 hagarth 200K could still be out of bounds for us .. but let us give it a run!
11:24 kkeithley toad: prepare to be disappointed
11:24 toad well so far i will need 12,000 but that might increase fast
11:24 toad 200K is my top limit and i wont reach it before long thats for sure
11:25 hagarth toad: do you have 12k user directories?
11:25 toad but my way of doing things scalable is to test them with huge numbers, so I can see right away problems in design
11:25 toad hagarth, yup
11:25 toad Im so crazy I was thinking of putting mysql on gluster too, and why not put quotas as well on it
11:26 toad well i need to try it first, not sure whats the behavior of mysql in that case :D
11:26 ctria joined #gluster
11:27 hagarth toad: let us know how it goes :D
11:28 toad well I cant wait for your release
11:28 toad i was thinking to finish the patch for tomorrow and start the migration
11:28 toad but I guess i will wait for your patches, when do you think it could be tested ?
11:29 hagarth toad: next week hopefully
11:30 hagarth if there's anything earlier, will notify you
11:30 toad cool
11:31 toad next week i might start my yearly european road trip
11:31 shubhendu joined #gluster
11:31 hagarth wow cool
11:33 toad yeah need to visit my family and customers
11:33 toad but waow guys thank you so much
11:33 toad is avati working on that stuff ?
11:34 hagarth avati's not coding, but he is helping with the design.
11:36 satheesh joined #gluster
11:42 toad that lazy punk :P
11:57 CheRi joined #gluster
12:00 y4m4_ joined #gluster
12:01 manik joined #gluster
12:01 tziOm joined #gluster
12:04 satheesh joined #gluster
12:04 ctria joined #gluster
12:20 kkeithley1 joined #gluster
12:23 nightwalk joined #gluster
12:23 hagarth joined #gluster
12:25 manik joined #gluster
12:26 theron joined #gluster
12:29 shubhendu joined #gluster
12:45 aliguori joined #gluster
12:51 edward1 joined #gluster
12:52 bet_ joined #gluster
12:54 sprachgenerator joined #gluster
12:59 vimal joined #gluster
13:02 vpshastry joined #gluster
13:05 hybrid5122 joined #gluster
13:18 C^^ joined #gluster
13:19 C^^ left #gluster
13:20 deepakcs joined #gluster
13:21 plarsen joined #gluster
13:31 theron joined #gluster
13:34 hybrid512 joined #gluster
13:47 kaptk2 joined #gluster
13:57 asias joined #gluster
13:57 bugs_ joined #gluster
14:02 shubhendu joined #gluster
14:05 _pol joined #gluster
14:06 SynchroM joined #gluster
14:09 dusmant joined #gluster
14:13 guigui1 joined #gluster
14:26 _pol joined #gluster
14:33 theron joined #gluster
14:40 jthorne joined #gluster
14:41 harish_ joined #gluster
14:43 _pol joined #gluster
14:44 jthorne joined #gluster
14:46 _pol_ joined #gluster
14:49 _pol joined #gluster
14:49 neofob joined #gluster
14:50 duerF joined #gluster
14:52 _pol joined #gluster
14:56 recidive joined #gluster
14:58 saurabh joined #gluster
15:04 zaitcev joined #gluster
15:06 madphoenix joined #gluster
15:07 daMaestro joined #gluster
15:08 madphoenix Hi all.  I have some questions about the gluster nfs service.  We're looking to use it because it exhibits much better small block performance, but I'm not sure what the availability scheme is there.  When you connect via a nfs to a single server, does the client only interact with that server for NFS operations, or is it doing some sort of pNFS and talking to all bricks like the native fuse client?
15:12 ndevos madphoenix: no, magic there, all goes to the server that you used for mounting (unless that is a virtual IP, in which case it servers can change after a re-connect)
15:13 manik joined #gluster
15:16 bala joined #gluster
15:17 vpshastry left #gluster
15:18 zetheroo left #gluster
15:22 madphoenix ndevos:  thanks.  anybody know if there is there a solution in sight for the small synchronous write performance problems in the fuse client?
15:28 manik joined #gluster
15:35 rjoseph joined #gluster
15:39 hagarth joined #gluster
15:44 manik1 joined #gluster
15:45 manik1 joined #gluster
15:47 chirino joined #gluster
15:51 rcheleguini joined #gluster
15:53 dhsmith joined #gluster
16:00 raghug joined #gluster
16:02 mooperd_ joined #gluster
16:09 rcheleguini joined #gluster
16:11 rcheleguini_ joined #gluster
16:11 sprachgenerator joined #gluster
16:12 dhsmith joined #gluster
16:12 piotrektt joined #gluster
16:12 piotrektt joined #gluster
16:13 rcheleguini joined #gluster
16:21 Mo_ joined #gluster
16:24 zombiejebus joined #gluster
16:25 kanagaraj joined #gluster
16:27 rwheeler joined #gluster
16:31 _pol joined #gluster
16:32 kanagaraj_ joined #gluster
16:37 Technicool joined #gluster
16:39 kkeithley joined #gluster
16:46 jebba joined #gluster
16:55 plarsen joined #gluster
17:02 thomaslee joined #gluster
17:05 shruti joined #gluster
17:06 bulde joined #gluster
17:08 duerF joined #gluster
17:10 guigui1 left #gluster
17:13 kkeithley joined #gluster
17:18 jthorne joined #gluster
17:26 wushudoin joined #gluster
17:32 _pol Are there some recommended opts for mounting a gluster brick?
17:32 wushudoin left #gluster
17:34 TuxedoMan joined #gluster
17:35 pea_brain joined #gluster
17:35 vpshastry joined #gluster
17:35 shruti joined #gluster
17:53 pea_brain joined #gluster
18:08 semiosis https://github.com/unbit/uwsgi-docs/blob/master/GlusterFS.rst
18:08 glusterbot <http://goo.gl/xsTKkT> (at github.com)
18:09 pea_brain joined #gluster
18:13 devopsman left #gluster
18:13 kkeithley joined #gluster
18:14 samppah _pol: xfs? noatime,inode64 and nobarrier if you have battery backed raid controller
18:18 kkeithley joined #gluster
18:22 manik joined #gluster
18:27 pea_brain joined #gluster
18:28 Recruiter joined #gluster
18:42 pea_brain joined #gluster
18:42 ipalaus joined #gluster
18:42 ipalaus joined #gluster
18:46 pea_brain joined #gluster
18:49 chirino joined #gluster
18:54 _pol samppah: all of those for battery-backed or just that last option for battery-backed?
18:55 semiosis just the last
18:56 JoeJulian semiosis: Yeah, I've been following that on gluster-devel
18:58 _pol are those for mount options or mkfs.xfs options or both?
19:01 JoeJulian inode64 is really only any benefit if your brick exceeds a certain size... I think it's 1Tb but it might be 2... I don't remember off the top of my head. And those are mount options.
19:02 _pol My bricks will be limited to 3TB
19:02 _pol But they will all be 3TB
19:02 JoeJulian Then it's useful. :)
19:02 redragon_ joined #gluster
19:02 redragon_ hello
19:02 glusterbot redragon_: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
19:02 _pol Ok, does the "-i size=" matter a lot?
19:03 JoeJulian http://xfs.org/index.php/XFS_FAQ#Q:_What_is_the_inode64_mount_option_for.3F
19:03 glusterbot <http://goo.gl/tT4wio> (at xfs.org)
19:03 JoeJulian _pol: According to recent analysis, no.
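
Pulling the thread together, a sketch of a brick prepared the way samppah and JoeJulian describe (device and mount point are hypothetical; keep nobarrier only if the RAID controller cache is battery- or flash-backed):

    mkfs.xfs -i size=512 /dev/sdb1    # larger inodes are the common advice for
                                      # gluster xattrs, though per the analysis
                                      # above the default may do just as well
    mkdir -p /bricks/brick1
    mount -t xfs -o noatime,inode64,nobarrier /dev/sdb1 /bricks/brick1
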
19:03 redragon_ so I am mounting useing fusefs against a node I know is not running but mount does not return an error code, it appears to succeed but nothing is actually mounted, thoughts?
19:05 JoeJulian redragon_: What's not running on the server? The brick service? The management service? both? What's your mount command?
19:06 redragon_ glusterd
19:06 redragon_ mount -t glusterfs node:VolTest /mnt/test
19:07 redragon_ for background this is gluster installed on rhel from epel
19:07 JoeJulian Is "node" the hostname of that server or is it an rrdns entry?
19:07 JoeJulian @yum repo
19:07 glusterbot JoeJulian: The official community glusterfs packages for RHEL (including CentOS, SL, etc), Fedora 17 and earlier, and Fedora 18 arm/armhfp are available at http://goo.gl/s077x. The official community glusterfs packages for Fedora 18 and later are in the Fedora yum updates repository.
19:07 redragon_ node is the hostname
19:09 JoeJulian Hmm, then that is odd. I would expect it to fail since it can't get it's configuration if glusterd isn't running. Check your client log for clues.
19:09 pea_brain joined #gluster
19:10 JoeJulian I mention the official yum repo because the epel stuff is old and is no longer maintained.
19:11 redragon_ ahhh
19:11 xymox joined #gluster
19:11 redragon_ then tell you what, I will redo everything and test using your recommended repo and see if the problem replicates itself
19:12 JoeJulian Sounds like a plan.
19:15 manik joined #gluster
19:16 ricky-ticky joined #gluster
19:16 pea_brain joined #gluster
19:17 ThatGraemeGuy joined #gluster
19:19 Gilbs1 left #gluster
19:21 spligak joined #gluster
19:23 daMaestro joined #gluster
19:31 awayne joined #gluster
19:32 awayne hello all
19:33 awayne i'm getting ready to upgrade to 3.4. does anyone know if  UFO in 3.4 still uses fuse? was it updated to use libgfapi?
19:44 pea_brain joined #gluster
19:58 spligak Are there any tunables for the fuse driver (or, "glusterfs native client") aside from the mount options?
20:00 pea_brain joined #gluster
20:00 johnmorr joined #gluster
20:01 spligak On a single mount point, I'm able to get 500-600 IOPS to a single distributed-only volume. As that's not the capacity of the underlying drives, I can easily mount the same volume at a second location and get another 500-600 IOPS.
20:01 spligak While that's fine - I can mount the same volume a bunch of times and randomly select one from my app code to write to - that seems kinda silly.
20:02 spligak Is there any way to increase the performance (client-side) of each mount? Or am I sorta tapped out in terms of performance?
20:03 samppah spligak: what version you are using?
20:05 spligak samppah, glusterfs 3.2.5 built on Jan 31 2012 07:39:59
20:06 recidive joined #gluster
20:06 spligak samppah, on Ubuntu 12.04
20:08 SynchroM joined #gluster
20:08 spligak samppah, has there been significant performance improvements since then in the fuse driver?
20:08 redragon_ JoeJulian, just so you know, a clean install using your repo does give a failed to mount error
20:09 _pol joined #gluster
20:09 jesse joined #gluster
20:10 samppah spligak: i have seen lots of improvements with 3.4.. also i'm not sure what's the status of fuse in ubu tu
20:10 redragon_ hypothetical question here, you have your mount point/directory assigned to gluster (ie: /mnt/glusterfs) and thats whats plugged into the volume
20:10 JoeJulian redragon_: intersting.
20:10 samppah *ubuntu
20:10 redragon_ will it break anything to alter data in that mount point directly or should one always use a mount (ie nfs/fuse) to interact with gluster
20:10 JoeJulian Always use the mount.
20:11 redragon_ thanks, figured as much but wanted to confirm.  my guess is that meta data wouldn't get tracked if you wrote directly to it
20:11 JoeJulian That too... Most of the magic is done client side.
20:11 aliguori joined #gluster
20:14 redragon_ okay one more theoretical, if I have 4 nodes, each node has a raid 0 for disk infrastructure, if a raid 0 breaks on node 2 will gluster remove that node after detecting issues in node 2?
20:30 pea_brain left #gluster
20:34 plarsen joined #gluster
20:35 mibby- joined #gluster
20:46 redragon_ so I killed the underlying info on a node, rebooted it, it can't mount the device that the data was on because its a broken raid 0 yet glusterd starts without issue and gluster volume info VolTest shows all bricks up without any issue
20:46 redragon_ however node2 doesn't have any data on it, shouldn't gluster replica detect this?
20:47 redragon_ its replicating it back out
20:53 spligak Trying out the NFS support now. Any reason why an NFS mount would hang when attempting to mount? I've set the "nfs.disable" flag to "off" - figured it would be fairly plug-n-play.
20:55 dbruhn what version of NFS client are you trying to use
20:55 dbruhn last I remember v4 was not supported
20:55 dbruhn but that might of changed in 3.4
20:59 spligak dbruhn, looks like the default for 12.04 is v4
20:59 spligak dbruhn, was trying to get it to force v3 proto - that didn't seem to help
21:00 spligak there's a flurry of network activity on the storage servers, but it just hangs client side. and yeah, using 3.2 so that might be an issue.
21:00 dbruhn I have no idea of 3.2 issues, I am on 3.3.1 at the moment
21:01 spligak dbruhn, yeah. I think I need to rip this down and stand up the latest stable. I feel like a lot of these issues would just go away.
21:02 dbruhn might for sure, are you using ethernet or IB
21:03 dbruhn are you housing data on the system now?
21:04 spligak no, just a smallish ~30 brick test cluster. trying to get higher IOPS-per-mount than the 600 or so I'm seeing now. figured NFS would be worth a shot.
21:04 spligak eryc, sorry - ethernet
21:04 spligak *er
21:04 dbruhn hfs is better for small file I/O
21:04 dbruhn nfs
21:04 dbruhn how many disks per brick
21:06 spligak just one per brick - was going to play with striping across a couple drives or something later. no idea what's optimal there. open to suggestions.
21:07 dbruhn oh you are way overshooting processing and ram per i/o then and undercutting yourself on disk i/o
21:08 dbruhn I am running these 2u serves with 12 15k SAS drives in them divided up in to two raid 5's
21:08 dbruhn the hardware raid aggregates the i/o nicely
21:09 dbruhn well the ram/proc statement is a complete assumption too
21:09 dbruhn but you can get used ddr infiniband gear for pretty cheap and it will improve your i/o dramatically
21:09 dbruhn how many servers are you running these 30 bricks on
21:09 _pol Why in the gluster community docs does it recommend to use '-i size=512' on the xfs formatting when it doesn't really help?
21:10 Technicool _pol, there was concern about not having sufficient inode space for the gluster xattrs
21:10 _pol Technicool: but that has proved to not be the case?
21:11 spligak dbruhn, 3 right now - but I haven't allocated everything just yet. these are a little larger than I'd hope to use. 24-drive, 100gb+ memory, couple 8-core chips.
21:11 Technicool _pol, i hadn't heard about it before JoeJulian mentioned it earlier
21:11 Technicool that it wasnt considered an issue anymore i mean
21:12 dbruhn spligak: my 12 drive chassis are 16GB of ram, two 4 core 2.53 xeon proc's, don't remember which ones and I am running my application on the same servers that needs the shared storage
21:13 spligak dbruhn, if I'm running a replica 3, I'm a little concerned about introducing parity or 1+0. would it not make sense to band these together in a couple large stripes and expose those as bricks?
21:13 spligak that might not make sense for recovery. again, haven't dealt with it operationally.
21:14 dbruhn when running replication your writes become slower and your reads should improve, just a point of interest
21:15 dbruhn I have raid 5 on mine so I can lose a drive and can rebuild the raid on the fly, easier than trying to replicate the data back to a brick if it's a ton of data
21:15 dbruhn i am in a replica 2 dht environment
21:16 georgeh|workstat joined #gluster
21:16 dbruhn the gluster stripping will slow things down considerably
21:17 spligak dbruhn, oh, I was thinking about stripping using the local raid card. take some of the load off gluster in that regard.
21:17 georgeh|workstat joined #gluster
21:17 spligak fewer, larger, faster bricks? would that help?
21:18 spligak fewer on a per-server basis, I guess. overall it'll end up being quite a few.
21:18 dbruhn Yeah, I use hardware raid, I think it's a balancing act.
21:18 dbruhn don't want to get to big or small
21:18 dbruhn what are your file characteristics
21:19 spligak many, many, tiny files. <= 16KB on average
21:20 JoeJulian Technicool, _pol: http://www.mail-archive.com/gluster-users@gluster.org/msg11888.html
21:20 glusterbot <http://goo.gl/kGBFjI> (at www.mail-archive.com)
21:21 spligak dbruhn, which sort of mount are you using to jump into the cluster? nfs or native?
21:21 dbruhn spligak: just sent you a PM
21:22 JoeJulian ~nfs | spligak
21:22 glusterbot spligak: To mount via nfs, most distros require the options, tcp,vers=3 -- Also an rpc port mapper (like rpcbind in EL distributions) should be running on the server, and the kernel nfs server (nfsd) should be disabled
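
Spelled out, the mount the factoid describes looks like this (server, volume and target directory are hypothetical):

    mount -t nfs -o tcp,vers=3 server1:/myvol /mnt/myvol
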
21:23 spligak JoeJulian, thanks! I'll confirm all that.
21:23 dbruhn I am using the native fuse client
21:23 dbruhn with small files the network is a huge deal
21:23 dbruhn what kind of spindles are you using?
21:23 dbruhn and what are you i/o characteristics, heavy in read or write?
21:25 spligak oops, dbruhn copy/paste from pm: actually hitting gluster will be almost exclusively writes. reads on app-level cache miss only
21:25 dbruhn why replica 3 then?
21:25 dbruhn just three computers sharing storage?
21:27 JoeJulian wrt "small files" I think semiosis put it very eloquently yesterday: "replication is sensitive to latency. the penalty is amortized over larger data ops, but with tiny data ops just checking if the replica metadata is in sync dominates the operation"
21:27 spligak oh no, this is just a test cluster, replica 3 because of the guarantee we're advertising is all. especially if I'm going to recommend large non-redundant bricks or something.
21:27 spligak any production cluster would be much larger.
21:28 dbruhn I am with Joe here, 1gb ethernet has a latency of 7200 nanoseconds, ddr IB will be about 140 nano seconds, it makes all the difference in the world for small file operations, and qdr is 100 nanoseconds
21:28 redragon_ is there a way to force a replication test, say after you replace disks in a brick and want to replicate the data back
21:29 dbruhn 10gb ethernet is still like 2400 nanoseconds
21:29 JoeJulian gluster volume heal $vol full
21:29 dbruhn that really adds up
21:29 redragon_ that is an awesome command name
21:30 spligak dbruhn, I can recommend and get 10gb - we don't run any ib that I'm aware of, unfortunately. that's a pretty strong argument for it, however.
21:30 dbruhn well 10GB is still a 3 fold improvement over 1GB
21:30 recidive joined #gluster
21:30 dbruhn I would suggest at least for testing pick up some used 20GB DDR IB stuff. I just bought 10 cards, cables, and a 24 port switch for about 900 on the used market after shipping
21:31 dbruhn new cables
21:31 dbruhn jclift is a good resource for finding that stuff, he is always shopping it and makes great suggestions
21:32 dbruhn also what are you using for hard disks?
21:32 spligak JoeJulian, the replica metadata verification happens for all read/write ops, correct?
21:32 redragon_ JoeJulian, thats failing and nothing in messages or the log file in bricks
21:33 JoeJulian spligak: It happens on lookup(). If you keep the file open, it's not an issue.
21:33 JoeJulian redragon_: I assume you replaced $vol with your volume name?
21:33 dbruhn spligak, also I would suggest reducing your replica count, it slows your writes down every replica you add
21:34 * JoeJulian runs replica 3
21:34 spligak dbruhn, seagate 15K SAS with probably something like Adaptec ASR72405's
21:34 redragon_ JoeJulian, yup yup
21:34 dbruhn so similar to what I am running, best you're going to do without going to SSD
21:34 redragon_ gluster volume heal VolTest full
21:34 redragon_ Launching Heal operation on volume VolTest has been unsuccessful
21:34 redragon_ but not sure which log it wants me to look at
21:35 JoeJulian With replica 3 the self-heal check happens nearly in parallel, so that's not much extra overhead. Writes are concurrent, so network throughput is divided 3 ways instead of 2, but that's an engineering problem.
21:35 spligak JoeJulian, how about underlying brick structure? single-drive or some level of raid?
21:36 JoeJulian redragon_: handy... should have a clue in /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
21:36 JoeJulian spligak: For my purposes, lvm lv's on single drives.
21:36 redragon_ getting connection attempt failed in there
21:36 redragon_ then 0-management: found brick
21:37 JoeJulian redragon_: ,,(processes)
21:37 glusterbot redragon_: The GlusterFS core uses three process names: glusterd (management daemon, one per server); glusterfsd (brick export daemon, one per brick); glusterfs (FUSE client, one per client mount point; also NFS daemon, one per server). There are also two auxiliary processes: gsyncd (for geo-replication) and glustershd (for automatic self-heal). See http://goo.gl/F6jqx for more information.
21:37 JoeJulian redragon_: Make sure your glusterd's are all running, and the self-heal daemon (glusterfs process with glustershd stuff on the command line)
21:38 JoeJulian You should be able to check all of those with "gluster volume status"
21:39 redragon_ everything is up except for the brick in question, all the self heal is up
21:39 redragon_ Brick student137:/mnt/glusterfsN/ANN/A
21:39 redragon_ thats the desturbing one
21:40 JoeJulian redragon_: It would appear that brick's not up then. On student137 you should be able to "gluster volume start $vol force" to restart that brick.
21:40 redragon_ so i killed the raid setup and put in new drives (these are vms) and rebuild raid, would that have an effect on it?
21:41 redragon_ k
21:41 redragon_ think its because the whole thing got nuked, apparently including uuids
21:42 JoeJulian The brick service should be able to restore that.
21:42 redragon_ k
21:42 redragon_ the volume start comes back with volume already started
21:43 redragon_ if I use force I get, volume start: VolTest: failed: Failed to get extended attribute trusted.glusterfs.volume-id for brick dir /mnt/glusterfs. Reason : No data available
21:43 JoeJulian Hmm, that must be new...
21:43 JoeJulian Guess I know what my next blog post is going to be.
21:44 redragon_ running 3.4.0-3
21:44 redragon_ from the repo you pointed me to
21:44 JoeJulian Yeah, 3.4's new and (apparently) that's a new "feature".
21:44 JoeJulian I say it in quotes, but it probably is supposed to be a feature.
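
For the record, the usual way out of the "trusted.glusterfs.volume-id ... No data available" refusal is to put the xattr back on the brick root before forcing the start; a sketch using the volume and brick names from this session (reading the id from glusterd's info file is an assumption about where it is stored):

    VOLID=$(grep volume-id= /var/lib/glusterd/vols/VolTest/info | cut -d= -f2 | tr -d '-')
    setfattr -n trusted.glusterfs.volume-id -v 0x$VOLID /mnt/glusterfs
    gluster volume start VolTest force
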
21:45 redragon_ heh
21:45 redragon_ is there a way to drop a brick and put it back in as new?  I tried gluster peer detach student137
21:45 redragon_ but get peer exist in cluster
21:45 JoeJulian I wonder if you can "replace-brick" using the same brick for source and destination...
21:46 redragon_ i'll try anything, thats why they call it a test environment heh
21:46 JoeJulian redragon_: To remove a brick you would use "remove-brick"
21:47 redragon_ if there is a specific command you want me to test i'll be more than glad to
21:49 JoeJulian Are you going to be around in a few hours or tomorrow? I've got some $dayjob tasks that I've got to finish today and I have a conference call I need to take in an hour. I want to spin up some VMs and duplicate this so I can document a procedure to handle it.
21:49 redragon_ not tomorrow but I can be online friday no prob
21:49 redragon_ we are having a team outing tomorrow so I'll be boozing it up on the lake heh
21:49 JoeJulian :)
21:50 JoeJulian I'll probably have your solution on my blog (http://joejulian.name) tomorrow.
21:50 glusterbot Title: JoeJulian.name (at joejulian.name)
21:51 redragon_ your help has been greatly appreciated
21:51 redragon_ fyi replace-brick fails because the brick i'm replacing is not online, which is the whole issue to begin with heh
21:52 JoeJulian Do it with "commit force" instead of "start"
21:52 chirino joined #gluster
21:53 redragon_ k
21:53 jebba joined #gluster
21:53 redragon_ gluster volume replace-brick VolTest student137:/mnt/glusterfs student137:/mnt/glusterfs commit force
21:53 redragon_ volume replace-brick: failed: Brick: student137:/mnt/glusterfs not available. Brick may be containing or be contained by an existing brick
21:53 redragon_ so can't do itself
21:55 redragon_ what is used to reduce replica on a volume?
21:57 JoeJulian remove-brick with a replica stanza
21:57 redragon_ okay, makes sense
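
A sketch of the drop-and-re-add that redragon_ ends up doing a few lines below, with the names from this session (the counts assume the 4-brick, replica-4 layout he mentions afterwards):

    # take the dead brick out, dropping the replica count by one...
    gluster volume remove-brick VolTest replica 3 student137:/mnt/glusterfs force
    # ...then add it back and let self-heal repopulate it
    gluster volume add-brick VolTest replica 4 student137:/mnt/glusterfs
    gluster volume heal VolTest full
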
21:58 redragon_ this is actually really cool tools, i'm not used to really cool tools
21:58 _pol joined #gluster
21:59 redragon_ sweet just removed it and added it and its all fixed
22:00 JoeJulian Which is fine for what you're actually doing, but if you have 500 bricks in replica 2....
22:01 redragon_ it may be because in my test its 4 bricks replica 4
22:01 redragon_ this round anyway
22:01 JoeJulian @replica
22:01 glusterbot JoeJulian: I do not know about 'replica', but I do know about these similar topics: 'check replication', 'geo-replication'
22:01 JoeJulian Damn... I need to add that piece of chastisement.
22:01 redragon_ that should point to your replication dos and donts heh
22:02 JoeJulian seriously
22:02 redragon_ was just reading that page
22:03 redragon_ i run a fair sized mirror infrastructure and looking for options in replacing this god awful GFS2
22:03 redragon_ so started experimenting with gluster today
22:04 JoeJulian @learn replica as Please see http://joejulian.name/blog/glust​erfs-replication-dos-and-donts/ for replication guidelines.
22:04 glusterbot JoeJulian: The operation succeeded.
22:04 redragon_ gotta love smart bots
22:04 JoeJulian I'd have pulled out all my hair long ago without it.
22:05 redragon_ well with nearly 20 years on irc it definately wouldn't be the same without info bots
22:06 redragon_ and with that I thank you again and leave you to work
22:13 MugginsM joined #gluster
22:32 duerF joined #gluster
22:46 SynchroM joined #gluster
23:00 chirino joined #gluster
23:32 fidevo joined #gluster