
IRC log for #gluster, 2013-12-02


All times shown according to UTC.

Time Nick Message
00:47 hchiramm__ joined #gluster
01:18 gdubreui joined #gluster
01:29 _polto_ joined #gluster
01:50 hchiramm__ joined #gluster
01:58 DV__ joined #gluster
02:05 _BryanHm_ joined #gluster
02:14 mattapp__ joined #gluster
02:18 psyl0n joined #gluster
02:20 mattappe_ joined #gluster
02:31 cogsu joined #gluster
02:34 mattapp__ joined #gluster
02:41 harish joined #gluster
03:14 hybrid5121 joined #gluster
03:19 kshlm joined #gluster
03:19 mattapp__ joined #gluster
03:21 mattapp__ joined #gluster
03:35 mattapp__ joined #gluster
03:36 TDJACR joined #gluster
03:38 kanagaraj joined #gluster
03:42 shubhendu joined #gluster
03:47 satheesh1 joined #gluster
03:51 dusmantkp_ joined #gluster
03:52 itisravi joined #gluster
03:55 sgowda joined #gluster
04:01 mohankumar joined #gluster
04:08 mattappe_ joined #gluster
04:09 RameshN joined #gluster
04:12 davinder4 joined #gluster
04:14 shyam joined #gluster
04:16 hagarth joined #gluster
04:18 ppai joined #gluster
04:22 vpshastry1 joined #gluster
04:28 CheRi joined #gluster
04:30 hchiramm__ joined #gluster
04:32 mohankumar joined #gluster
04:40 ndarshan joined #gluster
04:45 MiteshShah joined #gluster
04:51 hchiramm__ joined #gluster
05:06 raghu joined #gluster
05:14 psharma joined #gluster
05:20 vpshastry joined #gluster
05:23 hchiramm__ joined #gluster
05:25 vpshastry2 joined #gluster
05:27 dylan_ joined #gluster
05:30 kshlm joined #gluster
05:36 aravindavk joined #gluster
05:36 DV__ joined #gluster
05:42 saurabh joined #gluster
05:51 solid_liq joined #gluster
05:54 solid_liq I'm trying to find a filesystem to suit my network filesystem needs.  Basically, I have a workgroup of about 5 people running Ubuntu 12.04 LTS (well, one is on Debian 7).  I would like for all of our /home/ directory data (not the configs, just the data we keep) to be stored on a server so that it can be backed up automatically by our IT department's backup system.  I don't want to have people unable to work if the server goes down due to ...
05:54 solid_liq ... unavailable data and hung processes, so I'm okay with just mirroring the data to the network filesystem share (so long as it's automatic).  I only have one server to use, so I can't stripe across multiple nodes (and I probably won't get budget approval for that for a while).  So, can you tell me if GlusterFS might work for what I want to do?  Thanks.
05:55 solid_liq Oh, and I've already separated the data from the configs onto a separate partition, currently mounted on /srv/
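A minimal sketch of what the GlusterFS side of such a setup could look like (it does not address keeping local copies available when the server is down); the server name storage1, brick path /srv/gluster/home-data, volume name homedata and mountpoint are all hypothetical placeholders:

    # on the server, using a dedicated partition for the brick
    gluster volume create homedata storage1:/srv/gluster/home-data
    gluster volume start homedata

    # on each workstation, mount with the FUSE client
    mount -t glusterfs storage1:/homedata /mnt/homedata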
05:55 shubhendu joined #gluster
06:01 MrNaviPacho joined #gluster
06:01 MrNaviPacho Is it possible to stop gluster, edit data in the brick, and then enable it again?
06:03 meghanam joined #gluster
06:05 vpshastry1 joined #gluster
06:09 satheesh joined #gluster
06:10 bulde joined #gluster
06:12 lalatenduM joined #gluster
06:13 hchiramm__ joined #gluster
06:17 shylesh joined #gluster
06:19 shri joined #gluster
06:22 krypto joined #gluster
06:26 dylan_ Hi, I know xfs is the recommended filesystem for bricks in glusterfs, but due to some limitations I have planned to use ext3 for a 100G disk partition in production. I just need to know that there is nothing wrong with using it apart from some performance penalty.
06:26 dylan_ Is my understanding correct..?
06:30 davinder4 joined #gluster
06:35 Alex dylan_: I am not an expert, but as I understand it, it should be "okay". I'm having some issues with unicode (ext4) but I think that might be unrelated at the moment
06:35 bala joined #gluster
06:37 anands joined #gluster
06:40 ngoswami joined #gluster
06:42 hchiramm__ joined #gluster
06:46 hagarth joined #gluster
06:47 sgowda joined #gluster
06:48 vpshastry1 joined #gluster
06:49 vshankar joined #gluster
06:52 dylan_ Alex: my options are ext3 and ext4. I'd read some articles showing that ext4 has some issues (maybe those articles are very old). Hence I planned to use ext3 instead of ext4. Hope your unicode issue is unrelated to the filesystem that you use.
06:53 Alex I hope so too! ;-)
06:53 shyam joined #gluster
07:19 shyam joined #gluster
07:23 jtux joined #gluster
07:27 gluslog joined #gluster
07:28 shri joined #gluster
07:32 sticky_afk joined #gluster
07:32 stickyboy joined #gluster
07:33 shylesh joined #gluster
07:33 hagarth joined #gluster
07:36 badone joined #gluster
07:39 satheesh1 joined #gluster
07:43 shubhendu joined #gluster
07:45 crashmag_ joined #gluster
07:45 vpshastry1 joined #gluster
07:48 sgowda joined #gluster
08:06 StarBeast joined #gluster
08:15 shyam joined #gluster
08:19 eseyman joined #gluster
08:27 shylesh joined #gluster
08:33 ndevos dylan_, Alex: there was a ,,(ext4) problem, and that applied to ext3 too
08:33 ndevos @ext4
08:34 ndevos glusterbot: ping?
08:35 ndevos well, I think glusterbot should point to http://joejulian.name/blog/glusterfs-bit-by-ext4-structure-change/
08:39 mgebbe joined #gluster
08:40 mgebbe_ joined #gluster
08:40 shubhendu joined #gluster
08:41 shubhendu joined #gluster
08:41 hagarth ,,ext4
08:41 hagarth @ext3
08:41 hagarth @help
08:41 samppah hi
08:41 T0aD hello
08:41 hagarth glusterbot: are you mute today? :)
08:41 T0aD hi
08:42 samppah we have lost him :(
08:42 T0aD i guess he's deaf or mute
08:42 hagarth :(
08:54 vpshastry1 joined #gluster
08:54 crashmag joined #gluster
08:56 solid_liq he's deaf-mute
08:57 vimal joined #gluster
08:58 MiteshShah joined #gluster
09:01 crashmag joined #gluster
09:21 geewiz joined #gluster
09:23 calum_ joined #gluster
09:26 jag3773 joined #gluster
09:27 shubhendu joined #gluster
09:29 21WABT69Z joined #gluster
09:29 1JTABBYVA joined #gluster
09:39 davidbierce joined #gluster
09:41 _polto_ joined #gluster
09:42 ctria joined #gluster
09:46 vshankar joined #gluster
09:52 hagarth joined #gluster
09:54 dylan_ ndevos: thanks for the tip. I have 2.6.18-308.11.1.el5 kernel and hope I have no impact while my bricks are running on ext3
09:55 ndevos dylan_: if you're running glusterfs 3.4.1 you should be fine
09:55 dylan_ Yes, I run 3.4.1
09:57 dylan_ I prefer xfs, but I'm on a Red Hat system that requires a Storage subscription to run the xfs modules; hence I have to settle for ext3 if I don't have any alternative way to install xfs
10:02 samppah aww, that sucks
10:07 dkorzhevin joined #gluster
10:08 harish joined #gluster
10:10 dylan_ let me try ext3 and see; the brick size is 100G as of now. The requirement is to host 1M-5M size pdf files for users to download over the web.
10:17 rastar joined #gluster
10:20 gdubreui joined #gluster
10:21 dannyroberts_ joined #gluster
10:21 StarBeast joined #gluster
10:22 jclift joined #gluster
10:25 shubhendu joined #gluster
10:31 hagarth joined #gluster
10:43 Norky joined #gluster
10:46 keytab joined #gluster
11:05 edward1 joined #gluster
11:16 dkorzhevin joined #gluster
11:42 spandit joined #gluster
11:52 diegows joined #gluster
11:57 JordanHackworth joined #gluster
11:57 bivak joined #gluster
12:01 semiosis joined #gluster
12:10 vpshastry joined #gluster
12:11 itisravi joined #gluster
12:13 StarBeast joined #gluster
12:14 CheRi joined #gluster
12:20 mohankumar joined #gluster
12:23 ppai joined #gluster
12:25 rwheeler joined #gluster
12:32 Oneiroi joined #gluster
12:34 mohankumar joined #gluster
12:43 mattf joined #gluster
12:47 Oneiroi joined #gluster
12:47 Oneiroi joined #gluster
12:55 dneary joined #gluster
12:55 dneary joined #gluster
12:55 dneary it cleared things up (for now)
12:55 dneary Some process was doing something funky to the disk
12:56 dneary Wrong channel, sorry
12:58 CheRi joined #gluster
12:59 ngoswami joined #gluster
13:00 shyam joined #gluster
13:01 FilipeCifali_ joined #gluster
13:02 FilipeCifali_ Oh wrong channel LOL
13:03 FilipeCifali_ morning guys
13:05 keytab joined #gluster
13:07 FilipeCifali_ What can I do to debug messages like: INFO: task :18741 blocked for more than 120 seconds
13:11 hagarth joined #gluster
13:15 vpshastry joined #gluster
13:15 vpshastry left #gluster
13:15 ndarshan joined #gluster
13:18 _polto_ joined #gluster
13:18 anands joined #gluster
13:22 dusmant joined #gluster
13:32 rastar joined #gluster
13:32 psyl0n joined #gluster
13:47 japuzzo joined #gluster
13:50 LoudNoises joined #gluster
13:51 davidbierce joined #gluster
13:52 MrNaviPacho joined #gluster
13:59 mohankumar joined #gluster
14:04 mtanner joined #gluster
14:17 hagarth @help
14:19 chirino joined #gluster
14:25 hybrid5121 joined #gluster
14:35 johnmark joined #gluster
14:37 dbruhn joined #gluster
14:41 _BryanHm_ joined #gluster
14:42 bennyturns joined #gluster
14:43 neofob joined #gluster
14:44 tqrst ok fine mr. rebalance, it's not like I needed all that memory for other things anyway
14:51 FilipeCifali_ is a high number of clients connecting to a 2 brick (replica 2) cluster a bad idea?
14:51 FilipeCifali_ by high I mean 6 clients
14:51 FilipeCifali_ high concurrency over files (different files, same brick)
14:54 kaptk2 joined #gluster
14:55 dbruhn FilipeCifali_, not sure what you are asking
14:57 _pol joined #gluster
14:58 shyam joined #gluster
14:58 achuz joined #gluster
15:04 NigeyUK joined #gluster
15:04 NigeyUK afternoon :-)
15:06 georgeh|workstat joined #gluster
15:15 bugs_ joined #gluster
15:17 B21956 joined #gluster
15:23 ira joined #gluster
15:24 ira joined #gluster
15:32 wushudoin joined #gluster
15:35 Technicool joined #gluster
15:38 social hagarth: ping, are you alive?
15:44 jag3773 joined #gluster
15:46 mkzero joined #gluster
15:46 mkzero left #gluster
15:47 mkzero joined #gluster
15:50 sroy_ joined #gluster
15:53 dusmant joined #gluster
16:05 mkzero hi. i have a problem with the gluster volume top command on my gluster 3.4.1 cluster. it seems like the clear subcmd doesn't work completely. whenever i use it,  shortly afterwards old files reappear with their old count. this is, until i restart the brick - after that the count for all files is set to 0. is that the normal behaviour for gluster?
16:06 social mkzero: I'm not sure if I understand correctly but it sounds like bug, could you please report it?
16:09 vpshastry joined #gluster
16:10 vpshastry left #gluster
16:12 mkzero @social let's say i execute 'gluster volume top myvol read' - i get the stats as expected. then i execute 'gluster volume top myvol clear', wait a few seconds and reissue 'gluster volume top myvol read' and in the output there are stats from *before* the clear command
16:13 mkzero @social if it's a bug, where would i report it?
16:17 hagarth social: pong, seem to be alive :)
16:21 social mkzero: it works fine here
16:21 social I did gluster volume top volume open; got result
16:21 social gluster volume top volume clear ; gluster volume top volume open and it was clear
16:22 social mkzero: I have 3.4.1 here, hmm :/
16:22 _pol joined #gluster
16:23 social hagarth: well I'm hunting some memleak and I got a bit puzzled, do you have 5min? Please checkout release-3.4 in git and look at rpc/rpc-lib/src/rpcsvc.c
16:24 social line 363 calls rpcsvc_alloc_request, which wraps mem_get, to allocate req; in the error case after that, req is just set to NULL
16:25 social hagarth: shouldn't there be some mem_put call or something? I think it's in rpcsvc_request_destroy but I don't see it in rpcsvc_request_create
16:26 social so (this is only my theory) when I DoS gluster with, for example, so many gluster volume status requests that it starts callocing memory in mem_get, it'll leak the calloced memory here?
16:29 zerick joined #gluster
16:34 semiosis :O
16:36 gmcwhistler joined #gluster
16:39 mkzero @social maybe it's the version from the semiosis PPA? :/ (our gluster cluster is running on Ubuntu)
16:40 semiosis mkzero: you can file a bug here...
16:40 davinder4 joined #gluster
16:40 semiosis you can file a bug here...
16:41 semiosis glusterbot: meh
16:41 FilipeCifali joined #gluster
16:41 semiosis weird
16:41 semiosis glusterbot: reconnect
16:42 semiosis i think i am lagging :(
16:42 hagarth social: checking
16:43 semiosis joined #gluster
16:44 jbd1 joined #gluster
16:45 semiosis JoeJulian: is it just me or has glusterbot stopped working?
16:46 FilipeCifali I just though he didn't liked me
16:46 FilipeCifali thought*
16:47 georgeh|workstat joined #gluster
16:47 _pol joined #gluster
16:51 social hagarth: well I'm probably wrong as my C-fu is quite low :/
16:53 Gilbs1 joined #gluster
16:56 Gilbs1 Is there any additional documentation on the best way to recover from a disaster with your geo-replication slave volume?  Is this the best solution?  https://access.redhat.com/site/documentation/en-US/Red_Hat_Storage/2.0/html/Administration_Guide/ch11s05.html
16:56 neofob1 joined #gluster
16:59 hagarth social: your symptom is gluster volume status in a loop results in glusterd memory leak?
17:01 social hagarth: well, something like that, see bug 1032122
17:05 giannello joined #gluster
17:06 hagarth social: will check that over and try to re-create the problem. turning on memory accounting might help in determining the leak. maybe i should write a blog post about that.
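A rough sketch of that reproduction loop (the volume name myvol and the iteration counts are arbitrary placeholders), watching glusterd's resident memory while issuing volume status requests:

    #!/bin/bash
    # hammer glusterd with status requests and sample its RSS;
    # a steadily growing RSS would point at the suspected leak
    PID=$(pidof glusterd)
    for i in $(seq 1 10000); do
        gluster volume status myvol > /dev/null 2>&1
        if (( i % 100 == 0 )); then
            echo "$i requests, glusterd RSS: $(ps -o rss= -p "$PID") kB"
        fi
    done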
17:13 _polto_ joined #gluster
17:14 FilipeCifali If I get this kind of message: kernel: INFO: task httpd:(randomPID) blocked for more than 120 seconds. when Apache is dealing w/ sessions, what can I do to debug it?
17:14 FilipeCifali the /session dir is stored over glusterfs
17:14 FilipeCifali w/ xfs partition
17:15 semiosis FilipeCifali: php sessions?
17:16 mkzero FilipeCifali: I get this message when one of the bricks is unavailable and apache is trying to access files on that brick. could this be the case?
17:16 jbd1 joined #gluster
17:16 FilipeCifali that's my case, but when that happens, all the clients have problems and apache hangs forever
17:17 FilipeCifali even a kill -9 doesn't kill it
17:17 semiosis i suggest using memcache
17:17 semiosis instead of file-backed session storage
17:17 giannello FilipeCifali, using gluster for php sessions is not a great idea, it's considerably slower than using an in memory key/value store
17:19 giannello redis with replication, or a central memcache are better candidates
17:19 FilipeCifali my problem is that I can't customize all webmail versions to use memcache :(
17:19 semiosis FilipeCifali: you do it at the PHP level
17:19 giannello you can actually configure it at PHP level
17:19 semiosis the apps dont know the difference
17:19 semiosis right
17:19 FilipeCifali hmmm
17:19 FilipeCifali gonna check on that
17:19 semiosis it's real easy
17:19 giannello BTW, for this gluster problem: are you using the FUSE client or nfs?
17:20 FilipeCifali FUSE client
17:20 semiosis there are two php modules for memcache, confusingly called "memcache" and "memcached"
17:20 FilipeCifali I was thinking about switching to NFS
17:20 FilipeCifali but then it was like, why use gluster then? I could use parallel NFS or something else
17:20 FilipeCifali so I thought my setup was the problem, not a gluster problem
17:20 semiosis i use the memcached one, and here is the PHP config i use... http://pastie.org/8523375
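Since pastie links tend to rot, the gist of that config is roughly the following; a sketch assuming the "memcached" PECL extension, a memcache server on 127.0.0.1:11211, and a CentOS-style conf.d layout (the file path, host and port are all placeholders):

    # hypothetical drop-in file; adjust path, host and port for your setup
    cat > /etc/php.d/memcached-sessions.ini <<'EOF'
    session.save_handler = memcached
    session.save_path = "127.0.0.1:11211"
    EOF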
17:22 giannello FilipeCifali, do you have mirrored bricks?
17:22 FilipeCifali replica 2 setup, 2 servers only for glusterd
17:22 FilipeCifali but I'm starting w/ only 6 clients
17:23 FilipeCifali around 1600 hits at max
17:24 semiosis afk
17:25 giannello FilipeCifali, even if one of the two servers goes down, if the bricks are properly mirrored you should not have any problem
17:25 giannello does this happen as soon as the brick goes down or when it comes back, and self-heal starts?
17:28 FilipeCifali this actually happens in a weird way; I've now found where the logs should be
17:28 FilipeCifali I have 2 bricks up
17:28 FilipeCifali then 1 client complains about writing
17:28 FilipeCifali then the other 5 clients complain
17:29 FilipeCifali I halt one brick server
17:29 FilipeCifali the clients can write again
17:29 FilipeCifali I get the brick server back online
17:29 FilipeCifali self-heal
17:29 FilipeCifali no more problems for 12 hours
17:35 Mo__ joined #gluster
17:41 mkzero_ joined #gluster
17:44 psyl0n joined #gluster
17:47 mgebbe___ joined #gluster
17:48 plarsen joined #gluster
17:51 vimal joined #gluster
17:52 giannello FilipeCifali, anything in the logs?
17:54 FilipeCifali I'm not quite sure in what I can post here
17:54 giannello use nopaste
17:54 FilipeCifali since it's a bunch of data, but no clear info
17:54 giannello or pastie.org
17:56 FilipeCifali god 260 Mb of log
17:56 FilipeCifali going to tail -5000
17:56 FilipeCifali not quite sure what to paste
17:59 mkzero_ joined #gluster
18:00 mkzero joined #gluster
18:00 FilipeCifali http://nopaste.info/f05068886a.html
18:00 FilipeCifali if I can grep some error message in that log
18:01 FilipeCifali let me know which messages
18:04 _pol joined #gluster
18:05 lpabon joined #gluster
18:06 giannello well, maybe lowering the loglevel from DEBUG to INFO could be a good choice
18:06 giannello also upgrading to 3.4.1 ;)
18:08 ccha joined #gluster
18:11 ccha joined #gluster
18:11 FilipeCifali yeah I'm running info now
18:12 FilipeCifali gonna paste today log
18:12 FilipeCifali oh, is there a way to fix the log date?
18:12 FilipeCifali (actually, time)
18:12 FilipeCifali bricks]# date Mon Dec  2 16:12:17 BRST 2013
18:12 FilipeCifali but my logs are written like [2013-12-02 13:26:13.611619] E [marker-quota-helper.c:229:mq_dict_set_contribution] (-->/usr/lib64/glusterfs/3.4.0/xlator/debug/io-stats.so(io_stats_lookup+0x148) [0x7f758d399b47] (-->/usr/lib64/glusterfs/3.4.0/xlator/features/marker.so(marker_lookup+0x299) [0x7f758d5b0f8c] (-->/usr/lib64/glusterfs/3.4.0/xlator/features/marker.so(mq_req_xattr+0x5e) [0x7f758d5c12fa]))
18:12 FilipeCifali -3 hours
18:13 FilipeCifali http://nopaste.info/7b79e181b7.html
18:23 mkzero joined #gluster
18:27 bulde joined #gluster
18:28 psyl0n joined #gluster
18:35 vpshastry joined #gluster
18:40 dbruhn FilipeCifali, are you running "gluster volume status" and "gluster peer status" at the time you are having these errors to make sure everything is communicating appropriately?
18:41 dbruhn You also seem to have mismatched versions of the client and server
18:42 dbruhn I see references to 3.4 and 3.3 in your log files
18:42 FilipeCifali my clients are all 3.4.1 (centos 6.4)
18:43 FilipeCifali and my servers are 3.4 (gentoo)
18:43 FilipeCifali but I'll force reinstall the clients
18:43 dbruhn [rpcsvc.c:1111:rpcsvc_submit_generic] 0-rpc-service: failed to submit message (XID: 0x765882x, Program: GlusterFS 3.3, ProgVers: 330, Proc: 26) to rpc-transport (tcp.gv0-server)
18:43 dbruhn start by checking your versions on everything
18:44 dbruhn something is saying 3.3
18:45 psyl0n joined #gluster
18:46 FilipeCifali ok
18:48 FilipeCifali both servers are running 3.4
18:48 FilipeCifali [2013-12-02 18:47:56.285923] I [glusterfsd.c:1910:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.4.0 (/usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/lib/glusterd/glustershd/run/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/b4a3448540723854e34b6a4aa1545e1d.socket --xlator-option *replicate*.node-uuid=6764aa20-9dca-4bdd-beee-5bd52a07413c)
18:48 FilipeCifali [2013-12-02 18:47:20.482436] I [glusterfsd.c:1910:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.4.0 (/usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/lib/glusterd/glustershd/run/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/33f806d9d52351d2c36a287d9cddcf57.socket --xlator-option *replicate*.node-uuid=e55a83c2-1bb1-4b7f-8ec7-6daa0e1e7cda)
18:48 dbruhn Match your version all to 3.4.1
18:49 FilipeCifali upgrade the servers?
18:50 dbruhn You have servers at 3.4, and clients at 3.4.1, might as well take that unknown out of the question
18:50 dbruhn 3.4 was the first version of Gluster to support mismatched client connections, it's bound to not be perfect
18:50 neofob left #gluster
18:50 FilipeCifali hmm, ok
18:50 FilipeCifali let's try this
18:52 neofob joined #gluster
19:00 _pol joined #gluster
19:01 FilipeCifali is it worth using 3.4 atm?
19:01 FilipeCifali maybe I should go for 3.3 if it's more stable
19:02 dbruhn 3.3.2 is pretty rock solid, 3.4.1 is better than 3.4.0
19:05 FilipeCifali gonna try 3.4.1 then switch to 3.3.2 if this persists
19:05 FilipeCifali hope it doesn't
19:06 lpabon joined #gluster
19:06 dbruhn Someone else was having some weird PHP related issues the other day on 3.4.1, he didn't have those issues on 3.2.7, it was related to php trying to stat a file.
19:09 aliguori joined #gluster
19:11 FilipeCifali oh damn already this late, I have to go home, gonna leave emerge --sync --quiet and shutdown this baby
19:11 FilipeCifali thanks for the attention anyway, let's see how things handle in the same version :)
19:11 FilipeCifali see ya
19:11 dbruhn It at least removed one unknown
19:11 dbruhn good luck
19:28 unit3 joined #gluster
19:30 unit3 Just trying out gluster, I've got 4 identical bare metal boxes, is it worth doing striping + replication, or should I just stick to distribution + replication? It's gonna be a mix of files ~1MiB to ~4 GiB.
19:33 chirino joined #gluster
19:33 Gilbs1 left #gluster
19:42 _pol joined #gluster
19:43 NigeyUK quick Q .. is there a parameter to add to /etc/fstab to tell it that if gfs01 is down it should try gfs02 instead? i think i read about it somewhere earlier today but i cannot find it...
19:46 dbruhn unit3, striping is pretty costly
19:47 dbruhn you'll probably want to stay with distributed
19:48 dbruhn here is a good article on it
19:48 dbruhn http://joejulian.name/blog/should-i-use-stripe-on-glusterfs/
19:50 unit3 dbruhn: that's what I was guessing, that it doesn't make sense unless you have a much bigger pool, and uniformly larger files (like block volumes for VMs or whatever).
19:51 unit3 Thanks for the article link though, very informative.
19:53 Alpinist joined #gluster
19:56 FooBar unit3: I wouldn't bother with the striping
19:57 unit3 Yep, that seems to be the consensus. I just did a quick test with striped and non-striped, and there was no measurable performance advantage on my cluster to striping anyway.
19:57 FooBar NigeyUK: I just use a round-robin dns name as gluster mountpoint ;)
19:57 unit3 NigeyUK: I'd imagine something like ctdb could automate a roaming IP for this purpose, as well.
19:57 dbruhn FooBar, if he gets a fstab miss with rrdns, it doesn't fix the miss at boot and will require a manual mount.
19:58 unit3 ctdb should work, and then you can use it for other services that sit on top of gluster, like samba.
19:58 unit3 which is something I'm planning to test once I've got gluster working. ;)
19:58 sac`away joined #gluster
19:58 unit3 anecdotally, gluster seems waaaaay faster than cephfs on the same hardware, which is interesting to me.
19:58 NigeyUK ah oki, thanks, will look into ctdb :)
19:59 NigeyUK had an awful time testing this on AWS :/
19:59 _pol_ joined #gluster
20:02 NigeyUK ach dammit
20:02 NigeyUK EC2 (Amazon Elastic Compute Cloud) does not support VIPs, hence CTDB is not supported with Red Hat Storage for Amazon Web Services.
20:02 radez johnmark: fyi, we were wondering about if you should put all the nodes in a cluster in the shares config file for openstack...
20:02 radez eharney | radez: no, just use one and the backupvolfile-server option
20:04 _pol joined #gluster
20:04 chirino joined #gluster
20:04 unit3 NigeyUK: that's gonna sort of rule out the kind of failover you're looking for then, I think. you might need to write a mount script that tries your servers in order until one succeeds.
20:05 unit3 should only be about 4 lines of bash, so it's not the end of the world. ;)
20:06 dbruhn unit3, that's actually not a bad idea, i wonder if the mount code could be changed to be able to have a first target, and a second target if the first fails...
20:07 NigeyUK sorry back, yup, this'll have to be a bodge job, and the boss wonders why im so against chucking all their services "in the cloud" grrr
20:07 wica joined #gluster
20:08 _pol_ joined #gluster
20:10 clag___ joined #gluster
20:10 unit3 dbruhn: yeah, ideally it'd just live in the gluster mount code somehow, but in lieu of that, a bash loop until successful mount is pretty easy to write. ;)
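A rough version of that retry loop (server names, volume and mountpoint are placeholders):

    #!/bin/bash
    # try each gluster server in turn until the volume mounts
    for server in gfs01 gfs02; do
        mount -t glusterfs "$server:/gv0" /mnt/gv0 && break
    done
    mountpoint -q /mnt/gv0 || echo "all gluster servers unreachable" >&2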
20:11 davidbierce joined #gluster
20:11 unit3 NigeyUK: no, don't you understand, the cloud is magic fairy dust that fixes all problems and never causes issues. ^^
20:12 NigeyUK hahaha i wont bore you with the things ive battled with today then :p
20:12 rwheeler joined #gluster
20:14 FooBar I'm just glad all my servers are not in the cloud :) ... or if cloud, our own ;)
20:15 FooBar and stuff like gluster... i'd like non-virtual
20:17 neofob joined #gluster
20:21 [o__o] left #gluster
20:24 [o__o] joined #gluster
20:27 NigeyUK FooBar, i agree 100% i mean, if a server we have now goes titsup, i just login to the drac via vpn and i can fix most issues, aws = no serial, have to unmount the ebs volume, remount it in a tmp instance, then try to troubleshoot, it's crazy.
20:27 FooBar yeah, and no expectation of 'steady' performance
20:28 FooBar also, for large storage it's quite expensive... especially if you need your systems and storage 24/7 anyway
20:28 FooBar which you mostly do
20:29 neofob left #gluster
20:30 NigeyUK yup, they think it will save money, i giggled, but meh, i just do as i'm told and advise .. no guarantees it will be listened to or adhered to.
20:33 bgpepi joined #gluster
20:33 FooBar yuh, $client here keeps asking about cloud stuff, I do a quick calculation, they shriek, and buy new hardware ;)
20:33 ndk joined #gluster
20:36 NigeyUK lol that didnt work for me :(
20:38 Spiculum anyone have any experience bandwidth throttling gluster geo-rep?
20:42 badone joined #gluster
20:44 johnmark NigeyUK: ha, and then you can say 'I told you so?'
20:47 NigeyUK johnmark, that's the plan ;)
21:01 _pol joined #gluster
21:05 sroy_ joined #gluster
21:10 purpleidea Spiculum: i would recommend using shorewall (iptables) to do so.
21:11 tqrst !bug
21:11 * tqrst prods glusterbot
21:11 ccha2 joined #gluster
21:13 FooBar Spiculum: or trickle(d)
21:14 Spiculum i'm using trickle for some rsync
21:14 Spiculum but where do i invoke trickle?
21:15 Spiculum i don't want to throttle all connections, just to the geo-rep slave/host
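In the spirit of the shorewall/iptables suggestion, one way to throttle only the traffic going to the geo-rep slave is egress shaping with tc on the master; a rough sketch, assuming eth0 is the outbound interface and 203.0.113.10 is the slave (both placeholders, and the 10mbit cap is arbitrary):

    # unmatched traffic stays in the unrestricted default class 1:10;
    # traffic to the geo-rep slave is capped in class 1:20
    tc qdisc add dev eth0 root handle 1: htb default 10
    tc class add dev eth0 parent 1: classid 1:10 htb rate 1000mbit
    tc class add dev eth0 parent 1: classid 1:20 htb rate 10mbit ceil 10mbit
    tc filter add dev eth0 parent 1: protocol ip prio 1 u32 \
        match ip dst 203.0.113.10/32 flowid 1:20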
21:21 Oneiroi joined #gluster
21:25 psyl0n joined #gluster
21:25 psyl0n joined #gluster
21:26 georgeh|workstat joined #gluster
21:26 wushudoin joined #gluster
21:30 wushudoin joined #gluster
21:31 unit3 left #gluster
21:55 plarsen joined #gluster
21:56 georgeh|workstat joined #gluster
21:56 psyl0n joined #gluster
21:59 mattapp__ joined #gluster
22:02 tomased joined #gluster
22:05 jbd1 joined #gluster
22:19 cogsu joined #gluster
22:20 thundara_ joined #gluster
22:23 thundara_ Hi, I have a question with regards to adding a new node to a gluster cluster
22:24 gh5046 joined #gluster
22:24 gh5046 Where can I find documentation for Gluster 3.0?
22:24 gh5046 The links on this page do not work:
22:24 gh5046 http://gluster.org/community/documentation/index.php/Gluster_3.0_Filesystem_Installation_Guide
22:24 mattapp__ joined #gluster
22:25 thundara_ I set up the server with centos, successfully probed / was probed back by the only other node in the cluster, and added the bricks on the system to the volume
22:25 thundara_ But when I went to mount the volume, with mount -t gluster host:/volume folder, I only saw 1 meta file in the folder
22:25 thundara_ -rw-r--r--. 1 1001 1001 32381 Nov 24 15:08 FE71EA163D1F08941A52F5106B3381DB92C82A51.meta
22:26 thundara_ Am I missing something? Are the files not copied over to the new bricks upon adding to the cluster?
22:28 thundara_ The bricks didn't have any striping / replication options set when I added them, but I wouldn't think I'd need that to see all the files in the volume when it is mounted
22:38 mattapp__ joined #gluster
22:43 dbruhn You need to rebalance the system
22:46 mattapp__ joined #gluster
22:46 dbruhn gh5046, why are you running 3.0?
22:46 gh5046 Old system
22:46 gdubreui joined #gluster
22:47 dbruhn thundara_, you need to rebalance the system per the documentation before it will start writing to the new location.
22:48 tqrst or fix-layout at the very least
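For reference, the commands involved look like this (VOLNAME is a placeholder); fix-layout only spreads the directory layout over the new bricks, while a full rebalance also migrates existing data:

    gluster volume rebalance VOLNAME fix-layout start
    gluster volume rebalance VOLNAME start
    gluster volume rebalance VOLNAME status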
22:50 dbruhn gh5046, here is a link to the 3.1 documentation
22:50 dbruhn http://www.gluster.org/community/documentation/index.php/Gluster_3.1_Filesystem_Administration_Guide
22:50 dbruhn http://www.gluster.org/community/documentation/index.php/Gluster_3.1.x_Documentation
22:50 _pol joined #gluster
22:50 gh5046 Is it similar to 3.0?
22:51 dbruhn It's as close as I can find
22:51 _pol joined #gluster
22:51 gh5046 Thank you very, very much.
22:51 dbruhn Are you able to upgrade the system?
22:52 tqrst fwiw, 3.4.1 is much, much more stable than 3.{0..3} was for me on centos
22:52 thundara_ Can I set a brick to replica / stripe without removing and re-adding?
22:52 dbruhn yeah, even 3.2.7 or 3.3.2 would be a huge improvement, 3.0 didn't even have centralized management
22:54 tqrst rebalance still leaks like crazy, but at least nothing segfaults when cancelling it ;)
22:58 rotbeard joined #gluster
23:00 mattapp__ joined #gluster
23:02 mattapp__ joined #gluster
23:03 dbruhn left #gluster
23:04 andreask joined #gluster
23:06 jag3773 joined #gluster
23:07 g4rlic joined #gluster
23:07 druonysus joined #gluster
23:08 gh5046 dbruhn: We will be at some point.  The system is running fine at the moment, I'm just trying to find information about a specific parameter.
23:08 gh5046 Thank you for your help
23:09 gh5046 left #gluster
23:11 g4rlic question.  Trying to create a volume via gluster volume create.  It fails with: "volume create <volname>: failed." and that's it.
23:11 g4rlic I'm running glusterfs 3.4.1 from gluster's repository.
23:13 g4rlic the error in question: [$today] E [glusterd-utils.c:329:glusterd_lock] 0-management: Unable to get lock for uuid: $UUID, lock held by: $UUID
23:14 g4rlic $UUID is the same in both instances.
23:14 g4rlic Truly, I am puzzled.  The logs don't seem to give much of a hint.
23:26 elyograg joined #gluster
23:28 thundara_ Is it possible to remove a brick on a distributed volume without data loss? (That is, rebalance off of it, then remove it)
23:34 glusterbot <http://goo.gl/PEBQU> (at joejulian.name)
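For the remove-brick question above: yes, in 3.3+/3.4 a brick can be drained before removal. A sketch of the flow (volume and brick names are placeholders); data migrates off the brick while the operation runs, and commit should only be issued once status reports completion:

    gluster volume remove-brick VOLNAME server1:/export/brick1 start
    gluster volume remove-brick VOLNAME server1:/export/brick1 status
    gluster volume remove-brick VOLNAME server1:/export/brick1 commit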
23:40 msolo joined #gluster
23:45 glusterbot hagarth: I do not know about 'ext3', but I do know about these similar topics: 'ext4'
23:45 glusterbot hagarth: (help [<plugin>] [<command>]) -- This command gives a useful description of what <command> does. <plugin> is only necessary if the command is in more than one plugin.
23:45 glusterbot samppah: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
23:45 glusterbot T0aD: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
23:45 glusterbot T0aD: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
23:49 g4rlic Ignore me.
23:49 g4rlic SElinux was blocking *something*.
23:49 g4rlic You'd think with a product owned by Redhat, SELinux support would be working. ;)  Did I miss something in the docs?
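A quick way to confirm whether SELinux is the culprit in a case like this (standard tooling, nothing gluster-specific):

    getenforce                   # Enforcing / Permissive / Disabled
    ausearch -m avc -ts recent   # recent AVC denials, if auditd is running
    setenforce 0                 # temporarily permissive to test; re-enable afterwards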
