
IRC log for #gluster, 2014-06-03


All times shown according to UTC.

Time Nick Message
00:22 gildub joined #gluster
00:24 gildub joined #gluster
00:27 Ark joined #gluster
00:29 firemanxbr joined #gluster
00:33 _pol joined #gluster
00:59 clickclack joined #gluster
01:15 bala joined #gluster
01:15 mjsmith2 joined #gluster
01:21 hchiramm_ joined #gluster
01:23 mjsmith2 joined #gluster
01:35 k3rmat joined #gluster
01:40 hchiramm_ joined #gluster
01:44 sjm joined #gluster
01:44 recidive joined #gluster
01:44 sjm left #gluster
01:44 sjm joined #gluster
02:30 hagarth joined #gluster
02:31 firemanxbr joined #gluster
02:58 bharata-rao joined #gluster
02:59 saurabh joined #gluster
03:13 kkeithley1 joined #gluster
03:22 clickclack left #gluster
03:25 hagarth joined #gluster
03:27 vimal joined #gluster
03:35 itisravi joined #gluster
03:51 _pol joined #gluster
04:00 dusmant joined #gluster
04:04 glusterbot New news from resolvedglusterbugs: [Bug 764655] NetBSD port <https://bugzilla.redhat.com/show_bug.cgi?id=764655>
04:15 vpshastry joined #gluster
04:15 shubhendu joined #gluster
04:17 nishanth joined #gluster
04:20 jrcresawn joined #gluster
04:24 spandit joined #gluster
04:25 kanagaraj joined #gluster
04:28 kkeithley1 joined #gluster
04:32 psharma joined #gluster
04:33 ppai joined #gluster
04:35 ngoswami joined #gluster
04:35 mjsmith2 joined #gluster
04:37 mjsmith2_ joined #gluster
04:43 hagarth joined #gluster
04:44 Ark joined #gluster
04:47 JoeJulian hagarth: When a group of files is accessed after a disconnection on a replicated volume, background self-heals are triggered up to the background-self-heal count. After that, they're healed in the foreground, blocking operations on that file until the heal is completed. Why? Since we have a self-heal daemon, it's no longer critical that the heal happen upon access. If it cannot be backgrounded, how hard would it be to just skip it and use
04:47 JoeJulian the sane brick until the file is finally clean?
04:47 ndarshan joined #gluster
04:49 ramteid joined #gluster
04:55 deepakcs joined #gluster
04:58 Matthaeus joined #gluster
04:58 davinder6 joined #gluster
04:59 meghanam joined #gluster
05:01 hagarth JoeJulian: afrv2 moves self healing to daemons. self-healing upon access does not happen anymore.
05:01 kdhananjay joined #gluster
05:02 JoeJulian That's 3.6, right?
05:03 JoeJulian Or did it all make it into 3.5?
05:03 JoeJulian bug 1021686
05:03 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1021686 unspecified, unspecified, ---, aavati, NEW , refactor AFR module
05:04 hagarth JoeJulian: that is 3.6
05:04 JoeJulian Crap. I need a fix now.
05:05 kkeithley1 joined #gluster
05:09 vpshastry joined #gluster
05:10 hagarth JoeJulian: turn off self-healing from the clients?
05:11 JoeJulian Yeah, I guess that's what I'll have to do.
05:13 JoeJulian I wonder if that could avoid bug 1089758 too.
05:13 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1089758 high, unspecified, ---, pkarampu, ASSIGNED , KVM+Qemu + libgfapi: problem dealing with failover of replica bricks causing disk corruption and vm failure.
05:13 spandit_ joined #gluster
05:13 firemanxbr joined #gluster
05:14 JoeJulian Thanks hagarth. I think I'll head to bed. I think I'm spinning up 2x5PB tomorrow.
05:17 kumar joined #gluster
05:17 haomaiwa_ joined #gluster
05:19 kshlm joined #gluster
05:20 hagarth JoeJulian: np, good luck with that.
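A minimal sketch of what hagarth suggests above (turning off client-side, access-triggered self-heal while the self-heal daemon keeps repairing files in the background); the volume name "myvol" is illustrative:
    # stop clients from healing on access; the self-heal daemon still repairs files
    gluster volume set myvol cluster.data-self-heal off
    gluster volume set myvol cluster.metadata-self-heal off
    gluster volume set myvol cluster.entry-self-heal off
    # keep daemon-driven heals enabled (this is the default)
    gluster volume set myvol cluster.self-heal-daemon on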
05:21 bala joined #gluster
05:26 bala joined #gluster
05:30 sjm left #gluster
05:32 haomaiwa_ joined #gluster
05:35 dusmant joined #gluster
05:36 kkeithley1 joined #gluster
05:38 mjsmith2 joined #gluster
05:44 spiekey joined #gluster
05:45 sputnik13 joined #gluster
05:46 lalatenduM joined #gluster
05:46 aravindavk joined #gluster
05:52 rjoseph joined #gluster
05:55 rastar joined #gluster
06:01 meghanam joined #gluster
06:05 ramteid joined #gluster
06:05 ricky-ti1 joined #gluster
06:05 raghu joined #gluster
06:09 vimal joined #gluster
06:13 davinder7 joined #gluster
06:14 mbukatov joined #gluster
06:34 VerboEse joined #gluster
06:38 mjsmith2 joined #gluster
06:40 ctria joined #gluster
06:50 aravindavk joined #gluster
06:53 lezo joined #gluster
06:55 hagarth joined #gluster
06:55 _abhi joined #gluster
07:02 eseyman joined #gluster
07:03 nshaikh joined #gluster
07:13 ilbot3 joined #gluster
07:13 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
07:14 ppai joined #gluster
07:15 haomai___ joined #gluster
07:16 ProT-0-TypE joined #gluster
07:28 spiekey Hello!
07:28 glusterbot spiekey: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
07:28 spiekey is my replication gluster healthy or broken? http://fpaste.org/106732/14017804/
07:28 glusterbot Title: #106732 Fedora Project Pastebin (at fpaste.org)
07:28 spiekey i am getting this error:  E [afr-self-heal-common.c:233:afr_sh_print_split_brain_log] 0-testvol-replicate-0: Unable to self-heal contents of '<gfid:0b58ac4c-7363-47f5-b8ef-13c8f895543e>' (possible split-brain). Please delete the file from all but the preferred subvolume. - Pending matrix:  [ [ 0 3 ] [ 5 0 ] ]
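For gfid split-brain messages like the one above, the usual manual fix on pre-3.7 releases is to discard the copy you do not trust on one brick and let a heal restore it from the good replica; a rough sketch, where the brick path and file path are illustrative and the gfid comes from the log line:
    # run on the server holding the copy you want to throw away
    BRICK=/path/to/brick                               # illustrative
    GFID=0b58ac4c-7363-47f5-b8ef-13c8f895543e
    rm -f "$BRICK/path/to/the/affected/file"           # if you know the file's path on the brick
    rm -f "$BRICK/.glusterfs/0b/58/$GFID"              # the gfid hard link kept on the brick
    gluster volume heal testvol full                   # then let the good copy be re-replicated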
07:33 ppai joined #gluster
07:33 dusmant joined #gluster
07:34 ngoswami joined #gluster
07:35 ktosiek joined #gluster
07:35 glusterbot New news from newglusterbugs: [Bug 1094815] [FEAT]: User Serviceable Snapshot <https://bugzilla.redhat.com/show_bug.cgi?id=1094815>
07:35 edward1 joined #gluster
07:36 _abhi my application is reporting 0 bytes when immediately reading a file after writing to it
07:36 _abhi I am running 3.5
07:37 _abhi If I however write to a simple NFS mount, it works fine
07:37 _abhi my application runs on windows
07:37 _abhi 2012
07:38 _abhi JoeJulian: can you help me with this?
07:38 mjsmith2 joined #gluster
07:40 mjsmith2_ joined #gluster
07:43 ktosiek_ joined #gluster
07:44 fsimonce joined #gluster
08:03 monotek my system is ubuntu 12.04 with glusterfs 3.4.3 from semiosis ppa.
08:03 monotek after restarting my whole gluster because of a power failure i have some strange behaviour. everything seems to work for the clients but i have errors in the logs regarding xattr and all new files need self-heal, which seems not to work.
08:03 monotek these are the complete logs of client & server while creating 1 new file named "testfile": http://paste.ubuntu.com/7572887/
08:03 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
08:08 andreask joined #gluster
08:20 spiekey its pretty dead here, ey?
08:25 monotek spiekey: all daemons should be online...
08:28 ppai joined #gluster
08:34 liquidat joined #gluster
08:38 mjsmith2 joined #gluster
08:54 bharata-rao joined #gluster
08:57 rastar joined #gluster
08:58 dusmant joined #gluster
09:00 _pol joined #gluster
09:04 vpshastry joined #gluster
09:11 dusmant joined #gluster
09:14 aravindavk joined #gluster
09:20 hagarth joined #gluster
09:21 Thilam Hello, I've a problem with gluster and groups coming from an LDAP server
09:22 Thilam it seems gluster doesn't take into account groups with a new gid
09:22 Thilam or a too-big gid
09:22 gildub joined #gluster
09:22 Thilam it is explained in this thread : http://supercolony.gluster.org/pipermail/gluster-users/2012-November/034883.html
09:22 glusterbot Title: [Gluster-users] gluster or fuse and group rights (at supercolony.gluster.org)
09:22 Thilam but there is no answer
09:23 bala joined #gluster
09:23 Thilam does someone have an idea ?
09:35 kumar joined #gluster
09:44 monotek seems my problem was a gluster 3.5.0 client with a gluster 3.4.3 server. i still have the REMOVEXATTR warnings in my log but the self-heal issue has gone after using the gluster 2.4.2 client again..
09:44 monotek 3.4.2 client
09:47 Thilam I found this thread : http://supercolony.gluster.org/pipermail/gluster-users/2014-May/040323.html
09:47 glusterbot Title: [Gluster-users] 32 group limit (at supercolony.gluster.org)
09:47 spandit_ joined #gluster
09:47 Thilam which indicates users can only belong to a maximum of 32 groups
09:48 Thilam it seems to be a "fuse hardcoded" restriction
09:48 Thilam do you have an idea on how to bypass this restriction in 3.5 version?
09:48 spandit joined #gluster
09:52 ndevos Thilam: http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/6180 explains a solution that will be in 3.6, and hopefully in 3.5.1
09:52 glusterbot Title: Gmane Loom (at thread.gmane.org)
09:53 Thilam ndevos, this indicates it will solve the 93-group limit, but on my side, I'm stuck with a 32-group limit
09:54 Thilam is it the same problem ?
09:54 ndevos Thilam: yes, but the solution is the same: the nfs-server is a gluster client, just like a fuse-mount
09:54 Thilam ok
09:56 vpshastry joined #gluster
09:56 Thilam is it so rare to exceed this limit?
10:00 ndevos not really, it's pretty common, but nobody seems to have complained about it before :-/
10:01 lalatenduM joined #gluster
10:02 Thilam it will defer my deployment in production; most of my users belong to more than 32 groups
10:02 Thilam btw, thx for your answer
10:03 kkeithley_ a few people have complained but for 99+% it's apparently okay
10:05 hagarth joined #gluster
10:05 edward1 joined #gluster
10:06 ndevos it's probably part of gluster becoming more mature and more used in enterprises, small deployments mostly have fewer groups too
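The fix discussed in the thread ndevos linked above is server-side group resolution (mentioned later in this log as manage-gids): the bricks look up the user's groups themselves instead of trusting the 32/93-entry-limited list sent by the FUSE or NFS client. A hedged sketch, assuming a release that carries the option (3.6, or 3.5.1 once released); the volume name is illustrative:
    gluster volume set myvol server.manage-gids on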
10:06 bala joined #gluster
10:07 aravindavk joined #gluster
10:21 ProT-0-TypE joined #gluster
10:25 spiekey can someone point me to a tutorial on how to solve split-brains?  i have: http://fpaste.org/106766/17910971/
10:25 spandit_ joined #gluster
10:25 glusterbot Title: #106766 Fedora Project Pastebin (at fpaste.org)
10:25 spiekey google turns up a few but they all seem different
10:25 spandit joined #gluster
10:25 jag3773 joined #gluster
10:34 kaushal_ joined #gluster
10:38 qdk_ joined #gluster
10:38 mjsmith2 joined #gluster
10:41 Philambdo joined #gluster
10:42 calum_ joined #gluster
10:52 hagarth joined #gluster
10:52 bnh2 joined #gluster
10:52 bnh2 Why does the GlusterFS client use FUSE and not NFS when NFS clearly performs better in read/write?
10:59 Peanut bnh2: that depends on your access pattern, for small read/write NFS wins, but for large file operations, native gluster performs better.
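Both access methods are just different mounts of the same volume; a sketch with illustrative server and volume names (Gluster's built-in NFS server speaks NFSv3):
    # native FUSE client: connects to every brick and handles replication client-side
    mount -t glusterfs server1:/myvol /mnt/myvol
    # NFS client: all traffic goes through the one server you mounted from
    mount -t nfs -o vers=3,tcp server1:/myvol /mnt/myvol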
11:01 _pol joined #gluster
11:07 _abhi Peanut: my windows app reads a 0 byte file immediately after it writes a 10~100 MB file
11:07 _abhi I am running gluster 3.5
11:08 _abhi however, if I mount a simple NFS mount in sync mode, everything works fine
11:08 _abhi what options can I configure to mitigate this
11:09 _abhi Peanut: http://paste.ubuntu.com/7579612/
11:09 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
11:09 bene2 joined #gluster
11:12 karnan joined #gluster
11:12 nshaikh joined #gluster
11:16 bene2 joined #gluster
11:18 jag3773 joined #gluster
11:20 shubhendu joined #gluster
11:23 Norky _abhi, how is the Windows application accessing the Gluster volume?
11:23 dusmant joined #gluster
11:29 spandit joined #gluster
11:29 ndarshan joined #gluster
11:29 spandit_ joined #gluster
11:29 _abhi Norky: hey, windows comes with inbuilt NFS client support
11:30 _abhi so using an UNC path like \\10.x.x.x\storage\...
11:30 _abhi I can also mount it as a drive
11:30 _abhi Norky:  did you look at my gluster options. Is there anything I can change further
11:31 kshlm joined #gluster
11:32 Norky you might have better luck using Samba with the libgfapi component
11:32 nishanth joined #gluster
11:33 _abhi Norky: does redhat support windows for gluster
11:33 Norky also, does Windows' native NFS client support the sync option?
11:34 Norky Red Hat? So you're using Red Hat Storage?
11:34 _abhi Norky: isn’t red hat storage the same as gluster
11:34 _abhi with support
11:34 Norky yes, kinda
11:34 _abhi and all the gluster devs are red hat employees
11:35 _abhi any idea if I can get a developer on this issue ?
11:35 Norky Red Hat Storage is a slightly cut-down RHEL appliance with a particular version of Gluster - more heavily tested but lagging behind 'community' Gluster slightly
11:35 _abhi Norky: here's one such discussion http://social.technet.microsoft.com/forums/windows/en-US/8eb5837f-618a-477a-8fcd-340dfbd42372/nfs-client-performance-readahead-writebehind-and-registry-key-definitions
11:35 glusterbot Title: NFS client performance, read-ahead, write-behind and registry key definitions (at social.technet.microsoft.com)
11:36 Norky if you have bought Red Hat Storage then you should be entitled to commercial support from Red Hat - open a case with them
11:37 _abhi Norky: I am using gluster, not RHS, but I thought you might know someone from the dev team who could look into this
11:37 _abhi Norky: I also set the client nfs cache settings but that does not seem to help
11:38 mjsmith2 joined #gluster
11:39 diegows joined #gluster
11:40 Norky if you are not using RHS then your question " does redhat support windows for gluster" is irrelevant
11:41 Norky in fact RHS comes with Samba and automatically configures it for all Gluster volumes, so it certainly works
11:43 Norky whether you are using RHS or community Gluster, I would suggest trying Samba for Windows clients
11:43 andreask joined #gluster
11:44 davinder8 joined #gluster
11:45 _abhi Norky: that would require me to set up more machines that mount gluster and re-export it via SMB, correct?
11:45 Norky no
11:45 Norky you can run Samba on the Gluster servers
11:47 Norky in fact I *think* you have to do it that way if you're using libgfapi
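A minimal sketch of the Samba-on-the-gluster-servers setup Norky describes, using Samba's vfs_glusterfs (libgfapi) module; the share and volume names are illustrative and the exact parameters depend on your Samba build:
    # on a gluster server, export the volume directly through libgfapi
    cat >> /etc/samba/smb.conf <<'EOF'
    [storage]
        vfs objects = glusterfs
        glusterfs:volume = myvol
        path = /
        read only = no
        kernel share modes = no
    EOF
    service smbd restart    # Samba service name varies by distro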
11:47 Norky what kind of volume are you using?
11:47 hchiramm_ joined #gluster
11:47 Norky distributed? replicated? how many?
11:48 _abhi Norky: distributed replicated, 4 machines, 8 bricks, 4 TB
11:49 Norky have you checked that the file is written to both (assuming 2) replicas before trying to read it?
11:50 Norky I am not 100% certain of this but, with NFS, the client writes data to the Gluster daemon on one server, that one server must then propagate it to other servers as appropriate
11:51 bala joined #gluster
11:51 _abhi Norky: my application is oblivious to the fact that it is writing to network storage. It assumes it is a locally attached disk. Plus, large files (100MB+) don't exhibit this. Only small files (~20-30MB)
11:51 Norky with 'native' FUSE gluster, the client will be connected to all servers, and will write data directly to all appropriate servers
11:51 _abhi Norky: that is what the design docs say as well
11:51 _abhi Norky: my app does not run on linux yet
11:52 _abhi it is not QAed by our team that is
11:52 _abhi it is a java app though
11:52 Norky many applications neither know nor care what the storage is, only that they can open()/fopen()/write()/whatever to some file handle
11:54 Norky so you can at least try to replicate this problem with the same application on a Linux client using both NFS and 'native' Gluster client?
11:55 Norky we have a customer running a large gluster volume as their main general purpose filesystem which they access using a mix of Windows and Linux clients
11:56 Norky RHS. They use Samba for all the Windows clients
11:56 Norky there were a few teething troubles until RH updated the default Samba options, now every Windows program (bar one which is known to be a problem everywhere) works fine
12:00 itisravi joined #gluster
12:01 _abhi Norky: let me give the samba option a try
12:05 ndarshan joined #gluster
12:11 nishanth joined #gluster
12:11 shubhendu joined #gluster
12:12 dusmant joined #gluster
12:26 mjsmith2 joined #gluster
12:26 Ark joined #gluster
12:32 plarsen joined #gluster
12:43 hagarth joined #gluster
12:51 tdasilva joined #gluster
12:52 sroy_ joined #gluster
12:53 davinder8 joined #gluster
12:56 firemanxbr joined #gluster
12:57 sroy_ joined #gluster
13:14 Slashman joined #gluster
13:15 firemanxbr joined #gluster
13:17 japuzzo joined #gluster
13:21 dusmant joined #gluster
13:21 _abhi Norky: seems to work with CIFS
13:21 _abhi thanks a lot for your help
13:28 recidive joined #gluster
13:33 mjsmith2 joined #gluster
13:36 jmarley joined #gluster
13:36 jmarley joined #gluster
13:38 jdarcy joined #gluster
13:42 hagarth joined #gluster
13:46 Philambdo joined #gluster
13:51 ricky-ticky1 joined #gluster
13:53 daMaestro joined #gluster
13:55 JoeJulian Norky: reading through the scrollback, you can now be 100% certain.
13:56 JoeJulian _abhi: Why not make your java app use libgfapi and avoid the whole samba/nfs thing?
13:58 _abhi JoeJulian: Ohh man! I will have to put up a fight with the entire product mgmt for that. Not gonna work :)
13:59 hagarth _abhi: let us know if you need some backing in your fight with your product management ;)
14:02 JoeJulian Seems like an easy fight to me. Change the filename/path and avoid several layers of abstraction and performance degradation.
14:03 tdasilva left #gluster
14:03 _abhi JoeJulian: We do not have any customers who use RHS. We use gluster in our cloud deployments until we have an S3 integration in place.
14:05 jobewan joined #gluster
14:06 JoeJulian _abhi: irrelevant
14:08 coredump joined #gluster
14:09 mortuar joined #gluster
14:12 sroy_ joined #gluster
14:12 lmickh joined #gluster
14:17 bennyturns joined #gluster
14:19 wushudoin joined #gluster
14:23 haomaiwa_ joined #gluster
14:25 Intensity joined #gluster
14:26 gmcwhistler joined #gluster
14:27 gmcwhistler joined #gluster
14:28 saurabh joined #gluster
14:32 gmcwhist_ joined #gluster
14:39 haomaiw__ joined #gluster
14:41 theron joined #gluster
14:47 ndk joined #gluster
14:51 sroy_ joined #gluster
14:53 nage joined #gluster
15:01 rwheeler joined #gluster
15:04 Ark joined #gluster
15:05 deepakcs joined #gluster
15:06 firemanxbr joined #gluster
15:11 ccha2 ndevos: about manage-gids, I custom-patched it with GD_OP_VERSION_MAX at 4 and created rpms; with a glusterfs 3.5.0 client the aux groups work too... is that normal?
15:14 sputnik13 joined #gluster
15:18 ccha2 oh, your patch for client.c is for not sending... so I think with a 3.5.0 client, gids from the clients are ignored but it works, right?
15:19 davinder8 joined #gluster
15:19 Norky JoeJulian, thank you for the confirmation
15:21 ghenry joined #gluster
15:23 jag3773 joined #gluster
15:26 Thilam|work joined #gluster
15:29 Thilam joined #gluster
15:29 theron joined #gluster
15:29 sprachgenerator joined #gluster
15:33 ndevos ccha2: yeah, it ignores the groups the client sends, and setting GD_OP_VERSION_MAX to 4 in an additional patch to 3.5.1 should indeed work
15:34 ndevos ccha2: KP sent a patch to increase the op-version in 3.6 to 360, if that gets accepted we can make the op-version in 3.5.1 to 351 and a next beta/release should work correctly
15:35 aravindavk joined #gluster
15:35 Thilam joined #gluster
15:37 ndevos correction, not KP, but Kaushal M - http://review.gluster.org/7963
15:37 glusterbot Title: Gerrit Code Review (at review.gluster.org)
15:37 ccha2 yes I saw that
15:37 ccha2 do you know when 3.5.1 will be released?
15:38 Matthaeus joined #gluster
15:39 ndevos no, I've not got many reports from testers... maybe it's good except for the op-version issue?
15:48 pdrakeweb joined #gluster
15:54 mortuar joined #gluster
16:01 Pupeno joined #gluster
16:03 morse joined #gluster
16:05 primechuck joined #gluster
16:09 haomaiwang joined #gluster
16:10 andreask joined #gluster
16:11 Ark joined #gluster
16:12 lalatenduM joined #gluster
16:13 raghu` joined #gluster
16:15 Ark joined #gluster
16:22 Mo_ joined #gluster
16:25 _Bryan_ joined #gluster
16:26 vpshastry joined #gluster
16:35 haomaiwa_ joined #gluster
16:42 jbd1 joined #gluster
16:44 haomai___ joined #gluster
16:50 sjm joined #gluster
16:57 kumar joined #gluster
16:59 ramteid joined #gluster
17:04 vpshastry joined #gluster
17:11 sjusthome joined #gluster
17:11 kmeek joined #gluster
17:11 sputnik13 joined #gluster
17:11 kmeek I'm trying to create a replicated volume -- but keep getting this error:  0-management: Stage failed on operation 'Volume Create', Status : -1
17:13 kmeek Actually the error just says it failed with no explanation.  That message is in the peer log.
17:13 kmeek Command I'm using is: sudo gluster volume create repl-vol  replica 2 vjay:/big1/timemachine myth07:/timemachine
17:19 dberry joined #gluster
17:22 semiosis maybe try without a hyphen in the volume name?
17:22 semiosis also you should try to be consistent with your brick names, so paths are the same for the same file on both replicas
17:23 semiosis for example gluster volume create myreplvol server1:/path/to/brickA server2:/path/to/brickA
17:23 semiosis then any file in the volume will have the same path on both servers, /path/to/brickA/the/file
17:25 semiosis kmeek: ^^
17:27 kmeek OK -- I'll give those ideas a try -- thanks
17:27 semiosis yw
17:27 pdrakeweb joined #gluster
17:30 kmeek Got the same error using this: sudo gluster volume create testvol  replica 2 vjay:/big1/testvol myth07:/big1/testvol
17:30 kmeek I don't have dedicated partitions for the bricks  -- is that a problem
17:31 kmeek I'm just creating directories on existing partitions to try and test things out.
17:31 semiosis might cause a warning message but should still work ok
17:31 semiosis please put the etc-glusterfs-glusterd.log files from both servers on pastie.org & give the link
17:31 semiosis those are in /var/log/glusterfs
17:31 semiosis or something like that
17:32 semiosis also, what version of glusterfs?  what linux distro version?
17:32 semiosis and are those the exact same on both machines?
17:32 kmeek glusterfs 3.4.2 built on Jan 14 2014 18:05:37
17:32 kmeek Ubuntu 14.04
17:33 semiosis well fwiw you might want to use the latest version from the ,,(ppa)
17:33 glusterbot The official glusterfs packages for Ubuntu are available here: 3.4 stable: http://goo.gl/u33hy -- 3.5 stable: http://goo.gl/cVPqEH -- introducing QEMU with GlusterFS 3.4 support: http://goo.gl/7I8WN4
17:33 semiosis although 3.4.2 in trusty was a very solid release
17:33 semiosis it's no longer current
17:35 semiosis but that shouldn't be causing this problem
17:35 kmeek http://pastie.org/private/sovkninnckx9pagpfjpmqq
17:35 glusterbot Title: Private Paste - Pastie (at pastie.org)
17:35 hagarth joined #gluster
17:42 _pol joined #gluster
17:44 kmeek I'll try upgrading
17:45 giannello joined #gluster
17:49 rotbeard joined #gluster
17:52 kmeek I upgraded to 3.5 using your PPA -- thanks
17:52 kmeek It now gives better error message: sudo gluster volume create testvol1  replica 2 vjay:/big1/testvol1 myth07:/big1/testvol1
17:52 kmeek volume create: testvol1: failed: Staging failed on myth07. Error: The brick myth07:/big1/testvol1 is being created in the root partition. It is recommended that you don't use the system's root partition for storage backend. Or use 'force' at the end of the command if you want to override this behavior.
17:53 kmeek and if I do force it works.
17:53 semiosis hmmm, i wonder if that was causing the problem on 3.4.2
17:58 giannello kmeek, as the error says, you should avoid placing a brick in your root partition
17:59 giannello create another partition, use LVM, whatever, but don't use your root partition
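A rough sketch of giving a brick its own filesystem with LVM instead of living on the root partition; devices, sizes and paths are illustrative:
    pvcreate /dev/sdb
    vgcreate gluster_vg /dev/sdb
    lvcreate -n brick1 -L 100G gluster_vg
    mkfs.xfs -i size=512 /dev/gluster_vg/brick1       # XFS is the usual choice for bricks
    mkdir -p /bricks/brick1
    mount /dev/gluster_vg/brick1 /bricks/brick1       # plus a matching /etc/fstab entry
    gluster volume create testvol1 replica 2 vjay:/bricks/brick1/testvol1 myth07:/bricks/brick1/testvol1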
18:01 Ark joined #gluster
18:08 ctria joined #gluster
18:11 cdunda joined #gluster
18:11 cdunda Hello
18:11 _pol joined #gluster
18:11 glusterbot cdunda: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
18:13 cdunda ok so i made a mess of our gluster setup. I have approx 20 volumes, each with 2 bricks. The bricks are configured by IP address. We are on EC2 and needed to resize our servers, causing our two gluster servers to change their IP addresses.
18:15 cdunda since each brick is configured via the non-existent IP, everything is broken. How can I add new bricks and replicate the data? Going forward we are going to change to using hostnames. We are on ubuntu 12.04
18:19 cdunda or how can i change the current bricks to use the new IP address or hostname?
18:24 Ark joined #gluster
18:27 spiekey joined #gluster
18:32 semiosis cdunda: probably need to do a search & replace on your volfiles, while all gluster processes are stopped/killed
18:33 semiosis thats in /var/lib/glusterd on your servers
18:34 semiosis and for hostnames I strongly recommend making dedicated gluster hostnames (gluster1.your.domain for example) and CNAMEing those to the public-hostname of the gluster servers in EC2
18:34 cdunda semiosis: Thank you. I noticed the old IP strewn throughout the volfiles and other files... I'll give it a shot
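A cautious sketch of the search-and-replace semiosis suggests; the old address and the new name are illustrative, and it has to be done on every server with all gluster processes stopped:
    service glusterfs-server stop                      # Ubuntu/PPA service name
    pkill glusterfsd; pkill glusterfs                  # make sure brick and client processes are gone
    grep -rl '10.0.0.11' /var/lib/glusterd | xargs sed -i 's/10\.0\.0\.11/gluster1.your.domain/g'
    # note: files under /var/lib/glusterd/vols/*/bricks/ are named after the old
    # address and may need renaming as well
    service glusterfs-server start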
18:35 _pol joined #gluster
18:38 hagarth joined #gluster
18:43 Ark_ joined #gluster
18:58 lalatenduM joined #gluster
19:13 edward1 joined #gluster
19:24 _dist joined #gluster
19:25 spiekey joined #gluster
19:34 ndk` joined #gluster
19:37 Matthaeus joined #gluster
19:43 Philambdo joined #gluster
19:48 qdk_ joined #gluster
19:49 milu joined #gluster
19:50 milu hi All!
19:50 milu is snapshot really available on 3.5?
19:50 milu CLI shows nothing about snapshots...
19:50 milu even when in code you can see something related to snaps
19:50 maduser joined #gluster
20:00 gildub joined #gluster
20:10 recidive joined #gluster
20:30 markd_ joined #gluster
20:31 japuzzo joined #gluster
20:36 rwheeler joined #gluster
20:40 mdavidson joined #gluster
20:41 markd_ joined #gluster
20:43 in joined #gluster
20:44 Matthaeus joined #gluster
20:44 calum_ joined #gluster
21:09 qdk_ joined #gluster
21:13 johnmark joined #gluster
21:17 andreask joined #gluster
21:21 bene2 joined #gluster
21:22 JoeJulian semiosis... I'm using your ppa... (sigh)
21:23 semiosis LMAO!
21:23 semiosis they're an ubuntu shop?
21:23 JoeJulian yeah
21:23 JoeJulian and nobody's happy about it.
21:24 semiosis nobody BUT ME
21:24 semiosis largest gluster deploy uses my packages :D
21:25 JoeJulian There's a feather for your cap.
21:25 semiosis hey, do you use IPoIB?
21:25 semiosis someone reported last week that the IPoIB stuff in ubuntu causes gluster mounts to fail at boot
21:25 semiosis maybe we can work together to get that bug fixed
21:26 semiosis s/mounts/IPoIB mounts/
21:26 glusterbot semiosis: Error: I couldn't find a message matching that criteria in my history of 1000 messages.
21:26 semiosis glusterbot: meh
21:26 glusterbot semiosis: I'm not happy about it either
21:26 badone joined #gluster
21:29 bene2 semiosis, I never had ipoib cause a problem for Gluster on RHEL.   But you have to make sure the ipoib network is really up, that includes subnet manager
21:29 JoeJulian semiosis: nope, 10gig
21:30 semiosis bene2: right, so the way mounts work in ubuntu is pretty unique, and blocking a mount task until "the network is up" takes some care.  since i dont have any IB hardware to test with, it's not surprising that it doesnt block the mount correctly
21:32 semiosis JoeJulian: so, how about doing a case study for gluster.org?
21:32 JoeJulian Eventually... sheesh, I just got here... :P
21:32 semiosis i'll remind you down the road
21:33 _dist semiosis: it's not every install, I never bothered tracking it down because I changed to debian but it was like 50/50 of my ubuntu installs that had the issue
21:33 semiosis _dist: ??? what issue?
21:33 * semiosis lacks context
21:33 _dist the gluster mounts failing at boot
21:33 semiosis with ethernet or IB?
21:34 _dist both honestly, you've never seen it in ethernet?
21:34 semiosis seen it, thought i solved it
21:34 _dist well you probably did, I haven't used ubuntu for gluster in about 3-4 months
21:34 semiosis i felt real confident about the solution i put into the trusty universe packages, the current solution in the ppa packages
21:35 semiosis i thought that nailed it for good (until i found out about the IPoIB issue last week)
21:35 _dist (actually it's not fair to say we don't use it) we do have some using gluster on ubuntu strictly as a client.
21:36 semiosis and how well do the mounts work at boot?
21:36 semiosis mount at boot is strictly a client concern
21:36 _dist always, I actually just rebooted one, but these are over ethernet only
21:37 semiosis ok great
21:37 _dist (sorry for the contradiction, that one I just rebooted is running raring)
21:38 semiosis you really ought to upgrade that.  raring doesnt get any updates anymore
21:39 _dist I know :) it's a whole other issue to do with fears about magic spells
21:39 _dist test upgrade version is running already
21:45 brad[] joined #gluster
21:47 bene2 semiosis, with RHEL it was necessary to add an option like _netdev to /etc/fstab; is there a similar thing in Ubuntu?
21:47 semiosis it's quite different
21:47 semiosis although both rhel6 and ubuntu use upstart for init, they handle mounts very differently
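On RHEL the fstab entry bene2 describes would look roughly like the line below (server and volume names are illustrative); Ubuntu reads the same fstab syntax but, as semiosis notes, handles the wait-for-network part differently:
    echo 'server1:/myvol  /mnt/myvol  glusterfs  defaults,_netdev  0  0' | sudo tee -a /etc/fstab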
21:51 brad[] This will be the FAQ of the decade but I'm looking at using GlusterFS as the backing store for a VM pool. I'm assuming stripe+replicate is the method I'd want to use?
21:51 brad[] I'm not going for stripe for performance reasons, I understand the issues there
21:51 semiosis you probably don't want stripe
21:51 semiosis distributed-replicated is the most common
21:52 brad[] So I'm wondering how GlusterFS handles a VM image of say 100GB in size, with lots of small write operations going on inside it
21:52 brad[] Are the blocks updated individually or does it have to resync the entire thing to the replica?
21:52 semiosis clients write to all replicas at the same time
21:53 _dist brad[]: I'm currently using glusterfs replicate for about 30 VMs in a single volume. Using the non-fuse libgfapi though
21:53 brad[] ah, ok
21:53 semiosis so a client makes writes to bytes in a file, those ops are sent to all the replica bricks at the same time
21:53 brad[] _dist: interesting. What does your layout look like if I may ask? disks/server, is there RAID backing it, etc
21:54 brad[] semiosis: okay, that makes more sense
21:54 brad[] semiosis: and a rebalance would of course copy the entire thing, but that's to be expected
21:54 semiosis if a replica goes down then it will need to be healed, which by default does a diff of the files on the replicas
21:54 semiosis and syncs the different bytes
21:55 semiosis rebalance is not a common operation, it's only needed after doing add-brick to expand a volume.  but there are other ways to expand a volume, and I recommend avoiding rebalance whenever possible
21:55 brad[] semiosis: What if I have 3 200GB bricks and ....dang I can't do the mental math fast enough. Is there a case where a single large file that is NOT larger than a brick would span multiple bricks?
21:55 semiosis i've been running gluster in prod for 3 years, expanding along the way, and never had to do a rebalance (fortunately)
21:55 brad[] Nice
21:55 _dist brad[]: my biggest VM is about 2T (oracle). The backing filesystem is currently zfs raidz3, which is comparable to raid7. The network portion is important; we're using 10GbE, and 4 bonded 1GbE links probably would have been enough
21:56 semiosis a file is placed on some brick (or replica set) or another.  the whole file lives there.  if you try to grow a file larger than the brick (or replica set) that contains it, you will fail
21:57 semiosis so a file never spans multiple bricks, unless you use stripe, which you probably shouldn't
21:57 brad[] hah
21:57 brad[] ok
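A sketch of the distributed-replicated layout under discussion, with illustrative names: with "replica 2" the bricks are paired in the order given, so srv1/srv2 and srv3/srv4 each hold one copy of the files that hash to their pair, and files are distributed across the two pairs:
    gluster volume create vmvol replica 2 \
        srv1:/bricks/b1/vmvol srv2:/bricks/b1/vmvol \
        srv3:/bricks/b1/vmvol srv4:/bricks/b1/vmvol
    gluster volume start vmvol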
21:58 brad[] am thinking of using 4x4TB enterprise drives in RAID1 as a single brick so I doubt I'd quickly encounter such a scenario
21:58 primechuck joined #gluster
21:58 brad[] I'm assuming the recommendation not to use individual drives but rather have RAID back things still holds true?
21:58 semiosis depends on your needs
21:59 semiosis having more, smaller, bricks per server can increase performance, and reduce time to heal if a brick needs to be replaced
21:59 brad[] _dist: thanks btw
21:59 _dist brad[]: I'd probably use raid 10 instead of 1, unless the write speed of 1 drive is fast enough
21:59 brad[] oh sorry, meant to type RAID10.
22:00 primechuck joined #gluster
22:00 brad[] semiosis: So I was pondering 4 virtual machine servers with 4 drive bays each populated....so one brick per drive should be ok in that case? I imagine if I had a 48 bay chassis things would change
22:01 brad[] I'm looking to avoid the double cost of RAID and Gluster replication if I can
22:02 semiosis it depends on two things imo, 1) how large are the files in your volume, and 2) what kind of single thread performance do you need
22:03 brad[] The biggest virtual disk I've currently got is 500GB in size
22:03 semiosis so you can fit 8 of those on a 4TB drive, roughly
22:03 brad[] Those are the largest contiguous files
22:03 semiosis and you have no control over where gluster stores things.  in large numbers the files tend to be evenly distributed across the bricks (or repl sets)
22:05 semiosis if you only have 12 files, you might not end up with exactly 3 on each of 4 bricks, and even if you do, you might get 3 huge files on one brick and 3 tiny files on another
22:05 semiosis :)
22:05 brad[] okay
22:06 brad[] you mentioned writes happen to all nodes/bricks at once - do reads do same?
22:06 brad[] and does that mean the fastest read is the slowest node?
22:06 semiosis gluster tries to be smart about reads and balance between the replicas
22:06 semiosis i'm not current on exactly how
22:07 semiosis maybe someone else can fill in
22:08 _dist I can say that in my replica setup I definitely get 2x read speed vs write speed (which makes sense)
22:11 brad[] interesting
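For completeness, AFR does expose a couple of tunables that influence which replica serves reads; treat this as a hedged pointer rather than a recipe, since exact behaviour varies by release (volume name is illustrative):
    gluster volume set myvol cluster.read-hash-mode 2     # spread reads across replicas by hash
    gluster volume set myvol cluster.choose-local off     # don't always prefer a local brick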
22:21 sputnik13 joined #gluster
22:27 ira joined #gluster
22:30 Matthaeus joined #gluster
22:59 fidevo joined #gluster
23:11 efries joined #gluster
23:13 iktinos joined #gluster
23:14 borreman_123 joined #gluster
23:16 foobar_ joined #gluster
23:16 the-me_ joined #gluster
23:16 delhage_ joined #gluster
23:16 overclk_ joined #gluster
23:16 xymox_ joined #gluster
23:19 NCommand` joined #gluster
23:19 troj_ joined #gluster
23:20 k3rmat joined #gluster
23:20 masterzen_ joined #gluster
23:20 radez` joined #gluster
23:20 tjikkun joined #gluster
23:20 tomased joined #gluster
23:20 firemanxbr joined #gluster
23:20 tjikkun joined #gluster
23:27 borreman_dk joined #gluster
23:32 pdrakeweb joined #gluster
23:36 sjm joined #gluster
23:38 badone joined #gluster
23:45 badone_ joined #gluster
23:55 badone joined #gluster
