
IRC log for #gluster, 2012-11-20


All times shown according to UTC.

Time Nick Message
00:00 TSM so thats the speed of a single link which is fine
00:00 blendedbychris wait huh?
00:00 TSM await is just latency from what i remember
00:01 blendedbychris that should be using the bonded interface
00:02 blendedbychris http://pastie.org/private/x6vvxbneunq71gzgdwhsq
00:02 glusterbot Title: Private Paste - Pastie (at pastie.org)
00:02 TSM does not matter, iperf is prob using a TCP connection to test speed so it will fall under one connection
00:02 blendedbychris oh shit okay you are telling me stuff i don't know
00:02 blendedbychris so i have to use udp?
00:02 blendedbychris I've never used a bonded nic so bear with me
00:03 TSM just run two iperfs that should prob do it i think
00:07 blendedbychris TSM: nah that splits it in half
00:07 blendedbychris fudge
00:07 kminooie joeJulian: so I am mounting this volume on both 32bit and 64 bit systems, can that be the reason? and if so should using ino32 fix this?
00:09 JoeJulian kminooie: I'm not positive, but based on my understanding of what's causing that problem and other bugs that have been reported, I am making an educated guess.
00:09 kminooie ok
00:11 JoeJulian TSM, blendedbychris: I also haven't done Link Aggregation yet, but why would a tcp connection only use one channel if the aggregation happens at layer 2?
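TSM's point is that bonding typically hashes traffic per flow, so a single TCP stream (which is what a default iperf run uses) rides one slave link and tops out at single-link speed. A quick way to probe aggregate throughput is iperf's parallel-stream flag; a minimal sketch, assuming iperf 2.x and placeholder addresses, and noting that whether the streams actually spread across slaves depends on the bonding mode and xmit_hash_policy:

    # on the receiving host
    iperf -s
    # on the sending host: 4 parallel TCP streams for 30 seconds
    iperf -c 10.0.0.2 -P 4 -t 30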
00:11 tryggvil joined #gluster
00:12 kminooie ok so this just happened:
00:12 kminooie [root@alan-pc ~]# su - apache
00:12 kminooie -bash-4.2$ cd /srv/storage/
00:12 kminooie -bash-4.2$ ll
00:12 kminooie total 8
00:12 kminooie drwxr-xr-x. 6 alan tape 4096 Nov 15 11:00 adsimages
00:12 kminooie -bash-4.2$ cd adsimages/
00:12 kminooie -bash-4.2$ ll
00:12 kminooie ls: reading directory .: Stale NFS file handle
00:12 kminooie total 16
00:12 kminooie drwxrwxr-x.   3 root   tape 4096 Nov 15 16:14 images
00:12 kminooie drwxr-xr-x. 124 apache root 4096 Nov 19 13:13 offerimages
00:12 JoeJulian Please use pastebins  like fpaste
00:13 HeMan joined #gluster
00:13 JoeJulian Rats
00:14 kminooie sorry http://www.fpaste.org/ji8v/
00:14 glusterbot Title: Viewing Paste #253552 (at www.fpaste.org)
00:14 kminooie it goes stale the moment i try to create something on root of the mount point
00:15 kminooie this has been very common for me so far
00:15 JoeJulian And there's nothing interesting in /var/log/glusterfs/nfs.log on the server that client is connecting to?
00:16 JoeJulian 'cause it sure sounds like split-brain or one other bug that's been reported and is fixed in master.
00:17 kminooie see for yourself http://www.fpaste.org/KvgO/
00:17 glusterbot Title: Viewing Paste #253554 (at www.fpaste.org)
00:20 JoeJulian kminooie: Ok, how about the brick logs around 2012-11-19 16:11:02
00:21 JoeJulian I'm curious about those self-heal operations too, but I'll start here.
00:21 clag_ joined #gluster
00:23 kminooie http://www.fpaste.org/B1It/
00:23 glusterbot Title: Viewing Paste #253555 (at www.fpaste.org)
00:24 blendedbychris semiosis: I thought your upstart fix was supposed to address mounting prior to network interface being available?
00:24 blendedbychris I always get that it can't be mounted
00:25 nightwalk joined #gluster
00:26 kminooie I would appreciate if u can tell me about any other issues you might have noticed in the log files as well, as I am new to gluster and i need to make sure that i can use it safely in a production environment
00:36 semiosis blendedbychris: i've heard that doesnt work with bridged interfaces, maybe your bond is trouble too, idk how that stuff works
00:37 semiosis furthermore, it doesnt actually have to do with interfaces, but instead with glusterd, if glusterd is present on the client system
00:37 semiosis anyway, it's complicated
00:37 blendedbychris ?
00:37 semiosis as for what it was supposed to address, simply mounting from localhost at boot time, in the simplest of circumstances
00:38 semiosis ? indeed
00:38 blendedbychris oh sorry i was mixing the bridged info you gave me
00:38 kminooie ok this is new now even getting an ls would cause it to go stale http://www.fpaste.org/HCsK/
00:38 glusterbot Title: Viewing Paste #253557 (at www.fpaste.org)
00:39 semiosis blendedbychris: i gave you bridged info?  dont recall...
00:39 kminooie the only thing that has changed is that "set nfs.enable-ino32 on"
00:39 blendedbychris ya i mean literally the thing right above what you said heh?
00:39 kminooie also during that last fpaste that volume wasn't mounted anywhere else and nothing in the logfiles
00:40 blendedbychris semiosis: currently my system gets hung up on fstab because it tries to mount too soon glusterfs vols
00:40 semiosis blendedbychris: quick hack fix: add "sleep 5" or something like that to the top of /sbin/mount.glusterfs
00:40 kminooie blendedbychris: user _netdev
00:40 semiosis real hacky
00:40 semiosis blendedbychris: you're on ubuntu right?  using my packages?
00:40 blendedbychris nobootwait?
00:41 kminooie that was :use _netdev
00:41 semiosis kminooie: that doesnt do anything on ubuntu :)
00:41 blendedbychris kminooie: same thing?
00:41 blendedbychris semiosis: yes
00:41 semiosis blendedbychris: nobootwait is a safety valve, the mount can hang but your system will still boot
00:41 kminooie ok didn't know that
00:41 semiosis a good idea, but not your fix
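For reference, a minimal sketch of the two approaches just mentioned, assuming Ubuntu's mountall option names and placeholder host, volume and paths; nobootwait keeps a hung mount from blocking boot, while the sleep hack delays the mount helper itself:

    # /etc/fstab entry on the Ubuntu client
    server1:/myvol  /mnt/myvol  glusterfs  defaults,nobootwait  0  0

    # semiosis' quick hack: add a short delay right after the shebang of the mount helper
    sed -i '1a sleep 5' /sbin/mount.glusterfs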
00:42 semiosis kminooie: yeah upstart does things weirdly
00:42 semiosis s/upstart/ubuntu's variant of upstart/
00:42 glusterbot What semiosis meant to say was: kminooie: yeah ubuntu's variant of upstart does things weirdly
00:43 kminooie this glusterbot is really funny :)
00:44 semiosis glusterbot: meh
00:44 glusterbot semiosis: I'm not happy about it either
00:45 JoeJulian ok, I'm back... had a little crisis pop up here.
00:46 semiosis glusterbot is such a hater
00:47 semiosis http://cdn.rackerhacker.com/wp-content/uploads/2012/02/haters_gonna_hate_elephhant-238x300.jpg
00:47 glusterbot <http://goo.gl/1Achc> (at cdn.rackerhacker.com)
00:47 nightwalk joined #gluster
00:48 kminooie joejulian: so I don't know if you got this but after that set nfs.enable-ino32 on this is the situation http://www.fpaste.org/HCsK/    and the volume is not mounted anywhere else and nothing in the log files
00:48 glusterbot Title: Viewing Paste #253557 (at www.fpaste.org)
00:52 JoeJulian What kernel version on your servers?
00:54 kminooie Linux srv3 2.6.32-5-amd64 #1 SMP Sun Sep 23 10:07:46 UTC 2012 x86_64 GNU/Linux  (both of them)
00:56 kminooie debian squeeze by the way
00:56 JoeJulian On the servers, check that /srv/gluster/adsimages/.glusterfs/00/00/00*0001 in a symlink.
00:56 JoeJulian s/in/is/
00:56 glusterbot What JoeJulian meant to say was: On the servers, check that /srv/gluster/adsimages/.glusterfs/00/00/00*0001 is a symlink.
00:58 kminooie root@srv3:/srv/gluster/adsimages/.glusterfs/00/00# ll
00:58 kminooie total 0
00:58 kminooie lrwxrwxrwx 1 root root 8 Nov 14 13:52 00000000-0000-0000-0000-000000000001 -> ../../..
00:59 kminooie same thing on the other one
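For anyone hitting the same thing, the check JoeJulian asked for amounts to this on each brick (brick path from kminooie's setup; substitute your own). The root gfid entry should be a symlink pointing back at the brick root:

    ls -l /srv/gluster/adsimages/.glusterfs/00/00/00000000-0000-0000-0000-000000000001
    # healthy output ends with: 00000000-0000-0000-0000-000000000001 -> ../../..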
01:00 JoeJulian Dangit! I want to figure this out but I have a train to catch. I'll be back after I get home.
01:01 kminooie by the way i turned the nfs.enable-ino32 off and now it won't go stale after a ls anymore
01:02 kminooie kminooie@gamil.com   email if you think of something thanks.
01:02 kevein joined #gluster
01:02 kminooie that was gmail
01:04 redsolar joined #gluster
01:15 nick5 joined #gluster
01:16 nightwalk joined #gluster
01:20 robo joined #gluster
01:50 t35t0r joined #gluster
01:50 kminooie left #gluster
02:18 nightwalk joined #gluster
02:22 blendedbychris joined #gluster
02:22 blendedbychris joined #gluster
02:23 bharata joined #gluster
02:27 Technicool joined #gluster
02:32 designbybeck joined #gluster
02:49 sunus joined #gluster
03:16 nightwalk joined #gluster
03:48 sripathi joined #gluster
04:04 raghu joined #gluster
04:20 sgowda joined #gluster
04:56 vpshastry joined #gluster
05:03 hagarth joined #gluster
05:09 SpeeR joined #gluster
05:19 bulde joined #gluster
05:21 bharata joined #gluster
05:25 deepakcs joined #gluster
05:31 chacken joined #gluster
05:32 mohankumar joined #gluster
05:56 overclk joined #gluster
06:04 hagarth joined #gluster
06:13 shireesh joined #gluster
06:32 bala1 joined #gluster
06:47 mdarade1 joined #gluster
06:48 nightwalk joined #gluster
06:48 ramkrsna joined #gluster
06:48 ramkrsna joined #gluster
06:54 quillo joined #gluster
06:58 ngoswami joined #gluster
06:59 sripathi1 joined #gluster
07:09 inodb^ joined #gluster
07:33 lkoranda joined #gluster
07:35 rudimeyer joined #gluster
07:38 rudimeyer joined #gluster
07:39 inodb_ joined #gluster
07:45 inodb^ joined #gluster
07:46 quillo joined #gluster
07:47 puebele1 joined #gluster
07:48 ctria joined #gluster
07:49 inodb_ joined #gluster
07:53 quillo joined #gluster
07:53 layer3switch joined #gluster
07:57 glusterbot New news from resolvedglusterbugs: [Bug 832396] nfs server asserted in inode_path since gfid of the inode is NULL <http://goo.gl/5adP5>
08:15 sjoeboo joined #gluster
08:24 sshaaf joined #gluster
08:26 inodb^ joined #gluster
08:28 kleind Hi. Is there a settings that allows for preferred local reads if a server is also a client? I find references to an option "read-subvolume", but cannot find how to set it (if available in 3.3 at all).
08:29 kleind I have a test setup with 4 nodes serving a distributed replicated volume. if server1 mounts the volume and accesses a file, it reads it from server3 and not locally.
08:29 kleind with my current hardware, this is way slower than reading locally.
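If the replicate translator's read-subvolume option is exposed through volume set in your 3.3 build, the idea would look roughly like the sketch below; both the option name and the value format (the per-brick client translator name, e.g. testvol-client-0) are assumptions to verify against gluster volume set help first.

    gluster volume set help | grep -i read-subvolume
    # hypothetical: prefer reads from the first brick's client translator
    gluster volume set testvol cluster.read-subvolume testvol-client-0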
08:30 morse joined #gluster
08:35 ika2810 joined #gluster
08:35 ika2810 joined #gluster
08:36 ika2810 left #gluster
08:41 andreask joined #gluster
08:42 TheHaven joined #gluster
08:42 dobber joined #gluster
08:58 puebele1 joined #gluster
09:05 Azrael808 joined #gluster
09:11 duerF joined #gluster
09:21 mdarade1 joined #gluster
09:21 ngoswami joined #gluster
09:21 ramkrsna joined #gluster
09:27 sripathi joined #gluster
09:37 16WABCOTJ joined #gluster
09:38 hagarth joined #gluster
09:40 nightwalk joined #gluster
09:45 sripathi joined #gluster
09:52 nueces joined #gluster
09:59 saz_ joined #gluster
10:07 tryggvil joined #gluster
10:09 nightwalk joined #gluster
10:18 pithagorians joined #gluster
10:24 mdarade1 left #gluster
10:28 pithagorians hello all. what would make a GlusterFS partition to unmount itself ?
10:29 ndevos pithagorians: when you use the glusterfs native client (fuse) the volume gets unmounted when the process exits
10:34 tryggvil joined #gluster
10:37 bauruine joined #gluster
10:50 pithagorians the problem it is mounted
10:50 pithagorians it has an entry into /etc/fstab
10:51 pithagorians and even so, sometimes the partition is unmounting
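Following up on ndevos' point: a fuse mount only lives as long as its glusterfs client process, so an fstab entry will not bring it back if that process dies. A few things worth checking when it happens again (mount point and log name are placeholders; the client log is normally named after the mount point):

    # is the client process for this mount point still running?
    ps aux | grep '[g]lusterfs' | grep /mnt/myvol
    # look for a crash, disconnect or OOM kill around the time it disappeared
    tail -n 100 /var/log/glusterfs/mnt-myvol.log
    dmesg | grep -i -E 'killed process|glusterfs'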
11:00 tjikkun_work joined #gluster
11:13 manik joined #gluster
11:18 flowouffff joined #gluster
11:20 ramkrsna joined #gluster
11:20 ramkrsna joined #gluster
11:28 kspr joined #gluster
11:35 overclk joined #gluster
11:55 pithagorians joined #gluster
11:59 kkeithley1 joined #gluster
12:09 overclk joined #gluster
12:14 shireesh joined #gluster
12:15 edward1 joined #gluster
12:21 pithagorians left #gluster
12:21 pithagorians joined #gluster
12:23 davdunc joined #gluster
12:23 davdunc joined #gluster
12:24 designbybeck joined #gluster
12:25 tryggvil joined #gluster
12:29 bauruine joined #gluster
12:33 inodb_ joined #gluster
12:37 andreask joined #gluster
12:51 trapni joined #gluster
12:53 jeffrin joined #gluster
12:55 jeffrin hello all
12:55 jeffrin happy hacking !
12:59 inodb_ joined #gluster
13:03 duerF joined #gluster
13:05 ally1 joined #gluster
13:18 quillo joined #gluster
13:18 shireesh joined #gluster
13:22 tjikkun_work joined #gluster
13:27 toruonu joined #gluster
13:30 webwurst joined #gluster
13:33 webwurst hi! what happens if i add a non-empty directory as a brick to a volume? will it be merged? or does it break?
13:33 inodb_ joined #gluster
13:37 toruonu a quick question on volume management… let's assume I've made a volume of 12 bricks over 6 nodes (2 per node) with replication factor 3 and I now want to reduce the volume to 3 servers, 2 disks each, and let's assume there's enough free space to do it (utilization <50%). Can I somehow get glusterfs to migrate the data around, free up the relevant bricks and disconnect them?
13:39 jdarcy toruonu: I think that in 3.3+ you can change the replication level as part of an add-brick or remove-brick command.
13:40 toruonu but if I don't want to change replication, but instead just remove one chunk
13:41 toruonu say let's take a simpler example, no replication just 8 bricks in the volume and I now want to remove 2 bricks (there's plenty of free space)
13:42 toruonu I'd like to be able to add/remove servers/bricks if needed without having to rebuild the volume
13:42 jdarcy toruonu: Then that's even easier.  You could do one or more remove-brick commands to remove entire replica sets, rebalancing onto the remainder, then one or more replace-brick commands to consolidate those onto the servers you want.
13:43 toruonu as I understood remove-brick command removes the brick and the data on it is lost from the volume (it still remains in the brick though)
13:43 toruonu that's not what I need, I need the command to keep the brick in volume as long as it rebalances
13:43 toruonu i.e. can I rebalance so that a brick is freed :)
13:43 toruonu and then remove the brick
13:43 jdarcy That's not how I think it's supposed to work, and I find it hard to imagine anyone would find it useful.
13:43 bennyturns joined #gluster
13:44 toruonu it's kind of expandfs, reducefs stuff
13:44 toruonu if I want to overall reduce volume size
13:44 toruonu if I just want to fix something yes I can remove the brick and replace the brick and rebalance yes (if it's got replication factor so that the files are recovered)
13:45 toruonu but if I have say 100 servers and I know I have to take 20 down for reinstalling then I'd like to move the data to other nodes (assuming there is free space)
13:45 toruonu this way keeping the data fully available throughout the upgrade cycle (may not be related to glusterfs itself)
13:46 TheHaven joined #gluster
13:48 inodb^ joined #gluster
13:48 jdarcy I just ran a quick test - create 1000 files in a ten-brick volume, remove a brick, see how many files I have.  Still 1000, consistent with remove-brick doing an implicit "rebalance away" as I'd thought.
13:49 jdarcy The "remove-brick . . . status" output seems utterly useless, though, and the "could lose data" warning seems scarier than it should be.
13:51 jdarcy If you want to have some fun, you could do a remove-brick on a brick holding some large files, and watch them get moved elsewhere.
13:52 toruonu ok, if the remove brick rebalances the data away, then that is a good way it should work. However that means this can take time (if we use 3TB drives it'll take a while) >(
13:52 jdarcy Indeed.
13:52 toruonu well as long as it works it's fine :D
13:52 toruonu it also takes time on hadoop, the good part is that you can set an arbitrary number of datanodes to draining and it'll continue running as long as it takes
13:53 toruonu can you do the same? can you execute remove brick on 20 bricks at the same time and it frees them all?
13:55 H__ afaik yes
13:55 jdarcy AFAIK yes.  In some cases (e.g. changing replication level) it's the only way to do it.
13:57 toruonu ok, great :)
13:57 toruonu that has removed one of the last question marks that I had
13:58 toruonu from now on I'll start testing it, initially at small scale and possibly in the future we can try replacing hdfs itself
13:58 toruonu that'd mean a 200 node 2PB installation
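A sketch of the decommission flow jdarcy describes, using 3.3's staged remove-brick (volume, server and brick paths are placeholders): start migrates data off the listed bricks, status reports progress, and commit drops the bricks only after the migration finishes. On a replicated volume the bricks listed should form whole replica sets.

    gluster volume remove-brick bigvol server5:/brick1 server6:/brick1 start
    gluster volume remove-brick bigvol server5:/brick1 server6:/brick1 status
    # once status shows the rebalance-away is complete:
    gluster volume remove-brick bigvol server5:/brick1 server6:/brick1 commit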
13:59 trapni left #gluster
14:00 kleind joined #gluster
14:01 tqrst I'm trying to remove a very old folder from my mounted gluster volume, but rm -rf is giving "Invalid argument" and "Stale NFS handle". The gluster logs show  "W [fuse-bridge.c:911:fuse_unlink_cbk] 0-glusterfs-fuse: 1561: UNLINK() [the file path]  => -1 (Invalid argument)". Any ideas on how to get rid of these?
14:02 * m0zes has had luck with 'mount -o remount $mountpoint' in those cases.
14:02 tqrst to make things even weirder: I can go in and rm the files individually
14:02 tqrst just not with rm -rf
14:03 tqrst joined #gluster
14:03 tqrst erm, irssi just segfaulted
14:03 tqrst oops?
14:03 tqrst anyhow, I already tried to remount
14:03 tqrst lost anything after what m0zes said
14:03 tqrst why would a recursive rm fail when going one by one works?
14:06 tqrst actually, it seems to be arbitrary
14:06 tqrst if I retry a few times, some get deleted
14:07 tqrst e.g. http://pastie.org/private/eir7r87f9b3imhqx9gzwya
14:07 glusterbot <http://goo.gl/WUCFn> (at pastie.org)
14:07 ctrianta joined #gluster
14:08 tqrst I'm also seeing some "gfid different on data file on [...]"", and "multiple subvolumes have file [...] (preferrably (sic) rename the file in the ackend, and do a fresh lookup)"
14:14 tqrst at this rate I may as well just write a loop that tries to rm until it works, but I'd like to know why this is happening
14:16 ndevos tqrst: if you see "gfid different on data file on [...]" it suggests that you run into a ,,(split brain)
14:16 glusterbot tqrst: I do not know about 'split brain', but I do know about these similar topics: 'split-brain'
14:16 ndevos ,,(split-brain)
14:16 tqrst ,,(split-brain)
14:17 glusterbot (#1) learn how to cause split-brain here: http://goo.gl/nywzC, or (#2) To heal split-brain in 3.3, see http://goo.gl/FPFUX .
14:17 glusterbot (#1) learn how to cause split-brain here: http://goo.gl/nywzC, or (#2) To heal split-brain in 3.3, see http://goo.gl/FPFUX .
14:18 tqrst I like how semiosis' question starts with "I am new to glusterfs"
14:20 shireesh joined #gluster
14:21 hagarth joined #gluster
14:23 rwheeler joined #gluster
14:23 tqrst ndevos: (thanks, that seems to be it)
14:24 ndevos tqrst: okay, good :)
14:25 tqrst side note: any news on efforts to minimize downtime during gluster upgrades? 3.2 to 3.3 requires downtime again, but I recall someone here mentioning that the devs were trying to get something a bit more seamless in the future.
14:28 tqrst (side question: is there any way to do 3.2->3.3 without any downtime, even if it's complicated?)
14:33 _Bryan_ Anyone have link to the Gluster 3.2.5 Admin Guide PDF download?
14:33 tqrst side side question: what happens if a 3.2 client connects to 3.3? Will my volume burst into flames? Will it simply fail?
14:46 robo joined #gluster
14:47 overclk joined #gluster
14:47 n8whnp_ joined #gluster
14:48 n8whnp joined #gluster
14:49 dbruhn joined #gluster
14:50 saz_ joined #gluster
14:53 balunasj joined #gluster
14:55 mohankumar joined #gluster
15:08 jeffrin left #gluster
15:08 PatH_ joined #gluster
15:09 chirino joined #gluster
15:09 PatH_ left #gluster
15:15 stopbit joined #gluster
15:24 wushudoin joined #gluster
15:29 jbrooks joined #gluster
15:35 Technicool joined #gluster
15:41 TheHaven joined #gluster
15:43 ctrianta joined #gluster
15:45 tqrst In http://download.gluster.org/pub/gluster/glusterfs/3.3/3.3.1/EPEL.repo/, I see an epel-6 and epel-7, but no epel-6.1. Is it safe to use epel-6 with CentOS 6.1?
15:45 glusterbot <http://goo.gl/h16pY> (at download.gluster.org)
15:46 tqrst I'm asking this because the glusterfs-epel.repo file looks for http://download.gluster.org/pub/gluster/glusterfs/3.3/3.3.1/EPEL.repo/epel-$releasever/$basearch, which doesn't exist for me since $releasever = 6.1.
15:46 glusterbot <http://goo.gl/lYSE8> (at download.gluster.org)
15:48 overclk joined #gluster
15:51 johnmark tqrst: oh, interesting
15:52 johnmark I don't think we broke it into releases for 6.x
15:52 johnmark kkeithley_wfh: ping ^^^
15:52 johnmark tqrst: it's kkeithley_wfh's repo. he would know
15:53 tqrst johnmark: thanks
15:53 kaney777 joined #gluster
15:54 tqrst I'm not sure what we did for the previous updates
15:54 tqrst I think we downloaded by hand, and it was less ambiguous
15:54 kaney777 left #gluster
15:56 tqrst or used whatever centos has for gluster in its official epel, which is stuck at 3.2.7 for now
15:56 madphoenix joined #gluster
15:58 madphoenix I'm hoping to set up object access to my existing gluster volume, but I'm a bit confused by the documentation.  Does the swift object server need to be installed on each brick, or can I simply create a separate object server that mounts my volume using the native glusterfs client?
15:59 johnmark madphoenix: the latter
15:59 johnmark you don't have to install it on every brick
16:00 madphoenix johnmark: excellent.  so really the simplest implementation is to just put the proxy/account/container/object services all on one box that has my gluster volume mounted?
16:02 johnmark madphoenix: exactly
16:17 ramkrsna joined #gluster
16:20 ika2810 joined #gluster
16:23 dbruhn Is there any new function for correcting split brain in 3.3.1
16:24 Shdwdrgn joined #gluster
16:33 nueces joined #gluster
16:33 dbruhn Also, is there a reason you would want to do a fix layout rebalance before fixing the layout and migrating?
16:35 theron joined #gluster
16:39 kkeithley_wfh tqrst: (sorry, I was on a conf. call) epel-6 applies to all 6.x versions of RHEL/CentOS. They aren't broken out anywhere that I've ever seen by 6.0, 6.1, 6.2, etc.
16:39 semiosis tqrst: zen mind, beginner's mind
16:41 kkeithley_wfh You might also want to use my fedorapeople.org repo too. If there are any critical bug fixes, that's where updated RPMs will be. The repo on download.gluster.org will only ever have 3.3.1-1 RPMs.
16:41 kkeithley_wfh @repos
16:41 glusterbot kkeithley_wfh: I do not know about 'repos', but I do know about these similar topics: 'repository', 'yum repository'
16:41 kkeithley_wfh @yum repository
16:41 glusterbot kkeithley_wfh: Joe Julian's repo with compiler optimizations (-O2) enabled having 3.1 and 3.2 versions for both x86_64 and i686 are at http://joejulian.name/yumrepo/
16:41 kkeithley_wfh @yum3.3
16:41 glusterbot kkeithley_wfh: I do not know about 'yum3.3', but I do know about these similar topics: 'yum3.3 repo'
16:41 kkeithley_wfh @yum3.3 repo
16:41 glusterbot kkeithley_wfh: kkeithley's fedorapeople.org yum repository has 32- and 64-bit glusterfs 3.3 packages for RHEL/Fedora/Centos distributions: http://goo.gl/EyoCw
16:42 kkeithley_wfh the ever intuitive glusterbot
16:45 kkeithley_wfh tqrst: and BTW, the official EPEL repo, (courtesy of Fedora, not CentOS) will only ever have 3.2.7 and HekaFS for EPEL-6. HekaFS isn't compatible with 3.3.x and jdarcy and I would prefer not to break HekaFS. But if you're not using HekaFS for anything, by all means use my fedorapeople.org repo for 3.3.x.
16:47 copec joined #gluster
16:48 lkoranda joined #gluster
16:48 kkeithley_wfh madphoenix: just be aware that UFO (Swift) will mount the gluster volume itself, independent of any mounts you do by hand.
16:51 jon__ joined #gluster
16:51 webwurst left #gluster
16:52 jrsharp joined #gluster
16:53 jrsharp hey all
16:53 madphoenix kkeithley_wfh: so it will automatically mount whatever is listed for "devices" in the object server config?
16:54 jrsharp New to cluster…  Having trouble with elastic IPs on an EC2 deployment…  getting "Not a friend" messages.
16:54 jrsharp gluster
16:55 semiosis jrsharp: you need to probe your ,,(hostnames) before you can make a volume
16:55 glusterbot jrsharp: Hostnames can be used instead of IPs for server (peer) addresses. To update an existing peer's address from IP to hostname, just probe it by name from any other peer. When creating a new pool, probe all other servers by name from the first, then probe the first by name from just one of the others.
16:55 semiosis ,,(rtfm)
16:55 glusterbot Read the fairly-adequate manual at http://goo.gl/E3Jis
16:55 jrsharp indeed… I've been working from the quickstart
16:56 jrsharp no, "Getting Started"
16:56 semiosis @quickstart
16:56 glusterbot semiosis: I do not know about 'quickstart', but I do know about these similar topics: 'quick start'
16:56 semiosis @quick start
16:56 glusterbot semiosis: http://goo.gl/CDqQY
16:56 semiosis jrsharp: getting started... link?
16:56 jrsharp and it was this document that advocated the use of Elastic IPs in the first place
16:56 jrsharp http://gluster.org/community/documentation/index.php/Getting_started_overview
16:56 glusterbot <http://goo.gl/1Y1lU> (at gluster.org)
16:56 semiosis glusterbot: meh
16:56 glusterbot semiosis: I'm not happy about it either
16:57 jrsharp after probing for peers, I try to create the volume and I get the "not a friend" message about the machine on which I issue the command
16:57 jrsharp (I'm attempting a two-peer mirror volume)
16:58 semiosis apparently the editor of that guide didn't think my contribution was helpful :(
16:58 jrsharp I have the /etc/hosts filled with the two elastic ips
16:58 jrsharp well, he does say, "Many thanks to Semiosis for his detailed feedback on this."
16:58 jrsharp ;)
16:58 jrsharp he/she
16:59 semiosis oh?  i missed that part :)
16:59 jrsharp http://gluster.org/community/documentation/index.php/Getting_started_setup_aws
16:59 glusterbot <http://goo.gl/11xFP> (at gluster.org)
17:00 semiosis trying to activate my wiki powers but idk what my creds are
17:00 jrsharp when I don't attempt to use the elastic IPs, and use the internal IPs instead, I can create the volume fine
17:00 semiosis yeah, thats worse than elastic ips
17:00 jrsharp but I agree with the advice to use elastic
17:00 jrsharp yeah
17:01 semiosis i recommend using fqdn/cnames to point to the public-hostname of your instances
17:01 semiosis rather than elastic ips
17:01 jrsharp I don't want to move forward without resolving this
17:01 semiosis using the local-ipv4 is just plain bad
17:03 jrsharp yeah
17:03 jrsharp it seems to me that glusterd doesn't know that it "possesses" or "is" the elastic ip
17:03 semiosis http://gluster.org/community/documentation/index.php/Talk:Getting_started_setup_aws
17:03 glusterbot <http://goo.gl/y9qHi> (at gluster.org)
17:03 kkeithley_wfh madphoenix: devices are presently ignored. account is synonymous with volume.  It will mount the volume for the account.
17:04 semiosis jrsharp: i added my rant on the talk page
17:04 jrsharp awesome
17:04 jrsharp thx
17:04 semiosis jrsharp: hope that helps, feedback would be appreciated
17:04 jrsharp I have FQDN set up
17:04 jrsharp I had attempted that alone before modifying /etc/hosts
17:05 semiosis jrsharp: i also map the fqdn of each machine to 127.0.0.1 in its hosts file
17:05 jrsharp ah
17:05 kkeithley_wfh this if you have a user defined in your /etc/swift/proxy-server.conf such as user_gv_joe = joespw and gv is the name of your gluster volume, then it'll be mounted at /mnt/gluster-volumes/AUTH_gv
17:05 kkeithley_wfh ^this^thus
17:05 kkeithley_wfh s/this/thus/
17:05 glusterbot What kkeithley_wfh meant to say was: ^thus^thus
17:05 kkeithley_wfh thus  if you have a user defined in your /etc/swift/proxy-server.conf such as user_gv_joe = joespw and gv is the name of your gluster volume, then it'll be mounted at /mnt/gluster-volumes/AUTH_gv
17:06 raghu joined #gluster
17:06 madphoenix kkeithley_wfh: is there any documentation that goes into greater detail on this?  I'm working from the 3.3 admin guide from gluster.org and it  really doesn't go into any of this
17:07 kkeithley_wfh I agree, the docs aren't great. There's some better docs for Swift in general on the openstack web site.
17:10 kkeithley_wfh But the real point I was trying to make was that there's little point in mounting the gluster volume on your swift server, except perhaps to be sure that it can be mounted. UFO (Swift) will mount it for you.
17:11 inodb_ joined #gluster
17:13 Mo__ joined #gluster
17:14 Bullardo joined #gluster
17:15 Norky joined #gluster
17:16 sshaaf joined #gluster
17:17 jrsharp semiosis: mapping back to 127.0.0.1 as well fixed it
17:17 jrsharp thanks!
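The combination that worked for jrsharp, roughly: each instance's own FQDN resolves to 127.0.0.1 locally while the peers resolve to their public addresses, so glusterd answers to the same name the other peers probe it by. Names and addresses below are placeholders; semiosis' preferred variant is CNAMEs pointing at the EC2 public-hostnames rather than elastic IPs.

    # /etc/hosts on gluster1.example.com
    127.0.0.1      gluster1.example.com gluster1
    203.0.113.12   gluster2.example.com gluster2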
17:17 toruonu eh I have also the repo version issue
17:18 toruonu $releasever seems to default to 5.7 on Scientific Linux 5.7 (based on RHEL 5.7)
17:18 toruonu I guess I'll have to change it manually and distribute the repo file
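One way to do what toruonu describes: copy the repo file and replace epel-$releasever in the baseurl with the major-release directory that actually exists on download.gluster.org (epel-6 here, per the listing discussed earlier), so derivatives reporting 6.1 or 5.7 as $releasever still resolve. A minimal fragment, with everything except the baseurl treated as an assumption about the stock repo file:

    # /etc/yum.repos.d/glusterfs-epel.repo (fragment)
    [glusterfs-epel]
    name=GlusterFS 3.3.1
    baseurl=http://download.gluster.org/pub/gluster/glusterfs/3.3/3.3.1/EPEL.repo/epel-6/$basearch/
    enabled=1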
17:19 Norky joined #gluster
17:20 semiosis jrsharp: yw!  i'll add a note about that to the wiki
17:20 semiosis thank you
17:20 jrsharp thanks!
17:21 jrsharp now to work out how best to set up my AMIs to organize these volumes
17:22 semiosis fwiw the old ,,(semiosis tutorial) may be helpful
17:22 glusterbot http://goo.gl/6lcEX
17:22 jrsharp user data, instance data, etc.
17:22 jrsharp nice, thanks
17:22 semiosis there's also some ,,(puppet) modules you may find useful
17:22 glusterbot (#1) https://github.com/semiosis/puppet-gluster, or (#2) https://github.com/purpleidea/puppet-gluster
17:25 toruonu erm … I'm just following the instructions from the quickstart and it seems glusterfs package does not install the startup script
17:26 shireesh joined #gluster
17:26 toruonu bah
17:26 toruonu it seems the command that was given is crap :p it didn't install glusterfs-server :)
17:26 semiosis command given where?
17:26 toruonu http://www.gluster.org/community/documentation/index.php/QuickStart
17:26 glusterbot <http://goo.gl/OEzZn> (at www.gluster.org)
17:27 toruonu yum install glusterfs{,fuse,server}
17:27 toruonu should be yum install glusterfs-server glusterfs -y
17:27 toruonu that pulls in the fuse automatically … don't know if server installation pulls in glusterfs automatically too
17:29 toruonu to set up the trusted pool is it enough that one node pings all the others or does it have to be a mesh?
17:30 toruonu ok, only one needed :)
17:36 nueces joined #gluster
17:37 toruonu what's the recommended way to guarantee glusterfs gets mounted in fstab when you have multiple nodes that can export it
17:38 toruonu so that when the node that's configured by default isn't responding it'd go to the second, the third etc
17:38 semiosis @rrdns
17:38 glusterbot semiosis: You can use rrdns to allow failover for mounting your volume. See Joe's tutorial: http://goo.gl/ktI6p
17:38 semiosis toruonu: ^^
17:39 spostma joined #gluster
17:39 semiosis toruonu: also see ,,(hostnames)
17:39 glusterbot toruonu: Hostnames can be used instead of IPs for server (peer) addresses. To update an existing peer's address from IP to hostname, just probe it by name from any other peer. When creating a new pool, probe all other servers by name from the first, then probe the first by name from just one of the others.
17:39 semiosis note probing back to the first from one of the others, to update ip to hostname
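Putting the two factoids together, a sketch of the bootstrap plus an rrdns-backed mount; server names are placeholders and gluster.example.com is a hypothetical round-robin A record covering all the servers, per Joe's tutorial linked above. The _netdev option applies to RHEL-family clients; as noted earlier it is a no-op on Ubuntu.

    # from server1: probe the other peers by hostname
    gluster peer probe server2.example.com
    gluster peer probe server3.example.com
    # from any other peer: probe server1 back by name so its entry
    # in the pool switches from IP to hostname
    gluster peer probe server1.example.com

    # client /etc/fstab entry pointing at the round-robin name
    gluster.example.com:/myvol  /mnt/myvol  glusterfs  defaults,_netdev  0  0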
17:40 spostma I have a 3 node gluster cluster that had 3.1.4 uninstalled and 3.3.1 installed.  I had some mounting issues yesterday, from a rocks 6.2 install to the cluster. I was able to overcome those issues and mount the export on my node. Thanks to all for your help.  However, I can only view the portion of files that is directly stored on the one brick in the cluster. The other bricks do not seem to be replicating, tho gluster reports the
17:40 spostma [root@mseas-data ~]# gluster volume info Volume Name: gdata Type: Distribute Volume ID: eccc3a90-212d-4563-ae8d-10a77758738d Status: Started Number of Bricks: 3 Transport-type: tcp Bricks: Brick1: gluster-0-0:/mseas-data-0-0 Brick2: gluster-0-1:/mseas-data-0-1 Brick3: gluster-data:/data
17:41 spostma The brick we are attaching to has this in the fstab file. /dev/mapper/the_raid-lv_data /data              xfs     quota,noauto    1 0   but "mount -a" does not appear to do anything. I have to run "mount -t xfs  /dev/mapper/the_raid-lv_data /data" manually to mount it.    Any help with troubleshooting why we are only seeing data from 1 brick of 3 would be appreciated, Thanks, Steve Postma
17:42 jdarcy Interesting.  https://build.opensuse.org/package/show?package=kvm-stack-1&project=home%3Assullivan1986
17:42 glusterbot <http://goo.gl/hHPpK> (at build.opensuse.org)
17:43 jdarcy That's the stack behind Storm On Demand's new High Performance Block Storage offering.  Ceph-based.  :(
17:43 jdarcy Also Suse-based.  :( x10
17:43 toruonu btw, what is meant with this in the admin guide:
17:43 toruonu In this release, configuration of this volume type is supported only for Map Reduce workloads.
17:43 toruonu it's about Distributed striped replicated volumes
17:44 toruonu we have similar workloads like map reduce, but not using map reduce
17:44 jdarcy I have no idea why the doc would say that.
17:45 jdarcy I wouldn't necessarily recommend stripe for much of anything, but I wouldn't support it any less for one workload than another.
17:46 purpleidea joined #gluster
17:46 purpleidea joined #gluster
17:46 toruonu why wouldn't you recommend it?
17:46 pdurbin @lucky storm on demand
17:46 toruonu isn't the speed limited in just replicated volumes by single disk performance for writes and by number of replicas for reads?
17:46 glusterbot pdurbin: http://www.stormondemand.com/
17:47 jdarcy Because it very rarely does any good.  The overhead from splitting and recombining requests usually overwhelms any benefit from parallelism, plus it makes you vulnerable to XFS preallocation nastiness etc.
17:47 toruonu that's assuming reading is round robined
17:47 toruonu I don't use XFS
17:47 toruonu it has bitten us too many times
17:47 toruonu with data loss
17:47 semiosis toruonu: how long ago was that?
17:47 toruonu Ext4 is the FS we use
17:47 toruonu 1-2 weeks
17:47 toruonu quite latest kernels
17:47 semiosis eek!
17:47 toruonu ok 1 month :)
17:47 jdarcy You're using ext4 because you're concerned about data loss?  Sorry, but I have to laugh at that one.
17:47 pdurbin toruonu: data loss with XFS? :(
17:48 toruonu we had power issues in DC
17:48 semiosis well fyi there's an issue with glusterfs over ,,(ext4) on the newest kernels and also redhat kernels which backported the new code
17:48 glusterbot Read about the ext4 problem at http://goo.gl/PEBQU
17:48 toruonu lost nodes 1-2x a day for a few weeks until the grid was stabilized fully
17:48 PatH_ joined #gluster
17:48 toruonu every sudden restart caused data loss
17:48 toruonu and I'm not talking small loss
17:48 toruonu we lost ca 100TB
17:48 toruonu total
17:48 toruonu over the timeframe
17:49 pdurbin :(
17:49 toruonu that on hadoop datanodes that had every block replicated x2
17:49 Mo_ joined #gluster
17:49 jdarcy Are you sure it was XFS and not the configuration of caching on the underlying RAID controllers or disks?  Is there a bug report where the analysis occurred?
17:49 toruonu we only lost data when the block or its metadata was corrupted on both locations
17:49 toruonu no raid, raw disks
17:49 toruonu hadoop did the replications
17:49 toruonu but the loss amount was so huge
17:49 toruonu tens of thousands of files every time
17:50 toruonu that we hit the multiple loss so often
17:50 toruonu that even replication x2 didn't help us
17:50 semiosis toruonu: using glusterfs then too?
17:50 toruonu we weren't
17:50 toruonu we're starting now for a shared /home
17:50 semiosis oh, we always like to hear how glusterfs is being used, esp. in "big" setups
17:50 toruonu right now the big setup is hadoop based
17:50 toruonu 2PB, 200 nodes
17:50 * jdarcy would also wonder whether HDFS is being honest about whether it flushed data so the filesystem could even see it.
17:51 toruonu jdarcy: it's the 0 size file issue
17:51 toruonu and most files were months old
17:51 toruonu so not recent writes
17:51 toruonu it's the XFS corruption of namespace or what not
17:51 toruonu that screws up a large chunk of the FS
17:51 toruonu it was supposed to be fixed in the kernels we use
17:51 toruonu but guess not
17:52 toruonu anyway … we decided that XFS was not for us … everything's on ext4 now
17:52 rudimeyer joined #gluster
17:52 toruonu and the /home that I'm currently planning for glusterfs I'm setting replication factor 3
17:52 toruonu just contemplating if striping improves access
17:52 toruonu Hadoop we've set up with 128MB block size, most of our files are 2GB so 16 blocks distributing nicely
17:53 jdarcy You can give it a shot, certainly.  I just wouldn't be too optimistic.
17:54 semiosis imho, don't ,,(stripe)
17:54 toruonu well I was contemplating running a few test setups now that I've got a 10TB volume to play with
17:54 glusterbot Please see http://goo.gl/5ohqd about stripe volumes.
17:55 * jdarcy saves this log for the next time one of the kernel weenies gets all condescending about distributed filesystems.
17:56 toruonu :)
17:57 jdarcy I might have to wait an hour or two.
17:57 * toruonu lol's
17:58 toruonu the kernel we use: 2.6.32-042stab059.7 from OpenVZ that is based on 2.6.32-279.1.1.el6 with OpenVZ patches
17:59 dbruhn can anyone give me an idea of what I need to do to correct these self heal issues.
17:59 dbruhn http://pastebin.com/GPz3fgf2
17:59 glusterbot Please use http://fpaste.org or http://dpaste.org . pb has too many ads. Say @paste in channel for info about paste utils.
17:59 toruonu it's not the latest, but it's relatively new all things considered and I doubt the XFS known corruption thing was newer than that :)
18:00 jdarcy dbruhn: So you have two copies of some files, on different bricks, with different GFIDs?
18:01 jdarcy dbruhn: Had you experienced any server/brick failures?
18:01 dbruhn They I had one brick go down on me a while back
18:01 dbruhn it was down for a couple hours
18:02 dbruhn it's weird because I am not getting any listings in the split-brain info
18:02 jdarcy dbruhn: Is it possible that these files were deleted and then recreated during that time?
18:02 dbruhn anythings possible
18:02 jdarcy Unfortunately, GFID mismatch is not counted as split brain.  I sort of think it should be.
18:02 semiosis dbruhn: also what version of glusterfs?
18:02 dbruhn I am running 3.3.1
18:03 dbruhn but these errors might be from before I upgraded from 3.3.0
18:03 jdarcy dbruhn: Well, if that's what happened, then it's a situation GlusterFS can't reconcile by itself safely.  You'd have to examine the files yourself, choose which one you like, and delete the other(s).
18:04 jdarcy The instructions at http://joejulian.name/blog/fixing-split-brain-with-glusterfs-33/ might also apply, even though it's not technically split-brain.
18:04 glusterbot <http://goo.gl/FPFUX> (at joejulian.name)
18:09 dalekurt joined #gluster
18:11 dbruhn Awesome, thanks
18:12 dbruhn I was wondering about treating it as split-brain, but I wanted to make sure before I did something silly
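The gist of the procedure in the linked post, which applies to a gfid mismatch as well: decide which copy to keep, then on the brick holding the unwanted copy delete both the file and its hard link under .glusterfs (the link path is derived from the trusted.gfid xattr), and let self-heal recreate it from the good brick. Paths, volume name and the example gfid below are placeholders; treat this as a rough summary and check the post before deleting anything.

    # on the brick with the bad copy
    getfattr -n trusted.gfid -e hex /export/brick1/path/to/file
    # a gfid of 0xabcdef012345... maps to .glusterfs/ab/cd/abcdef01-2345-...
    rm /export/brick1/path/to/file
    rm /export/brick1/.glusterfs/ab/cd/abcdef01-2345-6789-abcd-ef0123456789
    # then trigger self-heal, e.g. stat the file from a client mount
    stat /mnt/myvol/path/to/file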
18:16 inodb^ joined #gluster
18:17 dbruhn Another odd thing, I have a bunch of grid errors without files listed after them
18:17 pithagorians joined #gluster
18:17 dbruhn any idea how to determine what they are referencing
18:20 tqrst kkeithley_wfh: thanks; btw the .1 in the 6.1 was probably added by scientificlinux (which is based on centos)
18:25 zaitcev joined #gluster
18:26 toruonu btw can I assume that glusterfs startup scripts will autostart volumes? so if there is a generic failure and the whole site goes down and then nodes come back one by one then the first one back will bring the volume online and the health will improve as nodes with bricks come online?
18:37 dbruhn how can you get an output of which nodes are replicants of the other nodes
18:39 DaveS_ joined #gluster
18:43 kminooie joined #gluster
18:45 kminooie joeJulian: hi, you there?
18:46 toruonu anyone know if there is an rsync option that does multiple files in parallel?
18:46 toruonu as i have a volume which is made up of 12 bricks on 6 servers with replication factor 3 it means the best result should come from using 4 parallel files I guess
18:47 toruonu ah, idiot … the files are being read from a single disk mirror so probably reading beyond 2 parallel gives close to 0 benefit :D
18:47 toruonu or actually a negative impact
18:48 toruonu ok, started the usual rsync -av /src /dest
18:49 kminooie toruonu: rsync program works with point a to point b situation, would not understand that something exist in multiple location , you might be able to utilize geo replication thou
18:50 toruonu nah I want to move a servers home directory of ca 400GB to the glusterfs to see how it works
18:50 toruonu running the first rsync, then will run iterative ones and then will ask users to stop using for a while and move over
18:51 kminooie ok but what do you mean by "first rsync"
18:51 dbruhn toruonu: rsync isn't aware of the data so you can't parallelize it like that. You could set up individual jobs for subdirectories.
18:52 dbruhn I have done some testing, and running multiple concurrent jobs with rsync like that speeds the transfers up considerably
18:52 toruonu nah I decided that it probably provides too little improvement in this case so just run a simple one
18:52 dbruhn the biggest slow down is due to it traversing the directories
18:52 toruonu the question is whether this would improve as the source is the bottleneck here I think
18:52 toruonu hmm
18:52 toruonu point
18:52 semiosis toruonu: i recommend using rsync options --whole-file and --inplace when rsyncing into glusterfs
18:53 toruonu well the first rsync will be whole file anyway because the glusterfs will be empty
18:53 toruonu for the next iterations it's a good hint
18:53 semiosis --inplace
18:53 semiosis highly recommended
18:54 semiosis rename(2) should be avoided on distributed volumes whenever possible
18:54 semiosis imho
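semiosis' suggestion spelled out: --inplace avoids rsync's write-to-temp-file-then-rename pattern (renames are costly on a distributed volume because the final name may hash to a different brick than the temp name), and --whole-file skips the delta algorithm, which buys nothing when the target is a network filesystem. Source and destination paths are placeholders:

    rsync -av --whole-file --inplace /home/ /mnt/glusterhome/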
18:59 dbruhn From this http://fpaste.org/Ev4i/ is there anyway to tell which servers are the replicants of which?
18:59 glusterbot Title: Viewing Paste #253775 (at fpaste.org)
18:59 semiosis ~brick naming | dbruhn
19:00 glusterbot dbruhn: http://goo.gl/l3iIj
19:00 semiosis bricks are grouped into replica sets of replica-factor size in the order they appear in the brick list
19:00 semiosis which in your case means 1-3 are replicas, and 4-6 are replicas
19:01 dbruhn ok, i added node 5 and 6 after the fact
19:01 dbruhn would that affect the replication
19:02 semiosis oh ha, i read that wrong
19:02 semiosis 3x2, you have three replica-pairs
19:02 semiosis so 1-2, 3-4, and 5-6
19:02 dbruhn I assumed 1 was going to 2, 3 to 4, and 5 to 6
19:02 semiosis you're right
19:03 dbruhn awesome, where does one make a feature suggestion for detailing that more clearly.
19:03 semiosis more clearly than that?!?!
19:03 semiosis j/k
19:03 semiosis you can file a bug for enhancement requests
19:03 glusterbot http://goo.gl/UUuCq
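The ordering rule semiosis describes, in terms of the create command: bricks are taken left to right in groups of the replica count, so with replica 2 each consecutive pair mirrors each other. A hypothetical six-brick layout:

    gluster volume create myvol replica 2 \
        node1:/export/brick node2:/export/brick \
        node3:/export/brick node4:/export/brick \
        node5:/export/brick node6:/export/brick
    # replica pairs: (node1,node2) (node3,node4) (node5,node6)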
19:05 GLHMarmot joined #gluster
19:09 kminooie left #gluster
19:16 wushudoin| joined #gluster
19:20 greylurk joined #gluster
19:43 Daxxial_ joined #gluster
19:45 y4m4 joined #gluster
19:52 hattenator joined #gluster
19:52 greylurk I've got an odd issue with mounting a gluster 3.2 volume..
19:53 rwheeler joined #gluster
19:54 greylurk When I try to mount the volume, glusterfs daemonizes and runs up to 100% usage on a single core, and sits there as long as I will let it.
19:54 semiosis what distro?
19:55 greylurk Ubuntu 8.04.
19:56 greylurk I'm trying to replace these boxes with some newer distros, but thats a big ball of twine, and I'm unraveling it one piece at a time.
19:57 greylurk I've tried running glusterfs in debug mode: "glusterfs --volfile-id=/gv0 --volfile-server=gluster01 /mnt/gluster --debug"
19:58 greylurk It gets the volfile, and then nothing.
19:59 semiosis any reason you're not using the mount.glusterfs script?
19:59 semiosis mount -t glusterfs
20:00 greylurk That's what I tried the first time, where it ran up to 100% usage, but in the background.  Can I use that with the --debug option?
20:00 semiosis hmm, idk
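One possible middle ground, assuming the 3.2 mount helper honours the log-level and log-file options (worth verifying against /sbin/mount.glusterfs on the 8.04 box): mount through the script but with a verbose client log.

    mount -t glusterfs -o log-level=DEBUG,log-file=/var/log/glusterfs/gv0-debug.log \
        gluster01:/gv0 /mnt/gluster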
20:00 semiosis could you pastie your client log file?  usually /var/log/glusterfs/client-mount-point.log
20:05 greylurk It's not there… Searching for it.  In the meantime, here's what my glusterfs --debug shows: http://pastebin.com/vcnFgynt
20:05 glusterbot Please use http://fpaste.org or http://dpaste.org . pb has too many ads. Say @paste in channel for info about paste utils.
20:05 greylurk Sorry, http://dpaste.org/C0oKx/
20:05 glusterbot Title: dpaste.de: Snippet #213478 (at dpaste.org)
20:07 semiosis odd, normally the volfile is at the top of the log not the bottom
20:08 semiosis do your brick logs show connections from this client?  pastie one of those?
20:18 greylurk Doesn't look like the brick logs are getting any connections.
20:18 greylurk You're looking for the /var/log/glusterfs/brick/*.log files right?
20:19 semiosis yep
20:19 greylurk Yeah, no new entries when I try to mount the drive.
20:20 blendedbychris joined #gluster
20:20 blendedbychris joined #gluster
20:20 greylurk The last entries are over 2 hours ago
20:21 greylurk Maybe I should just use nfs mounting until I can migrate them to a newer distro.
20:21 greylurk The Ubuntu 12.04 boxes are mounting the drive no problem.
20:24 semiosis ,,(nfs)
20:24 glusterbot To mount via nfs, most distros require the options, tcp,vers=3 -- Also portmapper should be running on the server, and the kernel nfs server (nfsd) should be disabled
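The factoid as a concrete command, with the server, volume and mount point as placeholders:

    mount -t nfs -o tcp,vers=3 gluster01:/gv0 /mnt/gluster-nfs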
20:24 semiosis oh hey, do you have fuse loaded on the 8.04 boxes?
20:25 semiosis i'm pretty sure it's builtin to newer ubuntu kernels, but idk about 8.04, maybe a module, maybe not even loaded by default
20:26 greylurk $ lsmod | grep fuse
20:26 greylurk fuse                   63152  3
20:26 greylurk Looks like it's there and installed.
20:29 gbrand_ joined #gluster
20:33 greylurk Yeah, NFS is working.  If I understand correctly though, that's not as robust, because if the server I NFS to goes down, it won't fail over to any other bricks right?
20:33 semiosis yep
20:33 greylurk And it routes all of the read/write operations to that single instance.
20:34 semiosis well i'd say routes "through" not necessarily "to"
20:34 greylurk Hrm,  Well, it's a temporary measure until I get these migrated to 12.04.  I guess I'll just deal with it.
20:40 neofob joined #gluster
20:56 hagarth joined #gluster
21:06 chirino joined #gluster
21:06 chirino if cluster.quorum-type is set to auto, is it still possible to get into a split brain scenario?
21:13 madphoenix kkeithley_wfh: you said swift will mount the volume for you based on the user string - does that mean that it has to be installed on one of the bricks?  otherwise how do you tell it the address of the volume?
21:16 TSM2 joined #gluster
21:17 Bullardo joined #gluster
21:20 JoeJulian chirino: Yes, it's call "split brain in time" where the quorum checks pass at any moment in time, but are split at different times, thus causing inconsistency that cannot be resolved. I think that's covered in #1 ,,(split-brain)
21:20 glusterbot chirino: (#1) learn how to cause split-brain here: http://goo.gl/nywzC, or (#2) To heal split-brain in 3.3, see http://goo.gl/FPFUX .
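For reference, client quorum is a per-volume setting; with auto, a replica set allows writes only while more than half of its bricks are reachable (with the first brick breaking ties), which narrows but, per JoeJulian, does not eliminate the split-brain window. A sketch with a placeholder volume name; the fixed variant is an assumption to check against volume set help:

    gluster volume set myvol cluster.quorum-type auto
    # alternatively, a fixed quorum count
    gluster volume set myvol cluster.quorum-type fixed
    gluster volume set myvol cluster.quorum-count 2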
21:20 glusterbot New news from newglusterbugs: [Bug 878652] Enchancement: Replication Information in gluster volume info <http://goo.gl/dWQnM>
21:21 JoeJulian madphoenix: The glusterfs client does need to be installed on the ufo(swift) server, yes.
21:22 chirino JoeJulian: thx!
21:25 chirino JoeJulian: seems to me that  http://goo.gl/nywzC does not cover your 'split at different times' case.
21:25 glusterbot Title: Question: What is split-brain in glusterfs, and how can I cause it? (at goo.gl)
21:27 chirino I think you might need a new 'What is split-brain in glusterfs, and how can I cause it with  cluster.quorum-type set to auto'
21:27 semiosis chirino: see "alternating server failures"
21:27 semiosis chirino: that's a great suggestion
21:27 chirino in the alternating failure case when you lose 1 of the bricks you should lose quorum
21:27 madphoenix JoeJulian: right, but does UFO need to be installed on one of the bricks?  kkeithley seemed to be saying that the volume would be auto-mounted based on username in the proxy config file, but obviously the hostname of one of the bricks must also be specified somewhere?
21:27 chirino thus the volume should go offline
21:28 kkeithley_wfh madphoenix: it's in a config file, but off the top of my head I'm not sure which one. give me a sec, let me see if I can find it.
21:30 madphoenix thx
21:31 madphoenix kkeithley_wfh: fs.conf?  The "remote_cluster" attribute?
21:31 kkeithley_wfh and mount_ip, probably change 'mount_ip = localhost' to the ip of one of your gluster nodes
21:31 madphoenix cool i'll give it a whirl, thanks
21:32 kkeithley_wfh yw
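Pulling kkeithley_wfh's pointers together, the pieces on a single UFO box would look roughly like this; gv and joe are the example volume and user from above, the address is a placeholder, and the exact section names are assumptions to check against the shipped config files:

    # /etc/swift/fs.conf (fragment): where UFO mounts volumes from
    [DEFAULT]
    mount_ip = 192.0.2.10

    # /etc/swift/proxy-server.conf (fragment), tempauth-style user entry:
    # user_<volume>_<name> = <password>
    [filter:tempauth]
    user_gv_joe = joespw

    # UFO then mounts the gv volume itself at /mnt/gluster-volumes/AUTH_gv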
21:32 chirino ok. posted the question: http://community.gluster.org/q/how-can-i-cause-split-brain-in-glusterfs-when-cluster-quorum-type-is-set-to-auto/
21:32 glusterbot <http://goo.gl/5RvKA> (at community.gluster.org)
21:35 y4m4 joined #gluster
21:37 y4m4 joined #gluster
21:38 greylurk joined #gluster
21:38 greylurk left #gluster
21:38 * JoeJulian has a warm place in his heart for chirino who is able to use a Q&A forum correctly. :D
21:40 GLHMarmot joined #gluster
21:40 chirino JoeJulian :)
21:41 chirino JoeJulian: If clients are allowed to write to a brick that was just restarted before it could be healed, then I kinda have an idea of how a split brain scenario could be developed.
21:44 madphoenix when i try to execute "swift-init main start", all four services fail immediately with socket.error no such file or directory.  i've ensured that there is nothing else bound to those ports, any idea what else could be going on?
21:50 glusterbot New news from newglusterbugs: [Bug 878663] mount_ip and remote_cluster in fs.conf are redundant <http://goo.gl/p6MZ7>
22:02 lh joined #gluster
22:02 lh joined #gluster
22:14 helloadam joined #gluster
22:14 Technicool joined #gluster
22:14 mnaser joined #gluster
22:15 snarkyboojum joined #gluster
22:15 lh joined #gluster
22:15 Bullardo joined #gluster
22:16 madphoenix ah ... the swift services will fail to start if rsyslog isn't running
22:18 JoeJulian good to know, thanks.
22:19 abyss__ joined #gluster
22:20 purpleid1a joined #gluster
22:31 jbrooks joined #gluster
22:45 UnixDev joined #gluster
22:46 elyograg joined #gluster
22:49 dbruhn__ joined #gluster
23:18 dalekurt joined #gluster
23:36 xymox_ joined #gluster
23:38 inodb_ joined #gluster
23:39 inodb^ joined #gluster
23:43 xymox joined #gluster
23:48 xymox joined #gluster
23:52 inodb_ joined #gluster
23:59 kminooie joined #gluster
