IRC log for #gluster, 2013-10-17


All times shown according to UTC.

Time Nick Message
00:03 jag3773 joined #gluster
00:13 _zerick_ joined #gluster
00:16 mjrosenb JoeJulian: that just adjusts the numbers, not initiate a rebalance, right?
00:21 B21956 joined #gluster
00:33 andreask joined #gluster
01:10 B21956 left #gluster
01:11 harish joined #gluster
01:15 bogen1 http://fpaste.org/47347/38197240/ every time I try to mount a volume now it fails. I upgraded to 3.4.1. (I reinstalled the VMs and installed 3.4.1). I reformatted the bricks and any new volume I create won't mount.
01:15 glusterbot Title: #47347 Fedora Project Pastebin (at fpaste.org)
01:17 bogen1 all the bricks are mounted, service started, volumes are created without any trouble.
01:20 bogen1 volume I'm trying to mount is started
01:28 bogen1 I even create a volume with 1 brick
01:28 bogen1 it always fails to mount
01:40 dbruhn joined #gluster
01:41 dbruhn anyone around to help me make sense of something?
01:41 dbruhn RDMA related? :/
01:41 vynt joined #gluster
01:45 jbrooks joined #gluster
01:47 kevein joined #gluster
01:47 bogen1 dbruhn: I'm going to do rdma, but not yet, the oracle rdma driver conflicts with the xsigo driver
01:49 dbruhn I have RDMA volumes in production, I am having a lot of disconnects tonight?
01:49 dbruhn 3.3.1/3.3.2 are the last stable RDMA releases
01:50 dbruhn 3.4 is not functioning with RDMA yet
01:52 bogen1 hmmm
01:54 bogen1 I was trying out 3.2.7 (stock EL gluster) but was running into issues with block removals.
01:54 bogen1 So I moved to 3.4.1
01:54 bogen1 (also reinstalled the VMs and reformatted the bricks)
01:55 bogen1 now no volumes I create can be mounted (they can be started)
01:55 bogen1 I'm using oracle 6.4, I'm going to try fedora 19 tomorrow
01:55 bogen1 (in the VMs)
01:58 fidevo joined #gluster
01:59 bala joined #gluster
01:59 harish joined #gluster
02:03 kevein joined #gluster
02:05 onny joined #gluster
02:18 dbruhn bogey RDMA is broke in 3.4.1, if you go to the 3.3.2 branch it will work
02:18 dbruhn 3.2.7 was broken as well
02:18 dbruhn I have several RDMA systems
02:18 dbruhn jclift works with them pretty heavily too
02:29 bogen1 I need to use RDMA as well, just not right now. Since 3.4.x is not working for me for anything at the moment with OL (3.2.7 was working) I may give 3.3.2 a try.
02:32 dbruhn the performance increase with RDMA is amazing? I wish the development was more on top of RDMA stuff.
02:32 dbruhn or I wish I could pay for someone to actually make sure it worked in upgrades.
02:32 glusterbot New news from newglusterbugs: [Bug 1010834] No error is reported when files are in extended-attribute split-brain state. <http://goo.gl/HlfVxX>
02:32 dbruhn I should say afford
02:53 bogen1 I figured the extra performance would be great... just right now I can't use RDMA with Oracle's driver's, and I need oracle's xsigo drivers...
02:54 bogen1 well, I just tried 3.4.1 with fedora 19 on my 7 VMs... it worked out of the box... (OL 6.4 has not worked with 3.4.x for me...) GRRRR....
02:55 bharata-rao joined #gluster
02:56 bogen1 One strange thing though:  86G (huh??) That is from 2 pairs of 62G bricks.... (4 62G bricks in a replica 2)
02:56 bogen1 should htat not be 124G?
02:56 bogen1 s/htat/that/
02:56 glusterbot What bogen1 meant to say was: should that not be 124G?
02:57 bogen1 wow, that is first bot I've seen actually respond that that...
02:57 bogen1 to that...
02:57 JoeJulian mjrosenb: Yes, that just adjusts the layout in trusted.dht on the bricks. It does not move files.
03:00 mjrosenb JoeJulian: so I tried that and it didn't work.
03:00 mjrosenb JoeJulian: it worked when I removed the trusted from the name though.
03:05 kshlm joined #gluster
03:07 verdurin_ joined #gluster
03:08 JoeJulian hmm, ok
03:08 JoeJulian @targeted fix-layout
03:08 glusterbot JoeJulian: You can trigger a fix-layout for a single directory by setting the extended attribute \"trusted.distribute.fix.layout\" to any value for that directory. This is done through a fuse client mount.
03:08 JoeJulian @forget targeted fix-layout
03:08 glusterbot JoeJulian: The operation succeeded.
03:10 JoeJulian @learn targeted fix-layout as You can trigger a fix-layout for a single directory by setting the extended attribute "distribute.fix.layout" to any value for that directory. This is done through a fuse client mount. This does not move any files, just fixes the dht hash mapping.
03:10 glusterbot JoeJulian: The operation succeeded.
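For reference, a minimal sketch of triggering that targeted fix-layout from a fuse client mount, then inspecting the resulting layout on a brick (mount point, brick path, and directory are placeholders):
    setfattr -n distribute.fix.layout -v "yes" /mnt/glustervol/some/dir
    # check the dht layout range the brick now holds for that directory
    getfattr -n trusted.glusterfs.dht -e hex /export/brick1/some/dir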
03:13 shubhendu joined #gluster
03:20 shylesh joined #gluster
03:21 bogen1 Alright... this won't work for me.... Oracle Linux won't mount a Fedora 19 3.4.1 volume.... even if OL is running 3.4.1... (The issue for me seems to be that OL 3.4.x won't mount anything...)
03:22 kanagaraj joined #gluster
03:26 aravindavk joined #gluster
03:27 elyograg if (oracleLinux == asexual) lol();
03:29 JoeJulian bogen1: It appears that stat -i must not be returning 0
03:29 bogen1 elyograg: hah hah :)
03:29 bogen1 JoeJulian: ok
03:31 bogen1 JoeJulian: I'm running into an issue with F19 though, I created a volume with 2 brick pairs, and when I go to remove one of the pairs, it says one of the bricks does not exist
03:31 bogen1 even though I'm using the same brick url I used to create the volume
03:32 davinder joined #gluster
03:32 bogen1 I double checked on the server in question, the brick is still mounted
03:33 bogen1 sudo gluster volume create pool62 replica 2 dl580G5-08-bricker00:/export/brick15/vdr.62 dl580G5-08-bricker01:/export/brick06/vdi.62 r415-01-bricker00:/export/brick03/vdf.62 r415-02-bricker00:/export/brick03/vdf.62
03:33 bogen1 that went fine and I could start the volume and it mounted fine
03:34 fidevo joined #gluster
03:34 bogen1 sudo gluster volume remove-brick r415-01-bricker00:/export/brick03/vdf.62 105.52.20.199:/export/brick03/vdf.62 commit ; Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y ; volume remove-brick commit: failed: Volume r415-01-bricker00:/export/brick03/vdf.62 does not exist
03:35 JoeJulian The key word in that error message is, "Volume"
03:35 bogen1 yeah...
03:36 JoeJulian Should I tell you, or do you enjoy the "Aha" moment?
03:36 bogen1 pasting it in here I notice that...
03:36 bogen1 noticed...
03:37 JoeJulian :)
03:37 bogen1 I know the problem now :)
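The missing piece in the failing command above is the volume name itself; a hedged sketch of the corrected form, mirroring the create command and reusing its brick paths:
    gluster volume remove-brick pool62 r415-01-bricker00:/export/brick03/vdf.62 r415-02-bricker00:/export/brick03/vdf.62 commit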
03:44 dusmant joined #gluster
03:48 fidevo joined #gluster
03:49 bogen1 well, it is getting late... I need to leave the office....
03:55 jbrooks joined #gluster
03:55 itisravi joined #gluster
03:56 jbrooks @ports
03:56 glusterbot jbrooks: glusterd's management port is 24007/tcp and 24008/tcp if you use rdma. Bricks (glusterfsd) use 24009 & up for <3.4 and 49152 & up for 3.4. (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111 and 2049 since 3.4.
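As an illustration only, the port factoid above maps to firewall rules roughly like these on an EL-style server running glusterfs 3.4 (the tool and the exact brick-port range are assumptions; adjust to the number of bricks and your setup):
    iptables -A INPUT -p tcp --dport 24007:24008 -j ACCEPT     # glusterd management (and rdma)
    iptables -A INPUT -p tcp --dport 49152:49200 -j ACCEPT     # brick (glusterfsd) ports, 3.4 and later
    iptables -A INPUT -p tcp -m multiport --dports 111,2049,38465:38468 -j ACCEPT   # portmap, NFS, NLM
    iptables -A INPUT -p udp --dport 111 -j ACCEPT             # rpcbind also listens on UDP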
04:08 sgowda joined #gluster
04:19 spandit joined #gluster
04:21 rjoseph joined #gluster
04:22 RameshN joined #gluster
04:42 sgowda joined #gluster
04:49 bala joined #gluster
04:49 dneary joined #gluster
04:51 ppai joined #gluster
04:59 vpshastry joined #gluster
05:13 hagarth joined #gluster
05:19 Shri joined #gluster
05:26 vshankar joined #gluster
05:31 mohankumar joined #gluster
05:31 satheesh joined #gluster
05:32 nishanth joined #gluster
05:34 lalatenduM joined #gluster
05:35 satheesh joined #gluster
05:35 MrNaviPacho joined #gluster
05:37 CheRi joined #gluster
05:41 manik joined #gluster
05:46 ajha joined #gluster
05:53 psharma joined #gluster
05:54 rastar joined #gluster
06:03 kPb_in joined #gluster
06:06 mjrosenb hrmm, I'm seeing this in my logs: [2013-10-17 06:04:16.236770] D [io-threads.c:268:iot_schedule] 0-magluster-io-threads: READ scheduled as slow fop
06:06 mjrosenb i'm guessing 'slow fop' is unrelated to the fact that it seems to be taking *forever* to access a file?
06:12 shruti joined #gluster
06:13 vimal joined #gluster
06:23 rocketeer joined #gluster
06:25 rocketeer Hi all. I've just recently deployed glusterfs onto a couple of test vms to keep a wordpress directory in sync. When I put load on apache (about 10 get/s on the default page) i see gluster eating up lots of CPU.
06:25 rocketeer glusterds version 3.2.7 running on centos 6.4
06:25 jtux joined #gluster
06:26 rocketeer any thoughts on where i should start attempting to tune this?
06:26 ndarshan joined #gluster
06:26 ngoswami joined #gluster
06:27 rocketeer i am new to gluster. So any guidance anyone can provide will be greatly appreciated.
06:27 samppah rocketeer: i'd recommend to upgrade to gluster 3.4 and look at PHP APC if possible
06:27 samppah @php
06:27 glusterbot samppah: --fopen-keep-cache
06:27 glusterbot samppah: (#1) php calls the stat() system call for every include. This triggers a self-heal check which makes most php software slow as they include hundreds of small files. See http://goo.gl/uDFgg for details., or (#2) It could also be worth mounting fuse with glusterfs --attribute-timeout=HIGH --entry-timeout=HIGH --negative-timeout=HIGH
06:27 JoeJulian ~latest | rocketeer
06:27 glusterbot rocketeer: The latest version is available at http://goo.gl/zO0Fa . There is a .repo file for yum or see @ppa for ubuntu.
06:30 rocketeer thanks guys. :)
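A hedged example of what the fuse client options from that php factoid might look like in practice (server name, volume name, timeout values, and mount point are all made up):
    glusterfs --attribute-timeout=600 --entry-timeout=600 --negative-timeout=600 \
              --fopen-keep-cache --volfile-server=wpserver1 --volfile-id=wpvol /var/www/html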
06:38 ricky-ticky joined #gluster
06:38 pkoro joined #gluster
06:40 ababu joined #gluster
06:52 hagarth joined #gluster
06:54 cekstam joined #gluster
06:57 rastar joined #gluster
06:58 ekuric joined #gluster
07:06 pkoro joined #gluster
07:08 dneary joined #gluster
07:11 eseyman joined #gluster
07:17 ctria joined #gluster
07:17 anands joined #gluster
07:18 blook joined #gluster
07:21 keytab joined #gluster
07:25 hybrid5121 joined #gluster
07:26 KORG joined #gluster
07:31 andreask joined #gluster
07:33 glusterbot New news from newglusterbugs: [Bug 1020154] RC bug: Total size of master greater than slave <http://goo.gl/aZbJan>
07:34 nishanth joined #gluster
07:34 marcoceppi joined #gluster
07:36 satheesh joined #gluster
07:37 andreask1 joined #gluster
07:43 mgebbe_ joined #gluster
07:43 mgebbe_ joined #gluster
07:47 mgebbe_ joined #gluster
07:52 rgustafs joined #gluster
08:10 rastar joined #gluster
08:11 hagarth joined #gluster
08:14 calum_ joined #gluster
08:14 giannello joined #gluster
08:17 anands joined #gluster
08:18 calum_ I need to explore the realms of distributed storage as my client has 100,000's of large photographs they need to store. They are currently using a windows sbs 2003 server and it is really struggling to cope. They run a windows network and we need to be able to set user permissions as if it was a windows server. Backup is also a major concern of mine. HOW Can you backup distributed storage??. Can gluster do all of this?
08:28 kshlm joined #gluster
08:30 mjrosenb calum_: personally, I don't think that distributed storage is necessary for that amount of data.
08:30 andreask joined #gluster
08:30 andreask joined #gluster
08:38 ndevos calum_: geo-replication is often used for backing up to "the cloud"
08:43 Norky joined #gluster
08:44 ninkotech joined #gluster
08:44 calum_ mjrosenb: The trouble is it is growing by 3-4 GB per day at the moment and it could possibly turn into 30-40gb per day
08:45 calum_ ndevos: With geo-replication does it work like rsync (only copying changed blocks) or does it move the whole file. I am concerned about bandwidth
08:46 ninkotech_ joined #gluster
08:48 vpshastry joined #gluster
08:49 ndarshan joined #gluster
08:51 StarBeast joined #gluster
08:53 dusmant joined #gluster
08:58 ndevos calum_: it uses rsync in the background :)
08:58 calum_ ndevos: :)
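For context, a rough sketch of starting geo-replication of a volume to a remote backup target in the 3.3-era CLI (volume name and slave path are placeholders; newer releases add a separate create step first):
    gluster volume geo-replication photos backuphost:/backups/photos start
    gluster volume geo-replication photos backuphost:/backups/photos status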
09:01 calum_ ndevos: Should I be concerned about things like this? http://cedarandthistle.wordpress.com/2012/03/31/glusterfs-madness/
09:01 glusterbot <http://goo.gl/byghLx> (at cedarandthistle.wordpress.com)
09:01 ctria joined #gluster
09:02 StarBeas_ joined #gluster
09:02 aravindavk joined #gluster
09:03 ctria joined #gluster
09:03 meghanam joined #gluster
09:03 meghanam_ joined #gluster
09:07 DV__ joined #gluster
09:08 kanagaraj joined #gluster
09:08 RameshN joined #gluster
09:08 shruti joined #gluster
09:12 rastar joined #gluster
09:23 Shri748 joined #gluster
09:23 meghanam joined #gluster
09:23 meghanam_ joined #gluster
09:44 kevein_ joined #gluster
09:44 ngoswami joined #gluster
09:45 sgowda joined #gluster
09:46 sgowda joined #gluster
09:49 saurabh joined #gluster
09:51 ndarshan joined #gluster
09:51 dusmant joined #gluster
10:01 F^nor joined #gluster
10:03 psharma joined #gluster
10:08 glusterbot New news from resolvedglusterbugs: [Bug 1006813] xml output of gluster volume 'rebalance status' and 'remove-brick status' have missing status information in section <http://goo.gl/FJkwG7>
10:09 F^nor joined #gluster
10:13 hagarth @channelstats
10:13 glusterbot hagarth: On #gluster there have been 194712 messages, containing 8023055 characters, 1339441 words, 5329 smileys, and 715 frowns; 1192 of those messages were ACTIONs. There have been 77786 joins, 2422 parts, 75367 quits, 23 kicks, 173 mode changes, and 7 topic changes. There are currently 214 users and the channel has peaked at 239 users.
10:14 ctria joined #gluster
10:14 mbukatov joined #gluster
10:28 RameshN joined #gluster
10:35 vpshastry joined #gluster
10:37 asias joined #gluster
10:42 aravindavk joined #gluster
10:48 ngoswami joined #gluster
10:52 ndarshan joined #gluster
10:52 itisravi @help
10:52 glusterbot itisravi: (help [<plugin>] [<command>]) -- This command gives a useful description of what <command> does. <plugin> is only necessary if the command is in more than one plugin.
10:53 kkeithley1 joined #gluster
10:57 mbukatov joined #gluster
11:05 psharma joined #gluster
11:06 rastar joined #gluster
11:12 harish joined #gluster
11:13 jtux joined #gluster
11:21 blook2nd joined #gluster
11:34 glusterbot New news from newglusterbugs: [Bug 1020270] Rebalancing volume <http://goo.gl/tU7NWM>
11:36 jkroon joined #gluster
11:36 jkroon hi guys, having problems with glusterfs 3.4 starting up
11:37 jkroon which of the log files should I check for what's going one?
11:38 glusterbot New news from resolvedglusterbugs: [Bug 837495] linux-aio support in storage/posix <http://goo.gl/1IxXh>
11:38 ndarshan joined #gluster
11:39 jkroon nm ... 0-rpc-transport: /usr/lib64/glusterfs/3.4.0/rpc-transport/rdma.so: cannot open shared object file: No such file or directory
11:40 kshlm jkroon: that's normal. rdma missing will not prevent glusterd from starting.
11:40 kshlm there must be something else in the log.
11:41 jkroon yea, ip change ... failed.
11:41 jkroon seems a few folders need renaming too.
11:42 jkroon let me try that quick
11:43 jkroon sweet.
11:43 jkroon that was stupid
11:43 jkroon btw, rdma == infiniband related?
11:44 kshlm yes
11:49 spandit joined #gluster
11:50 tziOm joined #gluster
11:50 BAndrade joined #gluster
11:51 ndarshan joined #gluster
11:52 DV__ joined #gluster
11:54 andreask joined #gluster
11:57 vynt joined #gluster
11:58 Shri joined #gluster
12:01 BAndrade hey, im starting analysing glusterfs and i wish to know if there is any good tutorial out there...
12:02 jkroon quick install guide on the site works quite well, and quite a lot of documentation.
12:03 stickyboy joined #gluster
12:03 stickyboy joined #gluster
12:05 jkroon kshlm, ok, mounting is now failing.  it's moaning about connection refused, trying to connect to the right IP.  gluster volume status is (according to my understanding) reporting correctly that both bricks is listening on port 49152, however, only one of the bricks is actually listening on that port
12:07 marbu joined #gluster
12:09 jkroon [2013-10-17 12:07:09.778471] E [mount.c:117:fuse_mount_fusermount] 0-glusterfs-fuse: Mounting via helper utility (unprivileged mounting) is supported only if glusterfs is compiled with --enable-fusermount <-- this would prevent mounting?
12:10 kshlm If the bricks are on the same server, then obviously both of them cannot listen on the same port. Glusterd must have somehow screwed up assigning ports.
12:10 jkroon well, they're on two different machines.
12:10 kshlm if not, check the brick log for the brick which is not listening on the port.
12:10 kshlm fusermount is not required as well.
12:11 jkroon thanks.  the next line says "mount of 192.168.1.1:brickname to /path (...) failed
12:12 kkeithley1 you don't have a firewall running on the second machine by any chance?
12:13 jkroon yes I do, but it accepts everything on the interface in question
12:13 jkroon let met shut it just to be 100 % sure anyway
12:14 jkroon /usr/sbin/glusterfsd -s 192.168.1.2 --volfile-id stage-cfg.192.168.1.2.mnt-glustercfg -p /var/lib/glusterd/vols/stage-cfg/run/192.168.1.2-mnt-glustercfg.pid -S /var/run/0b7c6ca58faec9b54ff2082810d0e499.socket --brick-name /mnt/glustercfg -l /var/log/glusterfs/bricks/mnt-glustercfg.log --xlator-option *-posix.glusterd-uuid=7de545f5-ea70-4784-81ab-8aa99df41b4f --brick-port 49152 --xlator-option stage-cfg-server.listen-port=49152
12:14 jkroon that's the process that's supposed to listen, but netstat doesn't report it listening.
12:15 kshlm that is the brick.
12:15 kshlm does /var/log/glusterfs/bricks/mnt-glustercfg.log say anything?
12:15 jkroon 0-stage-cfg-client-1: readv failed (No data available) <-- bunch of those
12:18 B21956 joined #gluster
12:20 kshlm "0-stage-cfg-client-1" < this is strange, bricks shouldn't be having client translators.
12:22 jkroon o.O?!
12:22 jkroon client translator to the other bricks?
12:22 jkroon for replication?
12:23 hagarth joined #gluster
12:23 jkroon Server and Client lk-version numbers are not same, reopening the fds
12:23 glusterbot jkroon: This is normal behavior and can safely be ignored.
12:24 jkroon lol, ok thanks :)
12:24 ricky-ticky joined #gluster
12:24 kshlm nope, replication is done client side.
12:24 jkroon ok, you clearly understand gluster better than i do.
12:25 kshlm seems like the brick got the wrong volfile from glusterd.
12:25 kshlm the log file should have the volfile dump, between two +-----------------------+ lines, what does that say
12:25 kshlm :) well I am one of the developers, so I should be knowing this stuff.
12:27 chirino joined #gluster
12:29 jkroon http://pastebin.com/TQN8RxKa
12:29 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
12:30 jkroon http://fpaste.org/47463/12975138/
12:30 glusterbot Title: #47463 Fedora Project Pastebin (at fpaste.org)
12:32 jkroon ok, how can I scope out the brick configs on the two machines?
12:32 jkroon in /var/lib/gluster there seems to be a bunch of folders in which I can drill down ... not sure where to head down
12:34 jkroon the files under bricks looks identical in both cases.
12:35 jkroon kshlm, ??
12:36 fracky joined #gluster
12:38 kshlm so the brick was given the client volfile.
12:38 jkroon ok, so how do i fix that?
12:38 jkroon or rather, how do I trace why it gets a client volfile instead of a brick file?
12:40 jkroon yea, the logs on 192.168.1.1 and .2 looks somewhat different.
12:42 kshlm killing the process and doing a 'gluster volume start <name> force' should fix this,
12:42 jkroon unless the config file is somehow wrong ...
12:43 samsamm joined #gluster
12:43 jkroon grr, when I changed some filenames due to IP change I screwed up :(
12:44 jkroon but alas, it still fails.
12:44 jkroon ok, so do I force start it with glusterd running, but glusterfsd processes killed?
12:46 kshlm glusterd should be running for running any 'gluster' commands.
12:46 kshlm first regenerate the volfiles using 'gluster volume reset'
12:46 kshlm then kill the wayward brick process.
12:46 kshlm then do 'gluster volume start <name> force' to start the brick process again,
12:46 andreask1 joined #gluster
12:47 kshlm this should get the brick back up properly.
12:47 jkroon ok, what does the reset actually do?
12:47 jkroon the start .. force obviously just starts the volume again
12:48 kshlm reset would remove any of options set on the volume, and regenerate the volfiles.
12:48 jkroon what's the difference between the glusterfsd and glusterfs processes?
12:48 jkroon ah ok, just need to execute on one of the machines to have it done on both?
12:48 kshlm yes
12:49 kshlm glusterfsd is the brick process, glusterfs is the client process and glusterd is management process.
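Recapping kshlm's recovery steps above as one hedged shell sketch (the volume name and the brick PID are placeholders):
    gluster volume reset stage-cfg            # regenerate the volfiles (also clears any options set on the volume)
    kill <pid-of-the-wayward-glusterfsd>      # stop the brick process that loaded the wrong volfile
    gluster volume start stage-cfg force      # respawn the brick process with the correct volfile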
12:50 jkroon ok, so just starting glusterd should not actually result in any fs processes being started the way I understand it - mounting the filesystem will have the fs process spawned and end up connecting that to fsd?
12:50 asias joined #gluster
12:51 bet_ joined #gluster
12:51 jkroon kshlm, ok, so that now has one node listening on port 49152 and the other on 39153, which is good.
12:51 jkroon but what the heck did I actually just do?
12:52 jkroon Mount failed. Please check the log file for more details.
12:52 jkroon ok, there is quite a bunch of logs at this point, the ones for the bricks, and the client logs as well somewhere
12:54 kshlm the client logs will be <logdir>/<mountpoint>.log
12:54 jkroon ok, what am I looking for, E lines?
12:54 kshlm the brick logs will be in <logdir>/bricks
12:55 kshlm mostly the E lines.
12:55 edward2 joined #gluster
12:55 kshlm there might be some spurious E's as you've already experienced.
12:56 jkroon [2013-10-17 12:55:38.704974] E [mount.c:117:fuse_mount_fusermount] 0-glusterfs-fuse: Mounting via helper utility (unprivileged mounting) is supported only if glusterfs is compiled with --enable-fusermount
12:56 jkroon [2013-10-17 12:55:38.705049] E [mount.c:298:gf_fuse_mount] 0-glusterfs-fuse: mount of 192.168.1.1:stage-cfg to /etc/uls/.shared (default_permissions,noatime,nodev,nosuid,noexec,allow_other,max_read=131072) failed
12:57 kshlm are you mounting as root?
12:57 jkroon yes
12:57 jkroon from /etc/fstab
12:58 spandit joined #gluster
12:58 jkroon 192.168.1.2:stage-cfg /etc/uls/.shared glusterfs _netdev,noatime,nodev,nosuid,noexec 0 0
12:59 jkroon the _netdev label is just there to have it mount with the other network filesystems during bootup instead of trying to mount with other local filesystems.
12:59 jkroon most of the other options is handled by the kernel already afaik, even though normally the fuse process moans a little about it.
13:00 jkroon btw, after spitting out those two error lines it just says 0-daemonize: mount failed, and then continues to output about another 60 lines or so of stuff
13:00 jkroon mostly I lines
13:00 jkroon [2013-10-17 12:55:38.719435] W [glusterfsd.c:1002:cleanup_and_exit] (-->/lib64/libc.so.6(clone+0x6d) [0x7f281567498d] (-->/lib64/libpthread.so.0(+0x8ec6) [0x7f281593aec6] (-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xc5) [0x407a35]))) 0-: received signum (15), shutting down
13:01 lalatenduM_ joined #gluster
13:03 ndarshan joined #gluster
13:03 rastar jkroon, please paste the line from /etc/fstab for gluster mount
13:04 jkroon 192.168.1.2:stage-cfg /etc/uls/.shared glusterfs _netdev,noatime,nodev,nosuid,noexec 0 0
13:07 kaushal_ joined #gluster
13:12 jkroon [2013-10-17 13:11:43.291657] I [mount.c:290:gf_fuse_mount] 0-glusterfs-fuse: direct mount failed (Invalid argument), retry to mount via fusermount
13:12 jkroon ok, that seems informational from the I status, but it seems potentially serious, why would direct mounting fail?
13:13 ppai joined #gluster
13:15 ndevos jkroon: can you remove the noatime,nodev,nosuid,noexec options from the /etc/fstab line and try again?
13:15 ndevos jkroon: what version of glusterfs is that, and on what distribution/kernel?
13:15 jkroon that works ...
13:16 jkroon ok, that's scary, considering what that filesystem gets exposed to I'd really like those options ...
13:16 jkroon well, potentially exposed to.
13:16 jkroon i just noticed 3.4.1 is out ...
13:16 kaushal_ joined #gluster
13:17 ndevos jkroon: sounds like bug 853895, but there was one for rhel5 too, fixed more recently
13:17 glusterbot Bug http://goo.gl/xCkfr medium, medium, ---, csaba, CLOSED CURRENTRELEASE, CLI: read only glusterfs mount fails
13:18 ndevos ah, thats bug 996768
13:18 glusterbot Bug http://goo.gl/hmOKdK high, unspecified, ---, ndevos, MODIFIED , glusterfs-fuse3.4.0 mount option for read-only not functional on rhel 5.9
13:20 jkroon could be either one, related perhaps.
13:20 jkroon will see if an upgrade to 3.4.1 helps the situation, but that won't happen today.
13:20 jkroon thanks for the help
13:20 ndevos yes, the first contains a fix for recent kernels, the 2nd for < 2.6.21
13:21 ndevos and 3.4.1 should contain both fixes
13:23 CheRi joined #gluster
13:23 psharma joined #gluster
13:23 jkroon thanks ndevos
13:23 jkroon i would not have slept tonight ...
13:24 ndevos you're welcome, jkroon :)
13:24 dbruhn Well I have figured out an RDMA issue?
13:24 samsamm joined #gluster
13:25 jkroon dbruhn, i wish i could get some infiniband kit to play with
13:25 ndk joined #gluster
13:26 dbruhn apparently in Redhat at least if TCP/IP is enabled and dhcp is configured but there is no DHCP server, when the dhcp daemon scans for servers on interval it causes the RDMA to hiccup as well, if the cluster is larger than 6 servers this causes a time out issue and the client will get a 'No end point' error
13:26 lpabon joined #gluster
13:26 dbruhn those are obviously on the Infiniband card for the TCP/IP
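A hedged sketch of the workaround implied above, giving the IPoIB interface a static address so dhclient stops probing (RHEL-style ifcfg file; device name and addresses are made up):
    # /etc/sysconfig/network-scripts/ifcfg-ib0
    DEVICE=ib0
    ONBOOT=yes
    BOOTPROTO=static
    IPADDR=10.10.10.11
    NETMASK=255.255.255.0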
13:27 dbruhn and jkroon, you can get DDR IB hardware pretty cheap, I got a switch and 10 cards for my test system for less than a $1000 USD
13:28 dbruhn my QDR stuff I run in production is quite a bit more expensive, but still cheaper than 10GB
13:35 primusinterpares joined #gluster
13:44 lalatenduM joined #gluster
13:46 vynt joined #gluster
13:48 mbukatov joined #gluster
13:50 hagarth joined #gluster
13:51 jkroon dbruhn, at the moment that's money I need to spend on other projects :)
13:53 dbruhn jkroon, fair enough, I know how that goes all too well
13:55 cyberbootje joined #gluster
14:05 bugs_ joined #gluster
14:05 mtanner joined #gluster
14:09 glusterbot New news from resolvedglusterbugs: [Bug 996768] glusterfs-fuse3.4.0 mount option for read-only not functional on rhel 5.9 <http://goo.gl/hmOKdK>
14:13 failshell joined #gluster
14:15 Debolaz joined #gluster
14:16 kaptk2 joined #gluster
14:18 jclift_ joined #gluster
14:18 psharma joined #gluster
14:19 wushudoin joined #gluster
14:34 jag3773 joined #gluster
14:41 daMaestro joined #gluster
14:42 RedShift joined #gluster
14:43 phox joined #gluster
14:49 johnmark semiosis: ping
14:50 kkeithley_ FYI, Samba 4.1.0 is out and has the new glusterfs vfs plug-in. However the Fedora RPMs for f20 and f21/rawhide don't have the plug-in built by default. It also doesn't appear that they're going to do 4.1.0 for f19 and f18.
14:50 johnmark kkeithley_: thanks for the heads up
14:50 * johnmark wonders who does SAMBA packaging for fedora
14:50 kkeithley_ So, to address that, I've uploaded sets of Samba 4.1.0 RPMs for f18-f21 to download.gluster.org
14:50 kkeithley_ hang on for the link
14:50 kkeithley_ and I've already filed BZs to turn the glusterfs vfs plug-in on
14:51 kkeithley_ http://download.gluster.org/pub/gluster/glusterfs/samba/
14:51 glusterbot <http://goo.gl/CpU5tV> (at download.gluster.org)
14:52 kkeithley_ s/ that they're going to do/ that they're not going to do/
14:52 glusterbot What kkeithley_ meant to say was: FYI, Samba 4.1.0 is out and has the new glusterfs vfs plug-in. However the Fedora RPMs for f20 and f21/rawhide don't have the plug-in built by default. It also doesn't appear that they're not going to do 4.1.0 for f19 and f18.
14:52 kkeithley_ er, doesn't ... not. Nope, not what I meant
14:52 elyograg if I wanted to do Samba 4.1.0 on CentOS, I assume I'd be installing from source, not using packages?
14:53 johnmark elyograg: correct
14:53 johnmark elyograg: you can use the repo on the forge to compile samba 3.6.x with the vfs plugin
14:53 kkeithley_ Koji wouldn't build el6 RPMs due to missing dependencies.
14:54 hagarth joined #gluster
14:54 elyograg johnmark: are there good instructions posted somewhere?  I'm not sure whether I'd want to do the plugin with the packaged version or just go with the latest.
14:55 johnmark elyograg: see https://forge.gluster.org/samba-glusterfs
14:55 glusterbot Title: Samba-Gluster Integration - Gluster Community Forge (at forge.gluster.org)
14:56 kkeithley_ Koji wouldn't build el6 RPMs due to missing dependencies, stuff like ctdb-devel, among one or two others.
14:59 elyograg i keep getting asked about exposing our volumes to S3 clients.  It's been a while since I asked about this.  how's the swift integration these days?
15:00 kkeithley_ But fwiw, the most active Samba packagers seems to be asn and gd, which I guess to be Andreas Schneider and Günther Deschner. The BZs I filed are — by default — assigned to Günther.
15:02 kkeithley_ @learn samba as Samba 4.1.0 RPMs for Fedora 18, 19, 20, 21/rawhide, with the new glusterfs vfs plug-in, are available at http://download.gluster.org/pub/gluster/glusterfs/samba/
15:02 glusterbot kkeithley_: The operation succeeded.
15:03 kkeithley_ ,,(samba)
15:03 glusterbot Samba 4.1.0 RPMs for Fedora 18, 19, 20, 21/rawhide, with the new glusterfs vfs plug-in, are available at http://goo.gl/CpU5tV
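A minimal smb.conf sketch for exporting a volume through that glusterfs vfs plug-in (share name, volume name, and volfile server are assumptions):
    [glustershare]
        path = /
        vfs objects = glusterfs
        glusterfs:volume = myvol
        glusterfs:volfile_server = localhost
        read only = no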
15:06 ekuric1 joined #gluster
15:08 __Bryan__ joined #gluster
15:08 ekuric1 left #gluster
15:11 kkeithley_ trying to build samba 4.1.0 on real RHEL doesn't work either for missing different dependencies. And yes, I have EPEL.
15:11 johnmark Ugh... ubuntu 13.10 is still shipping with GlusterFS 3.2.7 in the universe pool
15:11 johnmark semiosis: I think we should create a glubuntu spin or something
15:12 ndevos johnmark: you should use the ,,(ppa) :P
15:12 glusterbot johnmark: The official glusterfs packages for Ubuntu are available here: 3.3 stable: http://goo.gl/7ZTNY -- 3.4 stable: http://goo.gl/u33hy
15:12 johnmark ndevos: yes, well aware of that
15:12 * ndevos knows
15:12 johnmark but I want to publicly shame them for crippling us
15:13 ndevos johnmark: have you filed a bug in the ubuntu launchpad (or wherever) to ask them to update?
15:13 johnmark ndevos: Yes - a long time ago
15:13 ndevos johnmark: shame them!
15:15 calum_ joined #gluster
15:16 pkoro joined #gluster
15:20 kkeithley_ I know it's comparing apples and oranges, but Ubuntu must have something analogous to the Fedora "update" process to stage new bits into existing and upcoming releases. Sounds like that was never done.
15:21 kkeithley_ In Fedora the burden is on the package owner/maintainer to make sure it happenbs.
15:21 kkeithley_ happens
15:21 johnmark kkeithley_: right. and I don't think any of the current packagers for Ubuntu are actively working with us
15:22 johnmark kkeithley_: the PPAs are nice, but they aren't linked to the main package repos
15:22 johnmark marcoceppi: ^^^
15:22 johnmark marcoceppi: let me know what the best way to navigate the system is
15:24 marcoceppi johnmark kkeithley_: you'll need to get gluster in to the debian eco system. The easiest way, since we sync a lot of packages from our upstream, is to get gluster regularly updated in debian sid then just open a request for syncing. Alternatively, you can find a MOTU to sponsor your packages in the PPA and get them into our "universe" archive
15:25 johnmark marcoceppi: ok. 3.4.1 is in Debian Sid now - although it probably didn't make it in time for saucy-proposed
15:26 marcoceppi johnmark: considering saucy was released about an hour ago, probably not. You can propose to have it sync'd to saucy-backports and always open the request for the T series
15:27 _zerick_ joined #gluster
15:27 johnmark marcoceppi: ok. How do I do that? Sorry, even though I have been a long-time Ubuntu user, I never quite figured out how to navigate this piece of things
15:28 marcoceppi johnmark: it's fine, I've been doing this for a while and I can't really remember. Let me find the wiki page
15:28 johnmark marcoceppi: thanks :)
15:32 marcoceppi johnmark: I believe this should get you started with the sync request: https://wiki.ubuntu.com/SyncRequestProcess
15:32 glusterbot Title: SyncRequestProcess - Ubuntu Wiki (at wiki.ubuntu.com)
15:32 marcoceppi Once it's sync'd to T series, you can open a backport request to saucy (and I'd recommend precise too)
15:34 semiosis marcoceppi: upstart is the blocker
15:34 marcoceppi semiosis: how so?
15:34 semiosis marcoceppi: debian packages use an initscript, but that's insufficient on ubuntu
15:35 semiosis mounting at boot
15:35 marcoceppi semiosis: ah
15:35 semiosis requires a merge, not a sync, from debian
15:37 marcoceppi semiosis: thanks for the info
15:37 semiosis marcoceppi: maybe you can help sort this out... my issue with contributing to universe is with the process overhead
15:37 semiosis i did it once, it took days
15:38 semiosis whereas i can push a fix to the PPA to fix problems for people in minutes
15:40 semiosis marcoceppi: also, any idea what is up with this? https://blueprints.launchpad.net/ubuntu/+spec/servercloud-p-glusterfs-mir
15:40 glusterbot <http://goo.gl/H8KSWh> (at blueprints.launchpad.net)
15:40 semiosis seems to be abandoned, for quite some time now
15:41 marcoceppi semiosis: I'll find out if there is any way to expedite the merge process, we have a cloud/server sprint next week
15:42 semiosis johnmark: you should be interested in the MIR i just linked
15:42 bala joined #gluster
15:42 marcoceppi semiosis: well, you said we can't quite MIR because of init. I'll poke Antonio, it was proposed for UDS-P which has since come and gone.
15:44 zaitcev joined #gluster
15:45 kkeithley_ johnmark: if you want to pester Günther and Andreas via the BZs, the numbers are 1020327 and 1020329
15:45 johnmark semiosis: ah, thanks
15:46 johnmark semiosis: excellent....
15:46 johnmark marcoceppi: thanks for your help! would be great to get this kickstarted
15:46 aliguori joined #gluster
15:47 saurabh joined #gluster
15:52 sprachgenerator joined #gluster
15:53 semiosis marcoceppi: johnmark: here's another problem...
15:53 semiosis upgrading
15:53 semiosis say for example 3.5.0 comes out and isn't compatible with 3.4.x
15:54 semiosis i make a new PPA for each minor release (the Y in X.Y.Z) so people have to manually add that to upgrade
15:54 anands joined #gluster
15:54 semiosis what would happen if someone was using glusterfs from universe?
15:55 johnmark semiosis: great question
15:55 johnmark marcoceppi: semiosis: I'm curious what other projects do about this
15:56 marcoceppi johnmark: how does redhat do this in RPM?
15:57 semiosis personally, i think the best practice is to always upgrade glusterfs manually.  and i would even advise people to make their *own* PPA and copy my packages into that, so they have total control (backed up by their own crypto keys) over what gets installed on their servers
15:57 marcoceppi semiosis: well, that won't fly in universe
15:57 semiosis of course
15:58 semiosis which is why the latest glusterfs isn't in universe :)
15:58 marcoceppi :)
15:59 johnmark heh
15:59 johnmark marcoceppi: in Fedora-land, we take the latest release and target it for the next Fedora release
15:59 semiosis so, if johnmark wants someone to shame for ubuntu universe not having the latest glusterfs...
15:59 johnmark marcoceppi: and then it goes into the next CentOS release - eventually
15:59 * semiosis gets ALL THE SHAME
16:00 johnmark LOL
16:00 johnmark semiosis: no good deed goes unpunished? LOL
16:00 semiosis :D
16:01 johnmark semiosis: it would be hard for me to blame the guy that produces all of our Ubuntu builds :)
16:01 marcoceppi Well, having a blessed PPA isn't /that/ bad, bth
16:01 marcoceppi tbh*
16:01 semiosis lots of other nice projects do that too
16:01 kkeithley_ gluster doesn't get into CentOS because of anything we (and by we I mean I) do in Fedora
16:02 johnmark marcoceppi: true. But I want GlusterFS to be more easily installed for the Ubuntu base
16:02 johnmark kkeithley_: true. well, sort of.
16:02 marcoceppi johnmark: definitely
16:02 johnmark marcoceppi: like, eg. Ceph
16:02 johnmark just to pick one project at random
16:02 johnmark haha
16:02 marcoceppi ;)
16:02 johnmark :)
16:03 kkeithley_ that was totally random
16:03 johnmark kkeithley_: absolutely! ;)
16:03 kkeithley_ or maybe virtually random
16:03 marcoceppi I'll chat about how we deal with non-backwards compat releases, but honestly what you'll probably find is you create a metapackage for glusterfs, then have glusterfs3.4 glusterfs3.5 that all provide glusterfs
16:03 johnmark marcoceppi: huh, ok
16:04 johnmark marcoceppi: and the metapackage has all the packaging dependencies?
16:04 semiosis marcoceppi: and that is something that should be done in debian (too)
16:04 marcoceppi johnmark: forget I said metapackage, that was the wrong term
16:04 johnmark marcoceppi: ok :)
16:04 johnmark marcoceppi: in the rubygems universe, there's the concept of bundles
16:04 johnmark which list out all the exact versions of packages/gems to install
16:05 johnmark is that similar?
16:05 johnmark anyway, gotta run. be back in an hour
16:06 marcoceppi semiosis: johnmark: I'll give you an example, we have juju-0.6 juju-0.7 and juju-core packages, they all provide "juju", when you apt-get install juju you get juju-core (latest release) the other two 0.6 0.7 are old release before we re-wrote in golang. You could do the same where "glusterfs" will always install the latest glusterfsVER package, but users can choose to stay on an "older" version of glusterfs
16:06 marcoceppi again, need to double check, but that's my assumption
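A hedged debian/control fragment of that versioned-package idea (the package names are illustrative, not actual archive packages):
    Package: glusterfs-server-3.4
    Provides: glusterfs-server
    Conflicts: glusterfs-server-3.3, glusterfs-server-3.5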
16:15 shylesh joined #gluster
16:16 semiosis marcoceppi: upgrading to kubuntu 13.10 now :D
16:17 johnmark semiosis: w00t
16:19 pono left #gluster
16:20 zerick joined #gluster
16:22 Debolaz joined #gluster
16:24 ctria joined #gluster
16:26 Mo__ joined #gluster
16:31 dusmant joined #gluster
16:34 fuzzy_id joined #gluster
16:34 fuzzy_id hi there
16:35 fuzzy_id just found out that i have lots of ' E [socket.c:2788:socket_connect] 0-management: connection attempt failed (Connection refused)' entries in etc-glusterfs-glusterd.vol.log
16:35 fuzzy_id wondering where they are coming from
16:36 fuzzy_id tcpdump only shows connections between the two peers which see each other
16:36 fuzzy_id afaik there is no client running…
16:37 DV joined #gluster
16:38 LoudNoises joined #gluster
16:46 Excolo joined #gluster
16:48 Excolo Ok, so to make a long story as short as possible and get to my question: I have a gluster setup of 2 servers in a replication setup. I killed gluster on both servers and tried changing from ext4 to xfs on one of the servers (to get around the ext4 bug). This pretty much trashed everything in my partition that /export is using. My question is, if I delete everything in /export on the corrupted server, bring it back up, and do a rebalance will it rebuild the volume on that server?
16:54 fuzzy_id ext4 bug?
16:55 Excolo http://joejulian.name/blog/glusterfs-bit-by-ext4-structure-change/
16:55 glusterbot <http://goo.gl/PEBQU> (at joejulian.name)
16:57 Excolo essentially though, as far as my question is concerned, dont worry about that. The damage has been done. Essentially now I'm trying to repair the damage
16:57 Excolo and I THINK essentially wiping the brick, and doing a rebalance would work... but in all honesty i really have no idea
17:03 semiosis Excolo: rebalance is not the right thing to do in this case
17:03 semiosis consider this strategy...
17:05 semiosis leave gluster off for now.  take the "trashed" brick and format it with XFS.  use rsync (with -aHAX --whole-file --inplace options, among others you might also want) to copy data from the "good" ext4 brick to the new clean XFS brick
17:05 semiosis then go the opposite direction, reformatting the remaining ext4 brick with XFS and rsyncing the data back onto it
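A hedged sketch of that recovery, with the device, remote host, and brick paths as placeholders (the rsync options are the ones semiosis lists above; -a copies the .glusterfs directory along with the rest of the brick):
    mkfs.xfs -i size=512 /dev/sdX1                   # reformat the trashed brick with XFS
    mount /dev/sdX1 /export/brick1
    rsync -aHAX --whole-file --inplace goodserver:/export/brick1/ /export/brick1/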
17:06 [o__o] left #gluster
17:06 semiosis you should have backups of your data before trying this!
17:06 Excolo I already did before I messed with the filesystem
17:07 Excolo problem is, last time I had to rebuild gluster it took like 4 days
17:07 Excolo so im trying to avoid that
17:07 Excolo (a little backstory, I inherited a bit of a nightmare with the whole situation. it's not being used as it should, running on ext4 across datacenters in different states because it needs a master master like setup)
17:08 [o__o] joined #gluster
17:08 Excolo I'll try that, but I have a feeling it's going to take a while (we tried doing that with one of our backup servers doing the rsync -aHAX stuff and I think it took about a full day?)
17:09 Excolo still better than rebuilding from scratch
17:09 Excolo thanks
17:10 Excolo and now running that, it should include the .glusterfs dir right?
17:11 semiosis i think so
17:21 JoeJulian @ext4
17:21 glusterbot JoeJulian: Read about the ext4 problem at http://goo.gl/Jytba or follow the bug report here http://goo.gl/CO1VZ
17:21 JoeJulian @forget ext4
17:21 glusterbot JoeJulian: The operation succeeded.
17:22 lalatenduM joined #gluster
17:22 JoeJulian Hmm, should I bother re-learning that one since it's been "fixed"?
17:23 semiosis yes
17:30 andreask joined #gluster
17:30 B21956 left #gluster
17:30 JoeJulian @learn ext4 as The ext4 bug has been fixed in 3.3.2 and 3.4.0. Read about the ext4 problem at http://goo.gl/Jytba or follow the bug report here http://goo.gl/CO1VZ
17:30 glusterbot JoeJulian: The operation succeeded.
17:35 wushudoin joined #gluster
17:35 glusterbot New news from newglusterbugs: [Bug 985957] Rebalance memory leak <http://goo.gl/9c7EQ>
17:36 rob99 joined #gluster
17:38 rob99 hello!
17:38 glusterbot rob99: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
17:45 rob99 I'm having trouble with GlusterFS 3.2 on a CentOS 6.4 box. I follow the quick start guide to the letter, but cannot perform the test mount of my volume in Step 7. The mount appears to succeed, but any attempts to access the volume hang with no error, and generate RPC related errors in the server log (I can post if necessary). Can anyone help troubleshoot this?
17:46 kkeithley_ ,,(repo)
17:46 glusterbot I do not know about 'repo', but I do know about these similar topics: 'git repo', 'ppa repo', 'repos', 'repository', 'yum repo'
17:47 kkeithley_ 3.2 is pretty old. Suggest you try with 3.4.1 from ,,(yum repo)
17:47 glusterbot The official community glusterfs packages for RHEL (including CentOS, SL, etc), Fedora 17 and earlier, and Fedora 18 arm/armhfp are available at http://goo.gl/s077x. The official community glusterfs packages for Fedora 18 and later are in the Fedora yum updates repository.
17:48 kkeithley_ ,,(repos)
17:48 glusterbot See @yum, @ppa or @git repo
17:48 kkeithley_ ,,(yum)
17:48 glusterbot The official community glusterfs packages for RHEL (including CentOS, SL, etc) are available at http://goo.gl/42wTd5 The official community glusterfs packages for Fedora 18 and later are in the Fedora yum updates (or updates-testing) repository.
17:49 kkeithley_ @forget "yum repo"
17:49 glusterbot kkeithley_: The operation succeeded.
17:49 kkeithley_ @learn yum repo as The official community glusterfs packages for RHEL (including CentOS, SL, etc) are available at http://goo.gl/42wTd5 The official community glusterfs packages for Fedora 18 and later are in the Fedora yum updates (or updates-testing) repository.
17:49 glusterbot kkeithley_: The operation succeeded.
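As a hedged example of using that community repo on CentOS (the exact .repo path under download.gluster.org is an assumption; follow the link in the factoid for the current one):
    cd /etc/yum.repos.d/
    wget http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/glusterfs-epel.repo
    yum install glusterfs-server glusterfs-fuse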
17:50 davinder joined #gluster
17:52 SteveCooling joined #gluster
17:53 rob99 I only used 3.2 since that's what's available with the distro. Was there a known issue with 3.2 that caused it to not allow volumes to be mounted?
17:55 kkeithley_ no, but it's sufficiently old that if you run into other problems you're going to be told to update, so might as well get that out of the way.
17:56 rob99 I see. Ok, I'll go ahead and perform the upgrade and report back if that doesn't resolve my issue. Thanks for the suggestion.
18:04 vpshastry joined #gluster
18:04 vpshastry left #gluster
18:06 glusterbot New news from newglusterbugs: [Bug 1003184] EL5 package missing %_sharedstatedir macro <http://goo.gl/Yp1bL1>
18:06 bennyturns joined #gluster
18:06 anands joined #gluster
18:12 Technicool joined #gluster
18:17 rwheeler joined #gluster
18:19 XpineX_ joined #gluster
18:27 kkeithley_ johnmark: what do you suppose happened to that CentOS guy, was it Karanbir Singh, that was going to package newer versions of gluster in some CentOS yum repo?  E.g. the centosplus repo?
18:27 kkeithley_ Doesn't seem to have ever happened.
18:28 jclift_ Oh, was he going to do that?
18:28 * jclift_ knows KB
18:28 jclift_ I can definitely ping him
18:29 jclift_ That reminds me, he was suggesting the creation of a "Distributed Storage meetup" or similar for London too.  I was going to catch up with him about that.
18:30 jclift_ kkeithley: He was very sick recently though.  Travelled way too much giving presentations in different countries, didn't get anywhere near enough sleep.  Sick for something like 9 days.
18:30 jclift_ kkeithley: I think he's recovered now tho.
18:30 * jclift_ will see him at LinuxCon next week I think (unsure)
18:30 jclift_ LinuxCon EU that is
18:31 kkeithley_ Pretty sure it was him. I personally subscribe to the "I can sleep when I'm dead" philosophy
18:31 jclift_ Heh
18:31 jclift_ kkeithley: https://twitter.com/kbsingh
18:31 glusterbot Title: Karanbir Singh (kbsingh) on Twitter (at twitter.com)
18:33 vpshastry joined #gluster
18:33 kkeithley_ yeah, looking through old emails, he's the guy. I guess he got busy and it never happened. Thus we have people installing CentOS and Gluster 3.2. Well, even if newer versions were in centosplus we'd probably still have that happening.
18:33 johnmark kkeithley_: yeah - will try to sync up with him ASAP
18:33 johnmark kkeithley_: but there are other efforts as well
18:35 XpineX joined #gluster
18:37 kkeithley_ oh, such as? Since I build the EPEL packages we have on download.gluster.org, I'm curious what else is out there. Might be good to teach glusterbot about them! (And remember that the reason we don't have newer versions in EPEL proper is because HekaFS is in EPEL, and once RHEL 6.5 ships we'll have to withdraw it anyway.)
18:43 * jclift_ wonders if there's a way we can just get centosplus to auto-pull in the Gluster.org built packages or something
18:49 kkeithley_ yes, that would be a reasonable thing to do
18:50 kkeithley_ would be -> might be
18:53 semiosis jclift_: kkeithley_: kbsingh is here
18:54 jclift_ Cool. :)
18:55 kkeithley_ indeed. don't know why I didn't see him when I looked.
18:59 andreask joined #gluster
18:59 semiosis also sometimes (though not lately) as z00dax or something like that
19:06 khushildep joined #gluster
19:19 calum_ joined #gluster
19:19 Debolaz joined #gluster
19:49 bogen1 joined #gluster
19:54 dneary joined #gluster
20:00 Excolo joined #gluster
20:06 glusterbot New news from newglusterbugs: [Bug 1004519] SMB:smbd crashes while doing volume operations <http://goo.gl/DMsNHh>
20:09 tg2 question, has anybody tested with a 3.3.1 server and 3.4 clients?
20:09 tg2 was the 'compatability' code in 3.3.x thorough enough to allow that
20:09 tg2 or better to do servers on 3.4 first then clients
20:09 bogen1 tg2: interesting, I was just about to try that (well, the other way around)
20:10 bogen1 (servers on 3.4.x clients on 3.3.x)
20:14 zerick joined #gluster
20:19 semiosis tg2: afaik servers need to be upgraded before clients.  check out ,,(3.4 upgrade notes)
20:19 glusterbot tg2: http://goo.gl/SXX7P
20:21 badone joined #gluster
20:23 wica joined #gluster
20:23 kkeithley_ 3.3 and 3.4 are ostensibly compatible. 3.2 is not compatible with 3.3 or 3.4. And yes, upgrade servers first as per the upgrade notes
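A rough per-server sketch of that upgrade order on EL-style packages (service names assume sysvinit; the linked 3.4 upgrade notes are the authoritative steps):
    # on each server, one at a time:
    service glusterd stop
    yum update "glusterfs*"
    service glusterd start
    # only after all servers are upgraded, update and remount the clients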
20:26 wica Hi, I used had a "cpu stuck" on als my cores on 1 of 6 peers. A kernel bug, so ni gfs bug. But is there a way to detect a problem with a peer. And drop this one?
20:26 T0aD joined #gluster
20:26 wica s/ni/not/
20:26 glusterbot What wica meant to say was: Hi, I used had a "cpu stuck" on als my cores on 1 of 6 peers. A kernel bug, so not gfs bug. But is there a way to detect a problem with a peer. And drop this one?
20:26 wica *lol*
20:27 semiosis some problems are detected, but not all
20:27 semiosis if you can reproduce a problem that causes gluster to stop working please file a bug
20:27 glusterbot http://goo.gl/UUuCq
20:27 wica this one was blocking io on all the bricks on that peer. But the volumes had issues with it.
20:28 wica semiosis: I have some logs. tgf remote syslog ;)
20:28 wica will try to make a report
20:29 \_pol joined #gluster
20:30 \_pol_ joined #gluster
20:36 bogen1 I keep getting "0-glusterfs: failed to get the 'volume file' from server, 0-mgmt: failed to fetch volume file (key:/pool62)" when trying to access 3.4.1 from 3.3.2. Accessing 3.4.1 from 3.4.1 of course works for me.
21:01 F^nor joined #gluster
21:19 DV joined #gluster
21:29 premera joined #gluster
21:37 fidevo joined #gluster
21:43 AndreyGrebenniko joined #gluster
21:49 SpeeR joined #gluster
21:55 JoeJulian @forget yum repo
21:55 glusterbot JoeJulian: The operation succeeded.
21:55 JoeJulian @alias "yum" "yum repo"
21:55 glusterbot JoeJulian: The operation succeeded.
22:15 nasso joined #gluster
22:27 kkeithley_ fyi ,,(samba)
22:27 glusterbot Samba 4.1.0 RPMs for Fedora 18, 19, 20, 21/rawhide, with the new glusterfs vfs plug-in, are available at http://goo.gl/CpU5tV
22:29 JoeJulian cool
22:29 JoeJulian What about the samba4 packages for EL6?
22:31 kkeithley_ 4.1.0 has dependencies that don't exist for EL6. Seems to be a slightly different set depending on whether I build in Koji for EPEL or build on RHEL/CentOS. (One of them in Koji is glusterfs-api{,-devel}, which is not a surprise)
22:33 kkeithley_ i.e. they don't exist in either the RHEL packages or in EPEL either.
22:34 kkeithley_ When I have some time I'll build the missing packages and then samba for RHEL; I didn't have time to do that today.
22:35 kkeithley_ and actually I didn't try on CentOS; I just presumed they would not.
22:38 JoeJulian Yeah, CentOS is basically just a recompile of RHEL with the trademarks removed so if it doesn't work in RHEL, it won't work in CentOS.
22:44 MrNaviPacho joined #gluster
22:44 bennyturns joined #gluster
22:48 _pol joined #gluster
22:50 JoeJulian hmm, I hadn't noticed that samba4 (4.0) was a RHEL package. I just assumed I was getting it from epel.
23:10 rob99 Hi! I'm receiving the error 'volume start: <vol name>: failed' when I attempt to start my newly created volume. In which log might I find details as to why starting it failed? (version 3.4.1 on CentOS6.4)
23:11 glusterbot New news from resolvedglusterbugs: [Bug 862082] build cleanup <http://goo.gl/pzQv9M>
23:26 zaitcev joined #gluster
23:32 _pol_ joined #gluster
23:39 jbrooks joined #gluster
