IRC log for #gluster, 2014-05-07

All times shown according to UTC.

Time Nick Message
00:54 yinyin joined #gluster
00:56 chirino joined #gluster
00:57 hchiramm_ joined #gluster
01:03 mkzero joined #gluster
01:11 mjsmith2 joined #gluster
01:18 jbrooks joined #gluster
01:20 jbrooks left #gluster
01:20 jbrooks joined #gluster
01:32 gdubreui joined #gluster
01:33 Honghui joined #gluster
01:37 harish joined #gluster
01:37 gdubreui joined #gluster
01:39 yinyin joined #gluster
01:47 ilbot3 joined #gluster
01:47 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:57 coredump joined #gluster
01:59 chirino joined #gluster
02:01 vpshastry joined #gluster
02:06 jbrooks joined #gluster
02:07 jbrooks joined #gluster
02:07 jbrooks left #gluster
02:14 rwheeler joined #gluster
02:19 harish joined #gluster
02:33 gmcwhistler joined #gluster
02:39 nishanth joined #gluster
02:40 hagarth joined #gluster
03:02 gmcwhist_ joined #gluster
03:11 gmcwhistler joined #gluster
03:26 rahulcs joined #gluster
03:27 kanagaraj joined #gluster
03:27 RameshN joined #gluster
03:28 chirino joined #gluster
03:29 shubhendu joined #gluster
03:32 Ark joined #gluster
03:39 bharata-rao joined #gluster
03:41 haomaiwang joined #gluster
03:49 itisravi joined #gluster
03:50 kumar joined #gluster
03:50 rahulcs joined #gluster
03:55 Honghui joined #gluster
03:57 rahulcs joined #gluster
03:58 DV joined #gluster
04:01 ppai joined #gluster
04:10 hagarth joined #gluster
04:12 gmcwhistler joined #gluster
04:22 sudhir joined #gluster
04:23 ndarshan joined #gluster
04:28 ultrabizweb joined #gluster
04:29 chirino joined #gluster
04:37 gmcwhistler joined #gluster
04:48 nishanth joined #gluster
04:49 rahulcs joined #gluster
04:50 wgao joined #gluster
04:51 vpshastry joined #gluster
04:52 Honghui joined #gluster
04:54 arya joined #gluster
04:54 nshaikh joined #gluster
04:57 kdhananjay joined #gluster
05:01 rahulcs_ joined #gluster
05:06 Honghui joined #gluster
05:09 deepakcs joined #gluster
05:12 atinmu joined #gluster
05:13 hchiramm__ joined #gluster
05:16 prasanthp joined #gluster
05:17 davinder joined #gluster
05:19 raghu joined #gluster
05:28 hagarth joined #gluster
05:29 harish joined #gluster
05:31 dusmant joined #gluster
05:32 surabhi joined #gluster
05:36 bala1 joined #gluster
05:44 NCommander joined #gluster
05:52 kanagaraj joined #gluster
05:53 dusmant joined #gluster
05:54 kanagaraj joined #gluster
05:55 overclk joined #gluster
05:56 rjoseph joined #gluster
06:02 chirino joined #gluster
06:04 lalatenduM_ joined #gluster
06:09 jbrooks joined #gluster
06:16 Philambdo joined #gluster
06:20 rjoseph joined #gluster
06:21 ngoswami joined #gluster
06:22 dusmant joined #gluster
06:29 ceiphas joined #gluster
06:31 ceiphas i have a big problem here with my gluster... i have two peers with a brick each that form a volume with replication and one client that mounts that volume via fuse. when i delete a file from the volume on the client the file reappears after about five minutes...
06:34 gmcwhistler joined #gluster
06:47 samppah_ ceiphas: sounds like a split brain
06:47 samppah_ ceiphas: are you using gluster only through fuse mount point?
06:50 d-fence joined #gluster
06:57 monchi joined #gluster
07:01 ceiphas samppah: yes, only fuse mounts
07:01 ktosiek joined #gluster
07:02 chirino joined #gluster
07:03 glusterbot New news from resolvedglusterbugs: [Bug 976750] Disabling NFS causes E level errors in nfs.log. <https://bugzilla.redhat.com/show_bug.cgi?id=976750>
07:04 ctria joined #gluster
07:04 samppah ceiphas: what version you are using?
07:04 eseyman joined #gluster
07:04 ceiphas samppah: that was my bug message... i have 3.4.2
07:10 psharma joined #gluster
07:15 samppah ceiphas: oh, did you make bug report of this? i'm sorry i must have missed that
07:15 ceiphas no, i didn't, i just found this bug while searching for a solution for my problem
07:26 ceiphas samppah: i don't know if i have a bug or i am simply stupid, but this reappearing of deleted files problem is really severe
07:27 edward2 joined #gluster
07:28 rahulcs joined #gluster
07:29 samppah ceiphas: nod, that sounds like there is a split brain issue.. can you check log files for more information? client side and server side?
07:30 samppah you can send output of them to pastie.org if you like
07:30 ceiphas samppah: which file?
07:30 samppah or ,,(paste)
07:30 glusterbot For RPM based distros you can yum install fpaste, for debian and ubuntu it's pastebinit. Then you can easily pipe command output to [f] paste [binit] and it'll give you a URL.
07:30 samppah @log
07:30 glusterbot samppah: I do not know about 'log', but I do know about these similar topics: 'Joe's blog', 'chat logs', 'loglevel', 'logstash'
07:30 samppah ceiphas: on EL based distros the client log is /var/log/glusterfs/mnt-point.log
07:30 samppah on serverside it's /var/log/glusterfs/bricks/brick.log
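For reference, a minimal sketch of pulling up those logs, assuming a volume mounted at /mnt/gluster and bricks under /data/brick (both names are placeholders):

    # client side: the log name follows the mount point
    tail -n 100 /var/log/glusterfs/mnt-gluster.log
    # server side: one log per brick, plus the self-heal daemon log
    tail -n 100 /var/log/glusterfs/bricks/*.log
    tail -n 100 /var/log/glusterfs/glustershd.log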
07:31 TvL2386 joined #gluster
07:32 ceiphas the brick log is empty except for one line:  0-glusterfs: No change in volfile, continuing
07:33 ceiphas samppah: same on the client
07:35 samppah ceiphas: ok, anything in serveside in /var/log/glusterfs ?
07:35 samppah there should be self heal etc log files
07:35 ceiphas the glustershd.log has the same content
07:39 liquidat joined #gluster
07:41 samppah hmm, sounds odd
07:41 samppah same thing on both servers?
07:42 kanagaraj joined #gluster
07:47 mshadle can gluster 3.3 be upgraded to 3.5 (simple setup, only 2 servers, 2 total volumes) seamlessly?
07:50 ThatGraemeGuy joined #gluster
07:51 glusterbot New news from newglusterbugs: [Bug 1091777] Puppet module gluster (purpleidea/puppet-gluster) to support RHEL7/Fedora20 <https://bugzilla.redhat.com/show_bug.cgi?id=1091777>
07:51 fsimonce joined #gluster
08:01 saurabh joined #gluster
08:03 ceiphas samppah: same thing on all servers
08:04 chirino joined #gluster
08:08 kanagaraj joined #gluster
08:15 ccha2 mshadle: you can't mix <3.3 and 3.4+
08:16 ccha2 you need a cold upgrade
08:17 aravindavk joined #gluster
08:20 rahulcs joined #gluster
08:24 ceiphas samppah: any idea where i could get logs with errors?
08:30 rahulcs joined #gluster
08:37 hagarth joined #gluster
08:37 ctria joined #gluster
08:39 samppah ceiphas: can you send output of gluster volume info to pastie.org?
08:39 ceiphas samppah: which file should i send?
08:40 samppah ceiphas: run command: gluster volume info
08:43 ceiphas samppah: http://pastebin.com/07h5ZsES
08:43 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
08:44 samppah ceiphas: thanks, also: gluster vol status
08:45 ceiphas samppah: http://fpaste.org/99794/45230313/
08:45 glusterbot Title: #99794 Fedora Project Pastebin (at fpaste.org)
08:48 kanagaraj joined #gluster
08:51 sputnik13 joined #gluster
08:52 sputnik13 joined #gluster
08:52 dusmant joined #gluster
08:52 RameshN joined #gluster
08:54 nishanth joined #gluster
08:57 MrAbaddon joined #gluster
09:03 tryggvil joined #gluster
09:03 sputnik13 joined #gluster
09:05 saravanakumar1 joined #gluster
09:06 kanagaraj joined #gluster
09:09 social joined #gluster
09:11 RameshN joined #gluster
09:12 suliba joined #gluster
09:14 dusmant joined #gluster
09:15 ceiphas samppah: anything wrong?
09:18 calum_ joined #gluster
09:19 ctria joined #gluster
09:21 glusterbot New news from newglusterbugs: [Bug 1095179] Gluster volume inaccessible on all bricks after a glusterfsd segfault on one brick <https://bugzilla.redhat.com/show_bug.cgi?id=1095179>
09:24 wd_ joined #gluster
09:24 wd_ joined #gluster
09:24 nishanth joined #gluster
09:24 nthomas joined #gluster
09:31 rahulcs joined #gluster
09:38 wd_ left #gluster
09:44 Honghui joined #gluster
09:52 ceiphas i have a big problem here with my gluster... i have two peers with a brick each that form a volume with replication and one client that mounts that volume via fuse. when i delete a file from the volume on the client the file reappears after about five minutes... Log files show no errors, self-healing logs, too
09:57 rahulcs joined #gluster
10:00 nshaikh joined #gluster
10:00 rahulcs joined #gluster
10:03 haomaiwang joined #gluster
10:28 rahulcs joined #gluster
10:36 aviksil_ joined #gluster
10:36 badone joined #gluster
10:36 aviksil joined #gluster
10:48 rahulcs joined #gluster
10:51 glusterbot New news from newglusterbugs: [Bug 1091677] Issues reported by Cppcheck static analysis tool <https://bugzilla.redhat.com/show_bug.cgi?id=1091677>
10:57 MrAbaddon joined #gluster
10:59 diegows joined #gluster
11:02 ira joined #gluster
11:03 Slashman joined #gluster
11:04 tdasilva joined #gluster
11:09 jbrooks joined #gluster
11:11 Honghui joined #gluster
11:19 hagarth joined #gluster
11:21 meghanam joined #gluster
11:21 meghanam_ joined #gluster
11:22 glusterbot New news from newglusterbugs: [Bug 1077516] [RFE] :- Move the container for changelogs from /var/run to /var/lib/misc <https://bugzilla.redhat.com/show_bug.cgi?id=1077516>
11:23 ppai joined #gluster
11:24 rahulcs joined #gluster
11:26 harish joined #gluster
11:30 edward1 joined #gluster
11:32 shubhendu joined #gluster
11:32 Chewi joined #gluster
11:34 rahulcs joined #gluster
11:50 rahulcs joined #gluster
11:51 itisravi_ joined #gluster
11:53 andreask joined #gluster
12:00 jmarley joined #gluster
12:00 jmarley joined #gluster
12:01 davinder joined #gluster
12:03 MrAbaddon joined #gluster
12:07 ppai left #gluster
12:07 rahulcs joined #gluster
12:07 hchiramm__ joined #gluster
12:11 rwheeler joined #gluster
12:17 atrius joined #gluster
12:19 d-fence joined #gluster
12:21 MrAbaddon joined #gluster
12:29 ktosiek what's the fastest way to import data (tons of small files) into a gluster replicated volume?
12:29 ktosiek should I copy them into a mounted volume, or copy them into a brick and then trigger heal?
12:30 ktosiek (I'm using FUSE client, and it's using all the CPU it can find while copying the data in)
12:39 sroy_ joined #gluster
12:40 somepoortech ktosiek: if you copy the files into a brick directory you have to touch the files from the client to get gluster to notice them
12:41 nshaikh joined #gluster
12:41 ktosiek is it enough? Don't I have to set some xattrs?
12:42 ceiphas i have a big problem here with my gluster... i have two peers with a brick each that form a volume with replication and one client that mounts that volume via fuse. when i delete a file from the volume on the client the file reappears after about five minutes... Brick logs show no errors, self healing logs, too
12:42 somepoortech ktosiek: and you will probably want to rebalance after that
12:42 tdasilva left #gluster
12:42 somepoortech ktosiek: I'm no expert, but it has worked for me
12:43 ktosiek rebalance, not heal?
12:43 somepoortech ktosiek: I imagine you want to distribute the files; if you are talking about replicate it should replicate automatically once you do something like cat the file
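A rough sketch of the two import paths discussed above, assuming a replica volume named myvol served from server1 (all names hypothetical); copying through a client mount is the safer route, since gluster assigns the gfid xattrs itself:

    # preferred: write through a fuse mount
    mount -t glusterfs server1:/myvol /mnt/myvol
    rsync -a /source/data/ /mnt/myvol/data/

    # if files were copied straight into a brick instead: stat them from a
    # client so the lookup triggers replication, then rebalance if distributed
    find /mnt/myvol/data -exec stat {} + > /dev/null
    gluster volume rebalance myvol start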
12:44 somepoortech ceiphas: sounds like you have a split brain issue
12:44 ceiphas somepoortech: how do i diagnose this problem, and how do i solve it?
12:45 somepoortech ceiphas: does `gluster volume heal <volume> info split-brain` have any info in it?
12:45 ceiphas nope... should i delete the file and check it again?
12:46 plarsen joined #gluster
12:47 somepoortech ceiphas: if it's not a split brain I'm not sure what's going on
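The usual self-heal checks on 3.4.x, with myvol standing in for the volume name (a sketch, not a full split-brain recovery procedure):

    gluster volume heal myvol info              # entries still pending heal
    gluster volume heal myvol info split-brain  # entries gluster considers split-brained
    gluster volume heal myvol info heal-failed  # entries the self-heal daemon gave up on
    gluster volume heal myvol full              # force a full crawl/heal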
12:47 ceiphas somepoortech: the log files are empty except for one other bug and simple "everything fine"-messages
12:48 somepoortech ceiphas: sorry, I'm no expert.. maybe someone else can give you more input
12:48 ceiphas somepoortech: i experience this bug in 3.4.2 don't know if it has something to do with it: https://bugzilla.redhat.com/show_bug.cgi?id=976750
12:48 glusterbot Bug 976750: low, medium, ---, vagarwal, CLOSED CURRENTRELEASE, Disabling NFS causes E level errors in nfs.log.
12:49 ceiphas no, glusterbot, it is NOT FIXED
13:02 ktosiek how bad should be glusters behaviour with big folders? I have 60k+ files in one, and doing ls on it... well, I stopped it after 30 seconds
13:04 kkeithley ceiphas: It's fixed in 3.5. That's what CURRENTRELEASE means. We asked people which bugs they wanted backported to 3.4.3; this wasn't one that anyone asked for. We're going to do a 3.4.4. I'll presume you'd like to see this backported.
13:05 ceiphas kkeithley: yes, really urgent, as our machines get their data from a gluster, and produce some things twice...
13:06 ceiphas is 3.5 stable yet, and is it possible to hot upgrade?
13:08 kkeithley I'd say it's stable, but I'm not using it in production. ;-)  You should be able to upgrade to 3.5.0 the same way you'd upgrade, e.g., 3.4.2 to 3.4.3
13:09 ceiphas kkeithley: damn, have to wait for 3.5.1 as this bug is still in QA: https://bugzilla.redhat.com/show_bug.cgi?id=1074023
13:09 glusterbot Bug 1074023: high, unspecified, ---, ndevos, ON_QA , list dir with more than N files results in Input/output error
13:10 cvdyoung Is there documentation for setting up a samba export of the gluster volume?  I have my config setup, but when I try to view the volume, I am getting errors "vfs_init failed for service home".  I pasted my volume config for /etc/samba/smb.conf at:
13:10 cvdyoung http://fpaste.org/99855/68216139/
13:10 glusterbot Title: #99855 Fedora Project Pastebin (at fpaste.org)
13:11 lalatenduM_ @sambavfs
13:11 glusterbot lalatenduM_: http://lalatendumohanty.wordpress.com/2014/02/11/using-glusterfs-with-samba-and-samba-vfs-plugin-for-glusterfs-on-fedora-20/
13:11 lalatenduM_ cvdyoung, check @sambavfs
13:12 John_HPC joined #gluster
13:13 Scott6 joined #gluster
13:13 John_HPC joined #gluster
13:14 davinder joined #gluster
13:14 japuzzo joined #gluster
13:16 bennyturns joined #gluster
13:17 rahulcs joined #gluster
13:18 cvdyoung ahh ok, it's not called samba-glusterfs?  Thank you!!!
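A minimal smb.conf share using the samba vfs_glusterfs module, roughly as described in the blog post above; the share and volume names are placeholders and the exact options may differ per samba version:

    [home]
        vfs objects = glusterfs
        glusterfs:volume = home
        glusterfs:volfile_server = localhost
        glusterfs:logfile = /var/log/samba/glusterfs-home.log
        path = /
        read only = no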
13:22 glusterbot New news from newglusterbugs: [Bug 1075611] [FEAT] log: enhance gluster log format with message ID and standardize errno reporting <https://bugzilla.redhat.com/show_bug.cgi?id=1075611>
13:23 smithyuk1_ Hi, sorry if it's been asked but 3.5.0 glusterd won't start with default config? Error says "Initialization of volume 'management' failed, review your volfile again"
13:27 smithyuk1_ Oh don't mind me, I've accidentally installed geo-replication
13:30 ktosiek ugh, I've tried to rsync data into one brick of replicated volume (to import it), and I'm now getting mismatching ino/dev between file /data/bywajtu-media/brick/public/dynamic/documents/RiNKeAqKgKPS.JPG (4937366/41) and handle [...]/brick/.glusterfs/2e/7a/2e7a77cd-d37d-4f31-aa28-4859b39c5a13 (4905942/41) (and similar) for some 12 files
13:32 ktosiek oh, and I have some files in heal-failed that are on none of the bricks in replica (and that I don't care about) - can I somehow purge them?
13:34 ktosiek hmm, looks like adding more RAM and restarting fixed the "mismatching ino/dev" problem, but I still have the "zombie heal-failed entries" one
13:35 lalatenduM_ ktosiek, are the entries for directories?
13:35 ktosiek yes
13:37 ktosiek wow, and there's 60 of them now, some exist and some not
13:37 lalatenduM_ ktosiek, it is a known issue, actually everything healed for you, but somehow the dir entries stay in the heal info output
13:37 lalatenduM_ ktosiek, there was a bug for it, but as of now I don't have it
13:37 ktosiek can I remove them somehow? It will bug me :-<
13:38 lalatenduM_ ktosiek, yes there was a workaround to clean the entries , let me think
13:41 lalatenduM_ ktosiek, make sure you have only directory entries and you should not have any split-brain, then you can just restart glusterd which will fix it
13:41 davinder joined #gluster
13:41 ktosiek cool, thanks :-)
13:42 lalatenduM_ ktosiek, np
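The workaround, spelled out (myvol is a placeholder; run the restart on each server with whatever service tool the distro provides):

    gluster volume heal myvol info split-brain   # should be empty first
    service glusterd restart                     # clears the stale heal-info entries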
13:43 ktosiek ugh, but the "mismatching ino/dev" is back
13:43 ktosiek what does it even mean to have one?
13:45 lalatenduM_ ktosiek, which version of glusterfs u r using?
13:45 ktosiek 3.4.3
13:47 sage joined #gluster
13:47 lalatenduM_ ktosiek, can you plz @pastebin the complete command and output?
13:47 mjsmith2 joined #gluster
13:47 lalatenduM_ @paste
13:47 glusterbot lalatenduM_: For RPM based distros you can yum install fpaste, for debian and ubuntu it's pastebinit. Then you can easily pipe command output to [f] paste [binit] and it'll give you a URL.
13:48 ktosiek lalatenduM_: what command?
13:48 davinder joined #gluster
13:49 lalatenduM_ ktosiek, I thought you are getting some error from heal info command , is the error is in brick logs only?
13:50 dbruhn joined #gluster
13:50 ktosiek no, it's a separate issue. And it's a Warning, but as it seems I get it for each file at least once, I thought it's important
13:50 ktosiek and yes, it's in brick logs
13:51 ktosiek and only on one brick, so I think it's a collision between my initial data import and what gluster was expecting of metadata
13:51 ktosiek still, I'm curious as to what that warning means
13:56 ktosiek ok, it looks like it happens when hardlinking files. What is $BRICK/.glusterfs/?
13:56 ceiphas a folder in your brick
13:57 ktosiek right, but what is it for?
13:57 ceiphas meta data (like for healing)
13:58 ktosiek thanks, I think I know what I've done to that brick now...
13:58 primechuck joined #gluster
13:59 lalatenduM_ ktosiek, ceiphas $BRICK/.glusterfs/ has all gfid info for files, gfid acts as inode from GlusterFS point of view
13:59 ktosiek ohh
13:59 ceiphas explains your inode errors
13:59 ktosiek yup
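A sketch of inspecting that mapping directly on a brick (paths taken from the warning earlier; getfattr comes from the attr package):

    # read the gfid xattr of a file on the brick
    getfattr -n trusted.gfid -e hex /data/bywajtu-media/brick/public/dynamic/documents/RiNKeAqKgKPS.JPG
    # a gfid aabbccdd-... is hard-linked at .glusterfs/aa/bb/aabbccdd-...
    ls -li /data/bywajtu-media/brick/.glusterfs/2e/7a/2e7a77cd-d37d-4f31-aa28-4859b39c5a13
    # for regular files both paths should report the same inode number;
    # a mismatch is what the "mismatching ino/dev" warning complains about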
14:01 ktosiek and healing failures are back, now I'm getting some 12 gfids in gluster volume heal VOL info and some other gfids in heal-failed
14:03 ceiphas lalatenduM_: do you have an idea why files that are deleted from the volume via a fuse mount reappear after about five minutes, when the two peers that form the volume don't think they're split-brained?
14:04 lalatenduM_ ceiphas, which version of glusterfs u r using? it seems like a bug
14:05 ceiphas lalatenduM_: i have 3.4.2 with the patch for https://bugzilla.redhat.com/show_bug.cgi?id=1074023
14:05 glusterbot Bug 1074023: high, unspecified, ---, ndevos, ON_QA , list dir with more than N files results in Input/output error
14:08 Ark joined #gluster
14:09 Ark joined #gluster
14:09 ktosiek hmm, fun never ends it seems
14:10 ktosiek now I'm getting /var/log/glusterfs/etc-glusterfs-glusterd.vol.log spammed with 0-management: connection attempt failed (Connection refused)
14:10 ceiphas ktosiek: have that too
14:11 ktosiek and "Transport endpoint is not connected" on the client side
14:11 ceiphas ktosiek: not that
14:12 Chewi I've seen a repeated error in the logs while sitting at the gluster CLI console, might have been that one
14:13 Chewi ah it was cli.log.
14:13 ceiphas ktosiek: if you have nfs disabled (like me) it my come from a bug that is fixed in 3.5
14:13 lalatenduM_ ktosiek, we get "Transport endpoint is not connected" when brick process (i.e. glusterfsd) is not available for a brick, check gluster v status <volname>
14:13 ktosiek oh, ok. So that's probably unreleated
14:13 jbrooks left #gluster
14:13 jobewan joined #gluster
14:14 ktosiek lalatenduM_: statues says it's online
14:14 ktosiek but
14:14 jdarcy joined #gluster
14:14 ktosiek I did restart gluster
14:14 ktosiek shouldn't client reconnect?
14:14 lalatenduM_ ceiphas, I dont have ans to ur question, but will suggest you to send ur question to gluster-devel mailing list
14:15 jbrooks joined #gluster
14:15 jbrooks left #gluster
14:16 ktosiek ok, remounting on the client helped
14:16 lalatenduM_ ktosiek, yes, client should
14:16 ktosiek but ugh, really, why might it NOT reconnect?
14:16 cvdyoung Anyone set ACLs on their mounted gluster volumes?  I am trying to set default for one group, and deny access to another group, but am getting "operation not supported" errors
14:18 ceiphas i get the feeling that gluster is not enterprise ready when you need a patch to get a 32bit client to talk to the 64bit server and deleted files reappear without any log message
14:18 ktosiek ok, it looks like it hung after (that's from client's log) "background  entry self-heal failed on /public/dynamic/pictures"
14:18 mjsmith2 joined #gluster
14:18 kaptk2 joined #gluster
14:19 lalatenduM_ cvdyoung, you need to with "-o acl" if you are doing a glusterfs mount
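A sketch of the acl setup, with server1/myvol/mygroup as placeholders; the brick filesystems need ACL support too (xfs has it by default, ext4 may need the acl mount option):

    mount -t glusterfs -o acl server1:/myvol /mnt/myvol
    setfacl -m g:mygroup:rx /mnt/myvol/shared     # grant one group
    setfacl -d -m g:mygroup:rx /mnt/myvol/shared  # make it the default for new files
    getfacl /mnt/myvol/shared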
14:20 lalatenduM_ ceiphas, FYI, I think RedHat only supports 64 bit clients and 64 bit server
14:20 lalatenduM_ for RedHat Storage Server
14:20 ceiphas and that is why we dont't your Redhat
14:21 ceiphas what did i just type?
14:21 wushudoin joined #gluster
14:21 ceiphas and that is why we don't use Redhat
14:21 lalatenduM_ ceiphas, :) got it
14:21 cvdyoung Thanks lalatenduM_ I'll try that
14:21 lalatenduM_ ceiphas, I meant there might be issues with the 32 bit thing
14:22 ceiphas but there are still clients and even servers that MUST use 32bit
14:22 ceiphas so bye gluster
14:22 lalatenduM_ ktosiek, JoeJulian is an expert in self-heal
14:22 ktosiek aaand hanging time
14:22 * jdarcy wonders how much it costs per platform to do a full QE cycle on something like GlusterFS.
14:22 ktosiek now the client hanged when doing "time find /mnt/media -exec stat '{}' ';' |wc -l" :-)
14:23 ktosiek I mean, I can't Ctrl-C that process
14:24 ceiphas jdarcy: i can understand it for ARM or PowerPC, but drop support for x86?
14:24 sprachgenerator joined #gluster
14:24 zaitcev joined #gluster
14:24 Chewi ceiphas: I'd say the opposite. ARM use is increasing. x86 use isn't.
14:25 ceiphas but x86 is still one of the most used arches
14:25 ndk joined #gluster
14:25 jdarcy ceiphas: Not so much dropped, as never added IIRC.  There apparently haven't been sufficient numbers of paying *enterprise* customers asking for it, so it's left to the community.
14:26 jdarcy Generally I think we've managed to stay 32-bit clean, but mixed 32- and 64-bit is even more of a challenge.
14:26 ceiphas it didn't need adding, but simple shoving 64bit data without check into the pipe is not a fine solution
14:26 jdarcy That said, it should fail in a more diagnosable way.
14:26 ceiphas like "IO error"?
14:27 jdarcy ceiphas: Yeah, like that.  ;)
14:27 jdarcy We do have version checking, maybe we should piggyback CPU-arch checking on that.
14:27 ceiphas i think i have to look for another clustered file system as gluster is not usable in our environment
14:28 jdarcy One possible workaround would be to run 32-bit VMs.
14:28 ceiphas jdarcy: i have 32bit vms, and a 64bit storage, and 32bit clients
14:29 micu joined #gluster
14:29 jdarcy ceiphas: What do you mean by 64-bit storage?  You mean the servers?
14:30 ceiphas i have 32bit and 64bit servers
14:30 tdasilva joined #gluster
14:30 jdarcy ceiphas: Can you try running 32-bit server VMs on the 64-bit servers?
14:30 ceiphas the 64bit is the main storage, the 32bit is our "main" server and the server that pushes most of the data into the volume
14:32 atinmu joined #gluster
14:33 kkeithley gluster community meeting in ~30 minutes in #gluster-meeting
14:33 scuttle_ joined #gluster
14:34 kkeithley Agenda - http://goo.gl/XDv6jf
14:34 glusterbot Title: glusterpad - Google Docs (at goo.gl)
14:40 plarsen joined #gluster
14:41 chirino joined #gluster
14:44 MrAbaddon joined #gluster
14:45 Guest6312 hello. I have 1TB backups going into a 6TB distributed replicated volume with 4x 3TB bricks. Over the last 2 days the size of the second set of bricks is 90% and the first set is only 30%. I have kicked off a volume rebalance; anything else I should do?
14:46 dbruhn Guest6312, what kind of backup? is it one large file? or a couple large files?
14:46 Guest6312 rdbms backups 700gb ibdata files
14:48 Guest6312 gluster 3.4.2 and we spoke before about big files and the hashing algorithm. Just this time I must have got unlucky 3 times (only have 3 days of backups)
14:48 wushudoin left #gluster
14:49 Guest6312 also should i care about mounting a volume with user_xattr. I received this E. http://fpaste.org/99892/74147139/
14:49 glusterbot Title: #99892 Fedora Project Pastebin (at fpaste.org)
14:51 sage joined #gluster
14:54 jbrooks joined #gluster
14:55 lalatenduM_ Guest6312, which filesystem u r using?
14:57 theron joined #gluster
14:58 dusmant joined #gluster
15:00 jag3773 joined #gluster
15:01 Guest6312 xfs
15:01 Guest6312 lalatenduM_:
15:02 lalatenduM_ Guest6312, xfs supports xattr by default, what was the command you tried?
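A quick way to check xattr support on the brick filesystem itself (paths hypothetical); note that gluster stores its metadata in the trusted.* namespace, which on xfs works regardless of any user_xattr option — that option only governs user.* attributes:

    touch /data/brick/xattr-test
    setfattr -n user.test -v hello /data/brick/xattr-test
    getfattr -n user.test /data/brick/xattr-test
    rm /data/brick/xattr-test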
15:03 MeatMuppet left #gluster
15:03 Guest6312 it was at 4am in the morning during a innodb backup to be honest I do not know. I will scroll back in the logs and check
15:05 Guest6312 lalatenduM_: it does not say what command was trying to run, just that the Operation is not supported
15:05 jbrooks joined #gluster
15:06 lalatenduM_ Guest6312, hmm, you got the error on the mount point?
15:06 ndk` joined #gluster
15:07 jbrooks left #gluster
15:07 jbrooks joined #gluster
15:09 codex joined #gluster
15:10 ndk`` joined #gluster
15:10 Guest6312 im slow today jet lag. I was looking in server logs, let me get on the client
15:11 chirino joined #gluster
15:17 kumar joined #gluster
15:19 lmickh joined #gluster
15:24 failshell joined #gluster
15:24 failshell joined #gluster
15:33 sks joined #gluster
15:35 jag3773 joined #gluster
15:37 kmai007 joined #gluster
15:38 hchiramm_ joined #gluster
15:38 chirino_m joined #gluster
15:39 MrAbaddon joined #gluster
15:41 premera joined #gluster
15:41 [o__o] joined #gluster
15:46 Matthaeus joined #gluster
15:47 kaptk2 joined #gluster
15:49 vpshastry joined #gluster
15:52 LoudNoises joined #gluster
16:10 purpleidea FYI: puppet-gluster 0.0.3 released! https://download.gluster.org/pub/gluster/purpleidea/puppet-gluster/
16:10 glusterbot Title: Index of /pub/gluster/purpleidea/puppet-gluster (at download.gluster.org)
16:10 purpleidea https://github.com/purpleidea/puppet-gluster/commit/36ee0ebd22c3ccd55a9544b96c0cbfcbf70b153b
16:10 glusterbot Title: Release 0.0.3 · 36ee0eb · purpleidea/puppet-gluster · GitHub (at github.com)
16:21 bennyturns joined #gluster
16:28 arya joined #gluster
16:28 jbd1 joined #gluster
16:31 kmai007 what is the difference between features.grace-timeout vs. network.ping-timeout ?
16:32 kanagaraj joined #gluster
16:35 zerick joined #gluster
16:37 Mo__ joined #gluster
16:43 MeatMuppet joined #gluster
16:46 zerick joined #gluster
16:51 sjusthome joined #gluster
16:57 vpshastry left #gluster
17:00 theron joined #gluster
17:00 ktosiek joined #gluster
17:01 jbd1 joined #gluster
17:04 hagarth joined #gluster
17:15 rjoseph joined #gluster
17:27 jcsp joined #gluster
17:29 Pavid7 joined #gluster
17:29 hchiramm_ joined #gluster
17:38 jmarley joined #gluster
17:38 jmarley joined #gluster
17:51 _dist joined #gluster
17:58 MacWinner joined #gluster
17:59 qdk joined #gluster
18:01 gmcwhist_ joined #gluster
18:10 MrAbaddon joined #gluster
18:21 vpshastry joined #gluster
18:22 vpshastry left #gluster
18:27 jag3773 joined #gluster
18:58 lalatenduM joined #gluster
18:58 edoceo which signal is used to gracefully stop my glusterfs processes on the server?
19:07 andreask joined #gluster
19:10 lpabon joined #gluster
19:11 _dist edoceo: do you mean so the other bricks are cool with it? From experience it depends on how it was packaged for each distro, but usually a service stop does it nicely
19:11 edoceo yes, I want them to be cool
19:11 edoceo and the service stop thing doesn't do it - :(
19:11 edoceo the process still runs
19:12 ctria joined #gluster
19:12 _dist very uncool, so your brick is still seen online by other servers and clients?
19:15 chirino joined #gluster
19:16 theron joined #gluster
19:17 B21956 joined #gluster
19:17 edoceo yep :(
19:18 edoceo I did expect `/etc/init.d/glusterfs-server stop` to do the trick
19:18 _dist right, which distro are you running?
19:18 edoceo Let me also say, this one isn't a "stock" - it's an Ubuntu 12.04 + "we know what we're doing"
19:19 edoceo The name extension may not be accurate
19:19 edoceo I thought I could just kill -TERM and be OK? Or is something like INT or whatever preferred?
19:20 m0zes left #gluster
19:20 _dist edoceo: I had that problem with ubuntu as well, I'm running debian and its service stop works. Back when I had ubuntu yes I killed it by pid, but you'll end up having to wait for the timeout for clients to repath
19:20 _dist I think it's like 30 seconds by default
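The relevant knob is network.ping-timeout; the shipped default is 42 seconds rather than 30, if memory serves (myvol is a placeholder):

    gluster volume info myvol | grep ping-timeout   # only listed once explicitly set
    gluster volume set myvol network.ping-timeout 42
    # lowering it speeds up client repathing but risks spurious disconnects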
19:20 m0zes joined #gluster
19:21 diegows_ joined #gluster
19:21 edoceo Well, i was hoping my clients wouldn't have to have that issue
19:22 _dist I believe that unless it goes down properly, it doesn't notify the rest, perhaps someone else can pipe in here?
19:22 ndk joined #gluster
19:23 and` joined #gluster
19:24 and` hi! is anyone up for helping me troubleshooting a gluster issue?
19:25 _dist and`: you'd probably have the best luck if you just explain it :)
19:25 kmai007 joined #gluster
19:27 and` good catch :) in fact I configured two bricks (living on two different hosts, so one brick per host) on a replicated setup, added the volume and everything seems to work as expected at least from looking at the various 'gluster volume info volume-name' / 'gluster pool list' etc. commands
19:27 dbruhn joined #gluster
19:27 and` when trying to add files to one brick, the files aren't replicated at all on the other host
19:28 and` the issue is not related to iptables given all the above happens on a dedicated NIC which is whitelisted between the various machines
19:28 _dist and`: you have to add them through a client, not directly on the fs (if that's what you're doing)
19:28 and` ow
19:28 and` let me try, sec
19:29 _dist files written directly to the local fs of the gluster server don't get xattr stuff that the puts. So they don't get replicated etc (I'm assuming your volume is a replica volume?)
19:30 and` _dist, yup, the volume is a replica one
19:30 and` _dist, suppose I'm going to mount a client through nfs, should I setup an /etc/exports accordingly on one of the two bricks?
19:32 and` _dist, what wasn't clear at the beginning to me was how I could manage NFS acls between clients with glusterfs
19:33 and` i.e configuring a specific client to be ro when mounting it with nfs
19:34 edoceo _dist: more research shows that this is graceful: kill $(pidof glusterfsd)
19:34 edoceo which is SIGTERM
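Roughly what a clean shutdown looks like on that kind of Ubuntu install (a sketch; service names vary by packaging):

    /etc/init.d/glusterfs-server stop   # stops glusterd, the management daemon
    killall glusterfsd                  # SIGTERM the brick processes left behind
    killall glusterfs                   # nfs/self-heal helpers -- careful, this also
                                        # kills any fuse client mounts on the same box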
19:43 MeatMuppet1 joined #gluster
19:46 MeatMuppet joined #gluster
19:48 mshadle left #gluster
20:00 _dist edoceo: thanks for that, I'll keep it in mind if I run into a situation where I need to do it outisde of init.d script
20:01 _dist and`: is it clear now? It depends on what protocol you use etc, personally I use a single client to supply out via smb for file shares for example, other stuff for different reasons
20:03 and` _dist, ah so you are not aware of anything related to NFS itself
20:03 and` _dist, my original thought was modifying one of the two bricks fs directly would have replicated the data to the other
20:04 failshel_ joined #gluster
20:10 tdasilva left #gluster
20:11 jbrooks joined #gluster
20:20 Lookcrabs joined #gluster
20:21 _dist and`: I assume you're using NFS and not glusterfs client/
20:21 _dist ?
20:23 _dist both have their own permission settings for access by ip etc, but if you're talking about ACL that's another whole thing
20:24 lincolnb joined #gluster
20:37 MeatMuppet I have a question about extending a distributed replica.  I want to add new bricks and use 'rebalance fix-layout' so the new bricks get used without moving the data from the old bricks.  What I'd also like to do is prevent new files getting placed on the old bricks.  Is there a volume option I can set to do this?
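The expansion half of that is straightforward (hostnames and brick paths hypothetical); fix-layout spreads the hash ranges over the new bricks without migrating existing data, but by itself it does not stop new files from hashing to the old bricks:

    gluster volume add-brick myvol replica 2 server3:/data/brick server4:/data/brick
    gluster volume rebalance myvol fix-layout start
    gluster volume rebalance myvol status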
20:50 and` _dist, yeah, I'm using NFS itself
20:50 sage joined #gluster
20:50 and` _dist, I did setup /etc/exports on one of the two bricks and I'm able to nfs mount it
20:51 and` _dist, but when I try to add a file through the nfs mount glusterd crashes
20:52 and` _dist, http://fpaste.org/100004/49593313/raw/
20:53 _dist and`: gluster has it's own NFS server that is different from the standard linux one
20:53 _dist iirc there are nfs options you can set via volume set
20:54 _dist the glusterd and linux-nfs don't like each other (port battles etc)
20:57 and` _dist, umm... I guess this might be related "0-rpc-service: Could not register with portmap"
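A sketch of letting gluster's built-in NFS (NFSv3 only) own the NFS ports instead of the kernel server; service names differ between Debian/Ubuntu and EL, and myvol/server1 are placeholders:

    service nfs-kernel-server stop          # or 'service nfs stop' on EL
    service rpcbind start                   # gluster NFS still needs portmap/rpcbind
    gluster volume set myvol nfs.disable off
    rpcinfo -p | grep -E 'nfs|mountd'       # gluster should now be registered
    mount -t nfs -o vers=3 server1:/myvol /mnt/myvol   # from a client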
20:58 badone joined #gluster
20:58 _dist and`: personally I never use gluster directly with NFS. If I'm using an OS that _can't_ use the fuse client then I create a proxy inbetween that mounts via gluster and shares out through normal nfs
20:59 _dist I do this because I find more often I need to take my gluster servers down than I would that proxy, the proxy of course is just a VM in my case.
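A sketch of that proxy pattern (addresses and paths are placeholders); the fsid export option matters because the kernel NFS server cannot reliably derive one for a FUSE filesystem on its own:

    # on the proxy VM
    mount -t glusterfs server1:/myvol /mnt/myvol
    echo '/mnt/myvol 10.0.0.0/24(rw,sync,no_subtree_check,fsid=1)' >> /etc/exports
    exportfs -ra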
20:59 MrAbaddon joined #gluster
21:15 _dist well, I'm out :) night
21:26 sputnik1_ joined #gluster
21:26 sputnik1_ does anyone use gluster nfs as a backing store for vmware?
21:27 sputnik13net I'm wondering whether if the gluster node that the nfs client is talking to goes down, there's a "graceful" failover to another gluster node, given I'm using round-robin DNS with all of my gluster hosts' IPs in the DNS
21:27 sputnik13net and vmware is pointing to the DNS name rather than the IP
21:44 ira joined #gluster
21:44 qdk joined #gluster
21:46 chirino_m joined #gluster
21:52 badone joined #gluster
21:54 zerick joined #gluster
22:02 jbrooks sputnik13net: No, as far as I've seen, round robin doesn't suffice for failover
22:02 jbrooks I used CTDB and a virtual IP address as a mount point to do failover
22:07 jmarley joined #gluster
22:07 jmarley joined #gluster
22:12 nage joined #gluster
22:16 dbruhn joined #gluster
22:21 wiml joined #gluster
22:24 sputnik13net jbrooks: ctdb is for samba
22:24 glusterbot New news from newglusterbugs: [Bug 1095511] glusterd_xfer_cli_probe_resp() mixes errno and other error codes <https://bugzilla.redhat.com/show_bug.cgi?id=1095511>
22:24 sputnik13net no?
22:24 jbrooks sputnik13net: and nfs
22:25 jbrooks It definitely works
22:26 jbrooks It's what the RHS product uses: https://access.redhat.com/site/documentation/en-US/Red_Hat_Storage/2.1/html/Administration_Guide/ch09s05.html
22:26 glusterbot Title: 9.5. Configuring Automated IP Failover for NFS and SMB (at access.redhat.com)
22:37 mjsmith2 joined #gluster
22:40 MeatMuppet left #gluster
22:45 chirino joined #gluster
22:52 social joined #gluster
22:53 tryggvil joined #gluster
23:10 sjm joined #gluster
23:23 gdubreui joined #gluster
