
IRC log for #gluster, 2014-06-05


All times are shown in UTC.

Time Nick Message
00:03 Matthaeus joined #gluster
00:04 jag3773 joined #gluster
00:05 JoeJulian Does anyone else here have large numbers of SAS drives?
00:12 JoeJulian To anybody using SAS drives, how fast do you accumulate "Non-medium error count" in your smart data?
00:16 plarsen joined #gluster
00:23 Matthaeus joined #gluster
00:37 gildub joined #gluster
00:42 edward1 left #gluster
00:43 chirino joined #gluster
01:12 gildub joined #gluster
01:25 jmarley joined #gluster
01:25 jmarley joined #gluster
01:30 jcsp joined #gluster
01:33 Ark joined #gluster
01:41 davinder6 joined #gluster
01:44 gildub joined #gluster
01:45 chirino joined #gluster
01:53 recidive joined #gluster
01:57 vimal joined #gluster
02:01 Matthaeus joined #gluster
02:11 sjm joined #gluster
02:23 gmcwhist_ joined #gluster
02:26 nishanth joined #gluster
02:34 bharata-rao joined #gluster
02:39 harish_ joined #gluster
02:40 rjoseph joined #gluster
02:47 harish_ joined #gluster
02:55 kkeithley1 joined #gluster
02:57 harish_ joined #gluster
03:08 vpshastry joined #gluster
03:16 chirino joined #gluster
03:30 rejy joined #gluster
03:42 kanagaraj joined #gluster
03:43 jcsp joined #gluster
03:46 StarBeast joined #gluster
03:49 JoeJulian @ppa
03:49 glusterbot JoeJulian: The official glusterfs packages for Ubuntu are available here: 3.4 stable: http://goo.gl/u33hy -- 3.5 stable: http://goo.gl/cVPqEH -- introducing QEMU with GlusterFS 3.4 support: http://goo.gl/7I8WN4
03:49 RameshN joined #gluster
03:53 glusterbot New news from newglusterbugs: [Bug 1104919] Fix memory leaks in gfid-access xlator. <https://bugzilla.redhat.com/show_bug.cgi?id=1104919>
03:57 itisravi joined #gluster
03:58 shubhendu joined #gluster
04:08 bala joined #gluster
04:13 rjoseph left #gluster
04:14 saurabh joined #gluster
04:30 spandit joined #gluster
04:33 bennyturns joined #gluster
04:35 vikhyat joined #gluster
04:38 davinder6 joined #gluster
04:39 ndarshan joined #gluster
04:41 psharma joined #gluster
04:47 Guest26105 joined #gluster
04:48 ramteid joined #gluster
04:50 ppai joined #gluster
04:51 dusmant joined #gluster
04:53 glusterbot New news from newglusterbugs: [Bug 1104940] client segfault related to rebalance <https://bugzilla.redhat.com/show_bug.cgi?id=1104940>
04:53 kshlm joined #gluster
05:00 prasanthp joined #gluster
05:03 lalatenduM joined #gluster
05:03 gildub joined #gluster
05:10 meghanam joined #gluster
05:11 systemonkey joined #gluster
05:13 JoeJulian hagarth: Can you give bug 1104940 a glance in case that's another bug that's already fixed...
05:13 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1104940 unspecified, unspecified, ---, csaba, NEW , client segfault related to rebalance
05:14 JoeJulian ooh, he got away just in time...
05:14 jcsp joined #gluster
05:14 aravindavk joined #gluster
05:16 StarBeast joined #gluster
05:18 JoeJulian kkeithley, kkeithley_: Is there someone in Bangalore that can look at that crash? You're over there, right?
05:22 karnan joined #gluster
05:30 kanagaraj joined #gluster
05:33 kshlm joined #gluster
05:40 hagarth joined #gluster
05:41 JoeJulian hagarth: Oh good, you're back. Can you give bug 1104940 a glance in case that's another bug that's already fixed...
05:41 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1104940 unspecified, unspecified, ---, csaba, NEW , client segfault related to rebalance
05:45 kumar joined #gluster
05:49 recidive joined #gluster
05:50 nshaikh joined #gluster
05:51 aravindavk joined #gluster
05:53 samppah JoeJulian: hmmh, I think I made a similar bug report a while ago
05:53 samppah let me see if I can find the id
05:54 dusmant joined #gluster
05:55 samppah https://bugzilla.redhat.com/show_bug.cgi?id=1022510
05:55 glusterbot Bug 1022510: unspecified, unspecified, ---, gluster-bugs, NEW , GlusterFS client crashes during add-brick and rebalance
05:56 samppah although it doesn't have more information
05:56 kkeithley_ JoeJulian: yes, I'm in Bangalore
05:58 hagarth JoeJulian: don't remember seeing a similar crash fixed.
05:58 StarBeast joined #gluster
05:58 JoeJulian crap
05:59 RameshN joined #gluster
05:59 hagarth JoeJulian: let me check nevertheless
06:02 deepakcs joined #gluster
06:05 lezo joined #gluster
06:08 [o__o] joined #gluster
06:12 lalatenduM ndevos, regarding https://bugzilla.redhat.com/show_bug.cgi?id=1100204, i am trying to reproduce it
06:12 glusterbot Bug 1100204: medium, high, ---, lmohanty, NEW , brick failure detection does not work for ext4 filesystems
06:19 vimal joined #gluster
06:21 pasqd hi
06:21 glusterbot pasqd: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
06:22 Ark joined #gluster
06:23 glusterbot New news from newglusterbugs: [Bug 1104959] Dist-geo-rep : some of the files not accessible on slave after the geo-rep sync from master to slave. <https://bugzilla.redhat.com/show_bug.cgi?id=1104959>
06:29 sputnik13 joined #gluster
06:30 nishanth joined #gluster
06:31 jcsp joined #gluster
06:41 ctria joined #gluster
06:45 raghu joined #gluster
06:49 aravindavk joined #gluster
06:57 atinmu joined #gluster
07:00 ppai joined #gluster
07:00 coredumb Hello folks
07:01 coredumb I was wondering how one would set NFS share access on glusterfs?
07:01 coredumb if that's possible
07:01 lalatenduM coredumb, yes it is possible, you can nfs mount the gluster volumes
07:02 eseyman joined #gluster
07:03 lalatenduM coredumb, search "Using NFS to Mount Volumes" in https://github.com/gluster/glusterfs/blob/master/doc/admin-guide/en-US/markdown/admin_settingup_clients.md
07:03 glusterbot Title: glusterfs/doc/admin-guide/en-US/markdown/admin_settingup_clients.md at master · gluster/glusterfs · GitHub (at github.com)
07:04 coredumb lalatenduM: yes indeed i already use that, now i was wondering how to add some access control to the shares like i would with exportfs
07:06 lalatenduM coredumb, ACL might help you check https://github.com/gluster/glusterfs/blob/master/doc/admin-guide/en-US/markdown/admin_ACLs.md and search for nfs
07:06 glusterbot Title: glusterfs/doc/admin-guide/en-US/markdown/admin_ACLs.md at master · gluster/glusterfs · GitHub (at github.com)
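For reference, a minimal sketch of what lalatenduM is describing, assuming a server named server1 and a volume named gv0 (both placeholders); Gluster's built-in NFS server speaks NFSv3 over TCP, hence the mount options:
    # NFS mount of a Gluster volume (per the admin guide linked above)
    mount -t nfs -o vers=3,mountproto=tcp server1:/gv0 /mnt/gv0
    # the native FUSE mount, for comparison
    mount -t glusterfs server1:/gv0 /mnt/gv0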
07:13 dusmant joined #gluster
07:13 AaronGr joined #gluster
07:16 hagarth joined #gluster
07:17 nishanth joined #gluster
07:25 coredumb thx lalatenduM gonna check that
07:28 ngoswami joined #gluster
07:28 RameshN joined #gluster
07:40 fsimonce joined #gluster
07:42 mbukatov joined #gluster
07:47 GabrieleV joined #gluster
07:53 ricky-ti1 joined #gluster
07:55 ekuric joined #gluster
08:01 [o__o] joined #gluster
08:02 liquidat joined #gluster
08:02 keytab joined #gluster
08:11 Guest26105 joined #gluster
08:18 bala joined #gluster
08:19 ngoswami joined #gluster
08:38 edward1 joined #gluster
08:38 Slashman joined #gluster
08:43 kdhananjay joined #gluster
08:50 glusterbot New news from resolvedglusterbugs: [Bug 1104940] client segfault related to rebalance <https://bugzilla.redhat.com/show_bug.cgi?id=1104940>
08:53 chirino joined #gluster
08:54 glusterbot New news from newglusterbugs: [Bug 1022510] GlusterFS client crashes during add-brick and rebalance <https://bugzilla.redhat.com/show_bug.cgi?id=1022510>
08:54 JoeJulian lalatenduM: Can you help ensure that bug 1022510 gets triaged? That bug hasn't been looked at since October and it causes clients to crash every time a rebalance is run. This just caused us a major outage and I'm in desperate need to move files off of one brick.
08:54 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1022510 unspecified, unspecified, ---, gluster-bugs, NEW , GlusterFS client crashes during add-brick and rebalance
08:55 lalatenduM JoeJulian, sure, looking in to it
09:00 haomaiwang joined #gluster
09:04 haomaiw__ joined #gluster
09:22 hagarth joined #gluster
09:23 Chewi joined #gluster
09:27 nishanth joined #gluster
09:33 stickyboy joined #gluster
09:33 vpshastry joined #gluster
09:38 hybrid512 joined #gluster
09:39 hybrid512 joined #gluster
09:46 kshlm joined #gluster
09:53 harish_ joined #gluster
09:54 glusterbot New news from newglusterbugs: [Bug 1061229] glfs_fini leaks threads <https://bugzilla.redhat.com/show_bug.cgi?id=1061229>
09:59 jmarley joined #gluster
10:07 aravindavk joined #gluster
10:24 dusmant joined #gluster
10:25 harish joined #gluster
10:25 ndarshan joined #gluster
10:30 shubhendu joined #gluster
10:33 shyam joined #gluster
10:33 vpshastry joined #gluster
10:35 ProT-0-TypE joined #gluster
10:38 kshlm joined #gluster
10:38 doekia joined #gluster
10:38 doekia_ joined #gluster
10:41 haomaiwa_ joined #gluster
10:44 ndarshan joined #gluster
10:47 haomai___ joined #gluster
10:50 gildub joined #gluster
10:53 atinmu joined #gluster
10:54 glusterbot New news from newglusterbugs: [Bug 1105083] Dist-geo-rep : geo-rep failed to sync some files, with error "(Operation not permitted)" on slaves. <https://bugzilla.redhat.com/show_bug.cgi?id=1105083>
10:57 Chewi speaking of geo-rep, anyone seen Venky lately?
10:58 hagarth Chewi: overclk_ is right here :)
10:58 Chewi aha I can't remember his nick, thanks
10:59 Chewi overclk_: if you're there, would appreciate a reply to http://supercolony.gluster.org/pipermail/gluster-users/2014-May/040431.html. it's been a bit of a showstopper.
11:00 dokia joined #gluster
11:03 edward1 joined #gluster
11:12 nishanth joined #gluster
11:24 Ark joined #gluster
11:25 kshlm joined #gluster
11:25 Thilam hello, my glusterfs volume is totally broken, I really need help
11:26 Thilam I was making a transfer and a brick went down (don't know why)
11:26 Thilam I tried to reboot glusterd on the server hosting this brick and everything went down
11:27 Thilam in every server's log files I have this [2014-06-05 11:21:33.838208] E [glusterd-store.c:1979:glusterd_store_retrieve_volume] 0-: Unknown key: brick-0
11:27 Thilam [2014-06-05 11:21:33.838242] E [glusterd-store.c:1979:glusterd_store_retrieve_volume] 0-: Unknown key: brick-1
11:27 Thilam [2014-06-05 11:21:33.838258] E [glusterd-store.c:1979:glusterd_store_retrieve_volume] 0-: Unknown key: brick-2
11:27 Thilam it seems config is lost
11:27 Thilam pool list cmd take time to respond
11:27 Thilam and answer is null
11:28 Thilam I'm really lost
11:28 Thilam or
11:28 Thilam [2014-06-05 11:21:26.823316] E [socket.c:2161:socket_connect_finish] 0-management: connection to 192.168.1.74:24007 failed (Connection refused)
11:28 Thilam [2014-06-05 11:21:26.823375] E [socket.c:2161:socket_connect_finish] 0-management: connection to 192.168.1.73:24007 failed (Connection refused)
11:28 Thilam but firewall is down
11:29 Thilam (version 3.5)
11:29 dusmant joined #gluster
11:29 Ark e
11:32 bfoster joined #gluster
11:39 jcsp_ joined #gluster
11:48 vpshastry joined #gluster
11:50 nshaikh joined #gluster
11:56 atinmu joined #gluster
12:00 _Bryan_ joined #gluster
12:00 marbu joined #gluster
12:06 itisravi joined #gluster
12:11 guiovanny joined #gluster
12:12 jag3773 joined #gluster
12:19 vdrandom joined #gluster
12:20 vdrandom hey, is it possible to recreate missing gfid?
12:23 ekuric joined #gluster
12:33 Ark joined #gluster
12:46 plarsen joined #gluster
12:47 plarsen joined #gluster
12:51 andreask joined #gluster
12:54 sjm joined #gluster
12:55 glusterbot New news from newglusterbugs: [Bug 1105123] NEW - smb process starts, when user.smb option is set to disable. <https://bugzilla.redhat.com/show_bug.cgi?id=1105123>
12:55 hagarth joined #gluster
12:55 chirino joined #gluster
12:57 japuzzo joined #gluster
13:00 shyam joined #gluster
13:00 bennyturns joined #gluster
13:00 rwheeler joined #gluster
13:02 Alex joined #gluster
13:03 Alex Hm, I may have asked this before but have forgotten quite spectacularly, in a two peer situation, is there a way to make a given peer prefer accessing files on its own bricks? (assuming distribute-replicate, 6 bricks on each peer)
13:03 DV joined #gluster
13:03 DV__ joined #gluster
13:12 sroy joined #gluster
13:13 karnan joined #gluster
13:14 plarsen joined #gluster
13:15 jmarley joined #gluster
13:15 jmarley joined #gluster
13:22 dusmant joined #gluster
13:23 pdrakeweb joined #gluster
13:25 chirino joined #gluster
13:25 rgustafs joined #gluster
13:25 yosafbridge joined #gluster
13:29 mbukatov joined #gluster
13:33 hagarth joined #gluster
13:35 Thilam plz I'm still stuck with my problem
13:35 Thilam I've this in cli.log
13:35 Thilam [2014-06-05 13:18:54.844611] W [dict.c:1055:data_to_str] (-->/usr/lib/x86_64-linux-gnu/glusterfs/3.5.0/rpc-transport/socket.so(+0x4e24) [0x7f53cb6cae24] (-->/usr/lib/x86_64-linux-gnu/glusterfs/3.5.0/rpc-transport/socket.so(socket_client_get_remote_sockaddr+0x4e) [0x7f53cb6d188e] (-->/usr/lib/x86_64-linux-gnu/glusterfs/3.5.0/rpc-transport/socket.so(client_fill_address_family+0x202) [0x7f53cb6d1572])))
13:35 Thilam 0-dict: data is NULL
13:35 Thilam [2014-06-05 13:18:54.844654] W [dict.c:1055:data_to_str] (-->/usr/lib/x86_64-linux-gnu/glusterfs/3.5.0/rpc-transport/socket.so(+0x4e24) [0x7f53cb6cae24] (-->/usr/lib/x86_64-linux-gnu/glusterfs/3.5.0/rpc-transport/socket.so(socket_client_get_remote_sockaddr+0x4e) [0x7f53cb6d188e] (-->/usr/lib/x86_64-linux-gnu/glusterfs/3.5.0/rpc-transport/socket.so(client_fill_address_family+0x20d) [0x7f53cb6d157d])))
13:35 Thilam 0-dict: data is NULL
13:35 Thilam [2014-06-05 13:18:54.844669] E [name.c:147:client_fill_address_family] 0-glusterfs: transport.address-family not specified. Could not guess default value from (remote-host:(null) or transport.unix.connect-path:(null)) options
13:36 Thilam and gluster volume info freeze on the server
13:36 Thilam I really don't know what is going on
13:41 hagarth Thilam: are all commands not responsive?
13:41 jobewan joined #gluster
13:42 Thilam on one node yes
13:42 Thilam on the other node pool list is ok
13:42 Thilam volume info also
13:42 Thilam volume status stuck
13:43 Thilam I've added this option :  option transport.address-family inet
13:43 Thilam in glusterd.vol for volume management
13:43 Thilam but it did not help :/
13:44 jag3773 joined #gluster
13:45 cvdyoung joined #gluster
13:46 hagarth Thilam: can you restart glusterd on the node where it is unresponsive?
13:46 harish joined #gluster
13:46 Thilam yes
13:46 Thilam then it works on the server I've lately restarted
13:46 Thilam and another one stops responding
13:47 Thilam glusterd.vol is the same file on every servers
13:47 Thilam volume management
13:47 Thilam type mgmt/glusterd
13:47 Thilam option working-directory /var/lib/glusterd
13:47 Thilam option transport-type socket,rdma
13:47 Thilam option transport.address-family inet
13:47 hagarth Thilam: would it be possible to gdb into glusterd on the server that is not responding and get a backtrace?
13:47 Thilam option transport.socket.keepalive-time 10
13:47 Thilam option transport.socket.keepalive-interval 2
13:47 Thilam option transport.socket.read-fail-log off
13:47 Thilam end-volume
13:47 Thilam which is default except the option I've added about transport-type
13:49 Thilam gdb ?
13:49 Thilam how can I do this stuff ?
13:50 Thilam you want me to launch glusterfsd with debug option ?
13:51 hagarth Thilam: gdb `which glusterd` `pidof glusterd`
13:51 hagarth Thilam: and then "thread apply all bt" at the gdb prompt
13:51 Thilam which glusterd
13:51 hagarth once you collect the output of that, you can quit gdb
13:52 Thilam ok
13:52 hagarth once done, please fpaste the output.
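Putting hagarth's instructions together, a minimal sketch of collecting the glusterd backtrace (assumes gdb and the glusterfs debuginfo packages are installed on the affected server):
    # attach gdb to the running glusterd
    gdb `which glusterd` `pidof glusterd`
    # at the gdb prompt, dump a backtrace of every thread, then quit
    (gdb) thread apply all bt
    (gdb) quit
    # paste the captured output to fpaste.org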
13:53 davinder6 joined #gluster
13:55 Thilam http://fpaste.org/107419/76502140/
13:55 glusterbot Title: #107419 Fedora Project Pastebin (at fpaste.org)
13:55 glusterbot New news from newglusterbugs: [Bug 1105147] Setting either of user.cifs or user.smb option to enable leads to enabling of smb shares. Enable only when none are disable <https://bugzilla.redhat.com/show_bug.cgi?id=1105147>
13:56 Thilam hagarth does that mean anything to you?
13:57 Thilam at this time, none of my three servers respond
13:57 Thilam I was putting this into prod, what a mess :/
13:57 hagarth Thilam: checking
13:58 hagarth Thilam: I think it would be better to wait for 3.5.1 to get into prod. It is just around the corner and there are a number of good fixes in 3.5.1.
13:59 Thilam k
14:00 Thilam when you say 'it is just around the corner', is it days? weeks? Do you have an idea?
14:03 hagarth a few days is what we are looking at. ndevos - would that be right?
14:04 Thilam ok, so I can wait
14:04 Thilam it's very strange, all was going fine
14:04 Thilam I've launch a transfer on 100GB
14:05 Thilam launched
14:05 Thilam and then a brick goes offline
14:05 ndevos yes, 3.5.1 in a few days (somewhere next week), beta2 is planned for tomorrow or early next week
14:05 Thilam I tried to reboot gluster service on the server which was hosting this brick
14:05 Thilam and the mess began
14:07 ndevos Thilam: if I see the messages you posted about 3 hours ago, I'd think that your /var filled up to 100% (maybe the brick was not on a mounted filesystem?) and that is what caused the issue?
14:10 Thilam there is no partition full on any servers
14:10 Thilam I've just checked
14:11 ndevos hmm, okay, but that would be one of the common causes for glusterd failing to start (configuration files are missing or 0 size)
14:11 qdk_ joined #gluster
14:12 Thilam I thought the configuration was lost because it didn't respond, but it was not that
14:12 wushudoin joined #gluster
14:12 Thilam config files are well located
14:12 Thilam it seems to be a communication issue
14:13 gmcwhist_ joined #gluster
14:13 ctria joined #gluster
14:17 ndevos Thilam: one of the devs thinks it could be bug 1095585
14:17 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1095585 urgent, urgent, ---, kaushal, MODIFIED , Two instances each, of brick processes, glusterfs-nfs and quotad seen after glusterd restart
14:18 haomaiwang joined #gluster
14:18 Thilam I have quota activated
14:19 Thilam is there a solution to relaunch the volume ?
14:19 Thilam properly
14:20 diegows joined #gluster
14:20 Thilam the symptom matches
14:20 Thilam and it is exactly what happened
14:20 Thilam 1) Create a distributed volume.
14:20 Thilam 2) Enable quota on the volume by running the command "gluster vol quota <volname> enable.
14:20 Thilam 3) Now stop glusterd and start it again.
14:22 ndevos Thilam: I guess you'd like that bug fixed in 3.5.1, right?
14:22 Thilam http://fpaste.org/107430/01978129/
14:22 glusterbot Title: #107430 Fedora Project Pastebin (at fpaste.org)
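The reproduction steps Thilam quotes above correspond roughly to the following commands (volume and brick names are placeholders; a sketch only, following the description in bug 1095585):
    gluster volume create testvol server1:/bricks/b1 server2:/bricks/b2
    gluster volume start testvol
    gluster volume quota testvol enable
    # restarting glusterd is what triggers the duplicate brick/nfs/quotad processes
    service glusterd restart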
14:22 dberry joined #gluster
14:23 gmcwhist_ joined #gluster
14:23 Thilam yes, otherwise I will not be able to use it in prod
14:24 haomaiwang joined #gluster
14:24 Thilam but I suppose it happens in specific cases, because many glusterfs users may run distributed volumes with quota, no?
14:24 JoeJulian vdrandom: Anything you do to a file with a missing gfid through a client should heal that file creating the gfid. Just "stat $file_with_missing_gfid" from the client.
14:24 primeministerp joined #gluster
14:26 vdrandom JoeJulian, thing is, I have no idea what file it is, I only have the gfid
14:26 vdrandom I mean, which
14:28 JoeJulian then how do you know a file has a missing gfid?
14:28 ndevos Thilam: bug 1105188 got filed for that now, and I've got a confirmation from a dev that he'll do the backport tonight/tomorrow
14:28 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1105188 urgent, urgent, ---, kaushal, ASSIGNED , Two instances each, of brick processes, glusterfs-nfs and quotad seen after glusterd restart
14:28 mortuar joined #gluster
14:29 * JoeJulian needs a bug to be urgent too...
14:30 Thilam does that mean the debian package for release 3.5 will be updated tomorrow?
14:33 ramteid joined #gluster
14:33 JoeJulian Oh, mine is urgent. Nice!
14:36 JoeJulian Alex: cluster.choose-local is the option you were asking about, and it's true by default.
14:36 zerick joined #gluster
14:37 Alex JoeJulian: Hm, thanks. So, if true, there's not much else we can do to try to make it 'better', I guess
14:38 JoeJulian deadline scheduler, more spindles, less latency between replicas
14:39 Alex er. I'm... I'm failing. gluster volume set <vol> <key> <val>
14:39 Alex ...how..to..get
14:39 JoeJulian There is no get. "gluster volume set help" to find the defaults.
14:40 JoeJulian anything that's not default from gluster volume info
14:40 Alex cool, so if it's not explicitly set/showing under 'info', then it should just be the default
14:40 Alex :)
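A quick sketch of the workflow JoeJulian describes for checking an option, since there is no "get" command in this version (the volume name is a placeholder):
    # defaults and descriptions for all options
    gluster volume set help | grep -A 3 choose-local
    # anything explicitly changed from its default shows up here
    gluster volume info myvol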
14:40 haomaiw__ joined #gluster
14:41 Chewi left #gluster
14:41 ndevos Thilam: not sure when the debian packages will be available, the packagers will likely wait until the 3.5.1 release is out of beta (hopefully next week) and provide packages after that
14:42 jag3773 joined #gluster
14:44 deepakcs joined #gluster
14:49 vdrandom JoeJulian, I know it because nfs.log is full of this specific message complaining about missing gfid
14:50 haomaiwa_ joined #gluster
14:52 JoeJulian vdrandom: fpaste some of that log
14:53 vdrandom [2014-06-01 00:03:17.459424] E [nfs3-helpers.c:3595:nfs3_fh_resolve_inode_lookup_cbk] 0-nfs-nfsv3: Lookup failed: <gfid:4efb665f-38fc-462a-a7da-50fdfaa64c48>: Invalid argument
14:54 vdrandom or even that, three lines http://pastebin.com/LwaChiBB
14:54 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
14:54 vdrandom :<
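Not something anyone suggests in this exchange, but a common way to map a bare gfid back to a filename is via the .glusterfs hardlink on a brick; a sketch, assuming a regular file and a placeholder brick path:
    # each brick stores the gfid under .glusterfs/<first two hex chars>/<next two>/<full gfid>
    ls -l /export/brick1/.glusterfs/4e/fb/4efb665f-38fc-462a-a7da-50fdfaa64c48
    # for regular files that entry is a hardlink, so the real path shares its inode
    find /export/brick1 -samefile /export/brick1/.glusterfs/4e/fb/4efb665f-38fc-462a-a7da-50fdfaa64c48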
14:56 the-me joined #gluster
14:57 jcsp joined #gluster
14:58 primechuck joined #gluster
15:02 shubhendu joined #gluster
15:03 ekuric joined #gluster
15:03 jag3773 joined #gluster
15:10 coredump joined #gluster
15:15 shyam joined #gluster
15:16 LoudNoises joined #gluster
15:17 haomaiwa_ joined #gluster
15:17 elico I was wondering about write-back options with glusterfs, is it possible to use it?
15:24 aravindavk joined #gluster
15:27 JoeJulian vdrandom: OH. You said the gfid was missing. I read that as the metadata was missing from the file. From what I can find I would suggest remounting.
15:28 chirino joined #gluster
15:28 Thilam ndevos: regarding my problem, do you have a solution to get my cluster working again? deleting quota? ...
15:29 Thilam just have something working until 3.5.1 released
15:29 ndevos Thilam: disabling quota should do it
15:29 Thilam ok, can I do it in config files?
15:29 Thilam directly ?
15:29 ndevos Thilam: yeah, I think so, just have never tried that before
15:30 Thilam stopping all gluster instances, modifying the /var... volume config file on each server
15:30 Thilam this procedure looks good ?
15:30 JoeJulian You'll probably have to do it in the volume's info and $volname-fuse.vol file
15:30 ndevos yeah, that is basically it - make a backup of the config 1st :)
15:30 Thilam ok thx for your help guys
15:31 vdrandom JoeJulian, thanks
15:31 vdrandom will try tomorrow, when the nfs mount is not used by the server
15:32 haomaiw__ joined #gluster
15:33 ctria joined #gluster
15:35 Thilam for your info, there is nothing about quota in $volname-fuse.vol file
15:35 Thilam just in the volume info and the 3 .vol files relating to each brick
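A sketch of the workaround used here, with heavy caveats: the exact key name in the info file is an assumption and may differ by version, so back everything up first, as ndevos advises above:
    # on every server, with glusterd stopped
    service glusterd stop
    cp -a /var/lib/glusterd /var/lib/glusterd.bak
    # flip the quota flag in the volume's info file (key name assumed)
    VOL=myvol
    sed -i 's/^features.quota=on/features.quota=off/' /var/lib/glusterd/vols/$VOL/info
    # the per-brick .vol files under the same directory also needed editing, per Thilam
    service glusterd start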
15:36 recidive joined #gluster
15:36 JoeJulian thanks, I'll remember that for the next time.
15:41 Thilam hagarth / ndevos : for your info, this procedure works and my volume is now on again
15:42 _dist joined #gluster
15:43 ndevos Thilam: thanks for confirming!
15:43 Thilam it's the least I can do :)
15:44 kshlm joined #gluster
15:44 ndevos Thilam: if you are interested in following the progress, add yourself to the CC list on https://bugzilla.redhat.com/1105188
15:44 glusterbot Title: Bug 1105188 Two instances each, of brick processes, glusterfs-nfs and quotad seen after glusterd restart (at bugzilla.redhat.com)
15:45 ndevos that will get you email notifications on when the patch has been backported, a new beta is available, and the bug gets closed on release
15:47 Thilam ok thank you
15:47 Thilam I'll do this
15:47 Alex left #gluster
15:48 Thilam btw I've been on IRC for a good 2 weeks, I'll stay here too :)
15:48 vpshastry joined #gluster
15:52 bala joined #gluster
15:52 ndevos Thilam: welcome to the club :)
15:53 firemanxbr joined #gluster
16:02 JoeJulian Thilam: Glad to have you
16:07 navid__ joined #gluster
16:10 sputnik13 joined #gluster
16:13 aravindavk joined #gluster
16:18 sputnik13 joined #gluster
16:23 sprachgenerator joined #gluster
16:27 jbd1 joined #gluster
16:27 bene2 joined #gluster
16:29 gmcwhist_ joined #gluster
16:31 sprachgenerator_ joined #gluster
16:37 _dist joined #gluster
16:39 sprachgenerator joined #gluster
16:40 jag3773 joined #gluster
16:43 sputnik13 joined #gluster
16:54 fsimonce joined #gluster
16:58 chirino joined #gluster
16:59 mjsmith2 joined #gluster
17:03 sjusthome joined #gluster
17:09 ndk joined #gluster
17:22 kanagaraj joined #gluster
17:23 theron joined #gluster
17:24 shyam1 joined #gluster
17:32 pdrakeweb joined #gluster
17:32 vpshastry joined #gluster
17:34 systemonkey joined #gluster
17:38 andreask joined #gluster
17:42 shyam joined #gluster
17:55 sputnik13 joined #gluster
17:58 _dist joined #gluster
18:00 recidive joined #gluster
18:02 ramteid joined #gluster
18:31 theron joined #gluster
18:31 brad_mssw|work joined #gluster
18:34 brad_mssw joined #gluster
18:39 brad_mssw if a gluster node goes down and comes back up, then a new connection comes into the node ... is it guaranteed to return non-stale data?  I see a note stating self-healing only runs every 10 minutes
18:41 lpabon joined #gluster
18:41 brad_mssw i plan on having 3 gluster servers that are also ovirt nodes
18:41 _dist brad: the unhealthy gluster node will just redirect the client to the healthy one for the stale stuff
18:41 brad_mssw _dist: ok, good to know, thanks
18:41 _dist I've tested this thoroughly myself. This is always the case if you are using libgfapi or fuse, but not NFS
18:42 brad_mssw _dist: ok, I was going to ask about NFS next, since I'll be running the ovirt hosted engine via nfs
18:42 brad_mssw _dist: does that mean I should use something like 'keepalived' with a virtual ip for NFS access?
18:42 _dist so you're ovirt is going to be a vm?
18:43 _dist your*
18:43 brad_mssw yes, the ovirt engine itself will be a vm
18:43 brad_mssw (new to ovirt 3.4)
18:43 _dist controlling a cent or rhel?
18:43 brad_mssw centos 6.5
18:43 _dist cause you want headless, no-hassle, non-fedora ovirt, yes? :)
18:44 brad_mssw right, I'd rather stay away from fedora
18:44 _dist you want to run it in a vm so you can migrate it etc?
18:44 brad_mssw yep, want high availability for the engine
18:45 _dist if I were you I'd run it over a separate volume setup for fuse using gluster instead of NFS
18:45 _dist but you could do VIP, it's just good HA over NFS is tricky
18:46 brad_mssw hmm, ovirt doesn't currently allow you to run posixfs or glusterfs for the hosted engine ..... it wants only an nfs3 or nfs4 server
18:47 _dist how would it know?
18:47 brad_mssw I could in theory set up a local nfs server backed by a gluster volume on each node it could migrate to and use 'localhost', but I'm not convinced that is any better
18:47 brad_mssw it takes an actual url, not a file path and does the actual nfs mount on your behalf, so you're not actually specifying a filesystem path
18:48 _dist right, I'd just trick it, run it in your cent hypervisor, store its image in a gluster fuse mounted path spread across your hypervisors
18:48 _dist then you can live migrate it, etc. But it probably won't be "cool" with that in the sense of migrating itself etc
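As a rough sketch of the layout _dist is describing (host and volume names are placeholders), the engine image simply lives on a FUSE mount that every hypervisor shares:
    # on each CentOS hypervisor
    mkdir -p /var/lib/engine-images
    mount -t glusterfs server1:/enginevol /var/lib/engine-images
    # or make it persistent
    echo 'server1:/enginevol /var/lib/engine-images glusterfs defaults,_netdev 0 0' >> /etc/fstab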
18:49 _dist I'm not too familiar with the new HA features of oVirt, last I tried it I was only mildly happy with it. It'll be awesome to see how your install goes
18:50 brad_mssw right, ovirt wants to manage the availability of the engine itself in 3.4 and it will auto-start it if none of the nodes have the engine running, etc
18:50 brad_mssw well, I guess another question ... whats an easy way to tell if a node is 'stale' or not?
18:50 brad_mssw from the command line
18:51 _dist well the whole node itself won't be stale
18:51 _dist just files on it
18:52 _dist "gluster volume heal volname info" will list the bricks and files that are out of sync
18:52 brad_mssw right, think I'm going to try the keepalived route just to see if it bricks itself
18:52 brad_mssw but I'd like a healthcheck of the current node to see if it is eligible to take the vip
18:53 _dist yeah, that'll be tricky. I guess the one with the longest uptime will typically be the "healthiest" brick but that's iffy, given other FS stuff can go wrong too
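Nobody in the channel settles on a health check, but a very rough sketch of a keepalived track_script for VIP eligibility might look like this (entirely an assumption, not something confirmed above; the volume name is a placeholder):
    #!/bin/sh
    # unhealthy if glusterd is not running
    pidof glusterd >/dev/null || exit 1
    # unhealthy if any brick still has entries pending heal
    gluster volume heal myvol info | grep -q 'Number of entries: [1-9]' && exit 1
    exit 0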
18:53 _dist are you using this for qemu-kvm, xen or ?
18:54 brad_mssw qemu-kvm
18:56 glusterbot New news from newglusterbugs: [Bug 1105277] Failure to execute gverify.sh. <https://bugzilla.redhat.com/show_bug.cgi?id=1105277> || [Bug 1105283] Failure to start geo-replication. <https://bugzilla.redhat.com/show_bug.cgi?id=1105283>
18:58 brad_mssw so I know I need at least 3 gluster nodes for proper quorum support ...
18:58 brad_mssw but I can't use 'replica 2' with 3 nodes though right?
18:58 _dist ok, so I was trying to resist ending up in a google search via gluster's public logs :) but if you're using ovirt because you want libgfapi (which explains your fedora use) I'd recommend proxmox (which I'm using)
18:59 brad_mssw ovirt via libgfapi is completely broken at the moment
18:59 brad_mssw it would be great, but not going to happen for a few months from what I understand due to snapshot issues
19:24 JoeJulian You can use replica 2 with three servers.
19:24 JoeJulian Just have an even number of bricks and iterate.
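JoeJulian's "even number of bricks and iterate" layout, sketched with placeholder server and brick names: consecutive pairs of bricks form the replica sets, so chaining them around three servers keeps every replica pair on two different machines:
    gluster volume create myvol replica 2 \
        server1:/bricks/b1 server2:/bricks/b1 \
        server2:/bricks/b2 server3:/bricks/b2 \
        server3:/bricks/b3 server1:/bricks/b3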
19:28 rotbeard joined #gluster
19:31 ekuric joined #gluster
19:33 _dist JoeJulian: let me know any time you're ready to do testing with that heal issue. I'm currently migrating my replication volume over to a zfs backend that uses non-file based xattrs (something I only found out recently)
19:33 _dist I'm not pushing, I'm just saying whenever you need resources on it, I'm ready.
19:34 JoeJulian _dist: It's second on my list. I have a vast array of resources available to me now. First is a rebalance issue that crashes all my clients.
19:49 lpabon joined #gluster
19:54 klaas joined #gluster
20:05 klaas joined #gluster
20:13 andreask joined #gluster
20:23 sputnik13 joined #gluster
20:27 karimb joined #gluster
20:28 sroy_ joined #gluster
20:28 karimb hi guys, what does the following setting do ? gluster volume set <volname> server.manage-gids on
20:32 n0de http://www.gluster.org/wp-content/uploads/2012/05/Gluster_File_System-3.3.0-Administration_Guide-en-US.pdf
20:32 n0de karimb: ^^ for you
20:39 pdrakeweb joined #gluster
21:13 edward1 joined #gluster
21:17 mjsmith2 joined #gluster
21:34 mtanner_ joined #gluster
21:41 gmcwhist_ joined #gluster
21:48 sjm left #gluster
21:52 sprachgenerator joined #gluster
22:02 gmcwhist_ joined #gluster
22:05 borreman_dk joined #gluster
22:33 _dist anyone played with tcp window size on volumes?
22:34 pdrakeweb joined #gluster
22:38 sprachgenerator joined #gluster
22:46 Ark joined #gluster
23:00 sputnik13 joined #gluster
23:05 joostini joined #gluster
23:10 iktinos joined #gluster
23:11 gildub joined #gluster
23:15 mjsmith2 joined #gluster
23:19 mkzero joined #gluster
23:34 sputnik13 joined #gluster
23:47 pdrakeweb joined #gluster
23:47 sprachgenerator joined #gluster
23:56 sjm joined #gluster
23:59 plarsen joined #gluster
