IRC log for #gluster, 2014-03-05

All times shown according to UTC.

Time Nick Message
00:16 tokik joined #gluster
00:18 gdubreui joined #gluster
00:28 andreask joined #gluster
00:30 YazzY joined #gluster
00:30 YazzY joined #gluster
00:34 jtux joined #gluster
00:37 JoeJulian file a bug
00:37 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
00:44 dewey joined #gluster
00:49 dewey_ joined #gluster
00:52 elyograg interesting.  accessing .glusterfs through a fuse mount results in 'operation not permitted' - rather than just pretending it doesn't exist.
00:53 JoeJulian Makes sense. Otherwise the os would let you create it.
00:53 elyograg did not think of that.
00:53 JoeJulian I'm glad they did. :D
00:53 elyograg I'm writing a script to gather info about my heal info gfids.
00:54 elyograg it takes what it finds at the end of a readlink and stats it via the mount point.  when it's a gluster link file, that doesn't work so well. ;)
00:56 elyograg if stat says it has more than one hardlink, it searches the brick for the inum.  that part is slow.
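(For reference, a minimal brick-side sketch of the triage elyograg describes; the gfid argument comes from "gluster volume heal VOL info" and the brick path is hypothetical. The .glusterfs layout is <brick>/.glusterfs/<first two hex chars>/<next two>/<full gfid>.)
    GFID="$1"                                   # a gfid from "gluster volume heal VOL info"
    BRICK=/bricks/b1                            # hypothetical brick path
    GPATH="$BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID"
    if [ -L "$GPATH" ]; then
        readlink "$GPATH"                       # directories are stored as symlinks
    elif [ "$(stat -c %h "$GPATH")" -gt 2 ]; then
        # more links than the file plus its gfid entry: hunt the brick by inode (the slow part)
        find "$BRICK" -path "$BRICK/.glusterfs" -prune -o -inum "$(stat -c %i "$GPATH")" -print
    fi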
00:56 vpshastry joined #gluster
00:57 tdasilva left #gluster
00:57 JoeJulian Once cached, the inode lookups should be quicker.
00:58 cp0k_ joined #gluster
01:00 JoeJulian If it's a DHT linkfile, just remove it and its gfid counterpart. They'll get recreated on the first dht miss.
01:00 elyograg yep.  I worked that part out.  It's the directories that are giving me fits.
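(A hedged sketch of the removal JoeJulian suggests; the brick path is hypothetical. The gfid entry under .glusterfs is a hardlink to the same inode as the linkfile, so it can be found by inode number.)
    F=/bricks/b1/path/to/linkfile               # the zero-byte, mode ---------T pointer file
    find /bricks/b1/.glusterfs -inum "$(stat -c %i "$F")" -delete
    rm -f "$F"                                  # both are recreated on the next dht miss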
01:02 JoeJulian I was looking for an option... I thought I remembered one where you could force afr to use one subvolume to heal from.
01:04 dewey_ joined #gluster
01:07 elyograg gluster-users is a mailing list that does not set Reply-To to the list.  This always trips me up.
01:11 vpshastry joined #gluster
01:14 jporterfield joined #gluster
01:16 elyograg JoeJulian: if you find something, I will read the backlog here, or the mailing list works too.
01:16 JoeJulian ok
01:25 Philambdo joined #gluster
01:28 harish_ joined #gluster
01:32 doekia joined #gluster
01:45 vpshastry joined #gluster
02:07 dewey joined #gluster
02:21 haomaiwang joined #gluster
02:26 Matthaeus joined #gluster
02:32 jporterfield joined #gluster
02:34 DV joined #gluster
02:55 tokik joined #gluster
03:03 chirino joined #gluster
03:11 bharata-rao joined #gluster
03:21 harish joined #gluster
03:25 chirino_m joined #gluster
03:33 Yuan__ joined #gluster
03:33 Yuan__ left #gluster
03:35 itisravi joined #gluster
03:35 Yuan__ joined #gluster
03:44 davinder joined #gluster
03:46 latha joined #gluster
03:58 ndarshan joined #gluster
03:59 chirino joined #gluster
03:59 saurabh joined #gluster
04:00 shubhendu joined #gluster
04:09 atrius` joined #gluster
04:09 YazzY joined #gluster
04:21 CheRi joined #gluster
04:26 chirino_m joined #gluster
04:29 kshlm joined #gluster
04:30 rjoseph joined #gluster
04:35 kaushal_ joined #gluster
04:37 haomaiwang joined #gluster
04:41 deepakcs joined #gluster
04:41 hagarth joined #gluster
04:45 ppai joined #gluster
04:46 bala joined #gluster
04:52 kdhananjay joined #gluster
04:55 DV joined #gluster
04:57 snehalphule_ joined #gluster
04:59 mattapperson joined #gluster
05:03 satheesh joined #gluster
05:15 lalatenduM joined #gluster
05:23 vpshastry joined #gluster
05:23 mohankumar__ joined #gluster
05:25 cjanbanan joined #gluster
05:28 chirino joined #gluster
05:28 aravindavk joined #gluster
05:37 nshaikh joined #gluster
05:40 ajha joined #gluster
05:46 raghu joined #gluster
05:51 vimal joined #gluster
05:53 jporterfield joined #gluster
06:02 rahulcs joined #gluster
06:08 Yuan__ joined #gluster
06:14 benjamin_____ joined #gluster
06:15 shylesh joined #gluster
06:21 saurabh joined #gluster
06:24 ajha joined #gluster
06:25 rahulcs joined #gluster
06:35 toki joined #gluster
06:39 rahulcs joined #gluster
06:39 aravindavk joined #gluster
06:52 ricky-ti1 joined #gluster
06:54 ajha joined #gluster
07:00 aravindavk joined #gluster
07:01 kevein joined #gluster
07:02 ppai joined #gluster
07:02 ProT-0-TypE joined #gluster
07:06 davinder joined #gluster
07:10 Philambdo joined #gluster
07:22 rahulcs joined #gluster
07:27 rossi_ joined #gluster
07:45 ctria joined #gluster
07:46 jporterfield joined #gluster
07:50 irctc720 joined #gluster
07:53 rwheeler joined #gluster
07:56 irctc720 joined #gluster
07:58 satheesh joined #gluster
07:59 irctc720_ joined #gluster
08:10 atrius_ joined #gluster
08:11 LinhHoang joined #gluster
08:11 para joined #gluster
08:11 para hello
08:11 glusterbot para: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
08:12 para i have a question about gluster fs
08:12 junaid joined #gluster
08:17 fidevo joined #gluster
08:18 harish joined #gluster
08:23 velladecin joined #gluster
08:24 Yaz joined #gluster
08:24 ProT-0-TypE joined #gluster
08:25 vpshastry joined #gluster
08:33 snehal joined #gluster
08:34 rahulcs joined #gluster
08:37 al joined #gluster
08:37 rahulcs joined #gluster
08:45 rfortier1 joined #gluster
08:48 andreask joined #gluster
08:49 rossi_ joined #gluster
09:00 chirino_m joined #gluster
09:01 haomaiwang joined #gluster
09:01 hagarth joined #gluster
09:01 23LAAJFGD joined #gluster
09:01 17SAAODR4 joined #gluster
09:01 marcoceppi joined #gluster
09:01 fsimonce joined #gluster
09:01 morse joined #gluster
09:01 qdk joined #gluster
09:01 Kins joined #gluster
09:01 Peanut joined #gluster
09:01 cfeller joined #gluster
09:01 keytab joined #gluster
09:01 ninkotech_ joined #gluster
09:01 primechuck joined #gluster
09:01 Joe630 joined #gluster
09:01 shubhendu joined #gluster
09:01 YazzY joined #gluster
09:01 deepakcs joined #gluster
09:01 toki joined #gluster
09:01 ngoswami joined #gluster
09:01 kris joined #gluster
09:01 hagarth1 joined #gluster
09:02 rgustafs joined #gluster
09:03 ctria joined #gluster
09:04 nshaikh joined #gluster
09:05 liquidat joined #gluster
09:07 CheRi joined #gluster
09:08 CheRi joined #gluster
09:08 irctc720_ joined #gluster
09:12 benjamin_____ joined #gluster
09:18 rahulcs joined #gluster
09:19 TvL2386 joined #gluster
09:22 rahulcs joined #gluster
09:25 psharma joined #gluster
09:25 cjanbanan joined #gluster
09:29 ndarshan joined #gluster
09:33 denthrax joined #gluster
09:38 bala joined #gluster
09:46 hagarth joined #gluster
09:48 prasanth joined #gluster
09:49 Pavid7 joined #gluster
09:53 ndarshan joined #gluster
09:56 rahulcs joined #gluster
09:58 shubhendu joined #gluster
10:01 saurabh joined #gluster
10:01 tokik joined #gluster
10:01 glusterbot New news from newglusterbugs: [Bug 1072854] Volume mount/unmount using libgfapi has memory leaks <https://bugzilla.redhat.com/show_bug.cgi?id=1072854>
10:05 lalatenduM joined #gluster
10:06 toki joined #gluster
10:08 vpshastry1 joined #gluster
10:13 cedric___ joined #gluster
10:16 diegows joined #gluster
10:17 gmcwhistler joined #gluster
10:20 ctria joined #gluster
10:26 ndarshan joined #gluster
10:27 HeisSpiter joined #gluster
10:28 HeisSpiter Hey, I'm having an issue at starting a volume (that used to work...), and gluster is totally silent about the errors it encounters
10:28 HeisSpiter Is there any way I can debug it? Find what's the issue?
10:38 FIBERIT joined #gluster
10:38 FIBERIT Hi
10:38 glusterbot FIBERIT: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
10:39 FIBERIT can anyone help me install gluster on oVirt 3.3 ?
10:41 samppah FIBERIT: do you want to use Ovirt to manage glusterfs or using gluster as storage?
10:44 FIBERIT Let me explain
10:44 FIBERIT We have 1 Engine + 2 nodes
10:45 FIBERIT we need to use storage from current space located on nodes
10:48 Guest72372 joined #gluster
10:52 prasanth joined #gluster
10:54 FIBERIT samppah ?
10:55 hybrid512 joined #gluster
11:04 samppah FIBERIT: sorry, had to focus on $dayjob for a moment
11:05 FIBERIT ok
11:07 samppah FIBERIT: so basically you want to run oVirt hypervisors and glusterfs server on the same machine?
11:12 satheesh joined #gluster
11:26 X3NQ joined #gluster
11:27 khushildep joined #gluster
11:30 rahulcs joined #gluster
11:37 dusmant joined #gluster
11:39 Pavid7 joined #gluster
11:43 psharma joined #gluster
11:51 Nuxr0 joined #gluster
11:52 aravindavk joined #gluster
11:52 tjikkun_work_ joined #gluster
11:52 ajha joined #gluster
11:53 davinder joined #gluster
11:55 ndarshan joined #gluster
11:55 fyxim_ joined #gluster
11:55 rogierm joined #gluster
11:56 ctrianta joined #gluster
11:56 badone joined #gluster
11:56 delhage joined #gluster
11:56 delhage joined #gluster
11:56 JordanHackworth joined #gluster
11:56 mkzero joined #gluster
11:56 satheesh2 joined #gluster
11:56 delhage joined #gluster
11:57 ricky-ticky1 joined #gluster
11:58 Guest37849 joined #gluster
12:00 tdasilva joined #gluster
12:18 andreask joined #gluster
12:28 calum_ joined #gluster
12:29 satheesh joined #gluster
12:31 ricky-ticky joined #gluster
12:34 spandit joined #gluster
12:34 al joined #gluster
12:34 ndarshan joined #gluster
12:35 shubhendu joined #gluster
12:39 ricky-ticky1 joined #gluster
12:42 Pavid7 joined #gluster
12:45 TvL2386 joined #gluster
12:47 social kkeithley: ping, how is it with deprecating options? should an option be dropped completely when I'm removing it, or should I create some warning? It seems to already be there: when I look at the code it'll log it, but that's probably all
12:50 kkeithley I'm not sure what our policy is. I suggest asking in #gluster-dev. Hopefully someone who knows will respond.
12:52 kkeithley1 joined #gluster
12:52 abyss^ joined #gluster
12:54 harish joined #gluster
12:56 saurabh joined #gluster
13:00 stickyboy joined #gluster
13:05 benjamin_____ joined #gluster
13:14 cjanbanan joined #gluster
13:23 tdasilva left #gluster
13:28 nightwalk joined #gluster
13:30 edward1 joined #gluster
13:39 mattapperson joined #gluster
13:49 Guest559 joined #gluster
13:49 diegows joined #gluster
13:51 jiffe98 joined #gluster
13:53 rahulcs joined #gluster
13:53 plarsen joined #gluster
13:55 SteveCooling joined #gluster
13:57 saurabh joined #gluster
14:04 dbruhn joined #gluster
14:08 nightwalk joined #gluster
14:09 sroy joined #gluster
14:10 sroy joined #gluster
14:13 ndk joined #gluster
14:16 bennyturns joined #gluster
14:24 khushildep joined #gluster
14:27 davinder joined #gluster
14:31 sticky_afk joined #gluster
14:31 stickyboy joined #gluster
14:32 rahulcs joined #gluster
14:39 rahulcs joined #gluster
14:40 dusmant joined #gluster
14:45 theron joined #gluster
14:46 nightwalk joined #gluster
14:46 theron_ joined #gluster
14:47 zaitcev joined #gluster
14:47 rahulcs joined #gluster
14:51 benjamin_ joined #gluster
14:52 kdhananjay joined #gluster
14:54 sas_ joined #gluster
15:00 japuzzo joined #gluster
15:01 vpshastry joined #gluster
15:02 doekia @HeisSpiter: what is the output of gluster volume status?
15:02 lalatenduM joined #gluster
15:02 kkeithley gluster community meeting in #gluster-meeting starting now
15:07 bugs_ joined #gluster
15:08 dbruhn_ joined #gluster
15:08 cp0k_ joined #gluster
15:09 Philambdo joined #gluster
15:14 aravindavk joined #gluster
15:14 jobewan joined #gluster
15:14 tdasilva joined #gluster
15:18 plarsen joined #gluster
15:18 sahina joined #gluster
15:20 ccha where can I find any documentation about ./glusterfs/indices/xattrop ?
15:24 cp0k joined #gluster
15:26 nightwalk joined #gluster
15:31 aravindavk joined #gluster
15:34 flrichar joined #gluster
15:39 cjanbanan joined #gluster
15:39 mattappe_ joined #gluster
15:41 chirino joined #gluster
15:54 vpshastry joined #gluster
15:56 RayS joined #gluster
15:57 kdhananjay joined #gluster
15:58 aravindavk joined #gluster
15:59 lmickh joined #gluster
16:00 rpowell joined #gluster
16:01 kaptk2 joined #gluster
16:02 sprachgenerator joined #gluster
16:05 jbrooks Hey guys, I have a gluster/networking Q -- I want to move my gluster traffic from my current, single network, to a separate, storage-only network. Any tips on the best way to do that? For instance, do I give my peers addresses on the new network, and then peer probe at those addresses? Or if I just mount from the new address will gluster figure it out?
16:06 glusterbot New news from newglusterbugs: [Bug 1073023] glusterfs mount crash after remove brick, detach peer and termination <https://bugzilla.redhat.com/show_bug.cgi?id=1073023>
16:07 satheesh joined #gluster
16:08 jclift jbrooks: Gluster can only use one IP address per host, so you'll need to design your connectivity around that.
16:08 purpleidea jbrooks: using DNS is probably best... that way you shouldn't need to update things... remember that the clients should be on the "storage" network, as this is where most of the traffic happens
16:09 jbrooks jclift: ok, so, basically, I'm taking down peers, re-adding them w/ their new addresses?
16:10 jbrooks purpleidea: this is for my ovirt+gluster hookup
16:10 jbrooks So the clients will be the peers, as well
16:10 sticky_afk joined #gluster
16:10 stickyboy joined #gluster
16:10 ccha is there any documenation about the ./glusterfs/indices/xattrop folder ?
16:10 purpleidea jbrooks: ah. okay. well so if you can peer using fqdn's, i think that's recommended. that's what i do, and you then don't have to worry about changing IP's.
16:11 purpleidea (although for a live migration to new ip's i'd still only change one dns entry at a time, or just schedule downtime and shutdown the hosts, change the ip's in dns and then bring up.)
16:13 ndevos jbrooks: http://hekafs.org/index.php/2013/01/split-and-secure-networks-for-glusterfs/
16:13 jbrooks purpleidea: well, I have fqdn's that I want to leave on their current network, so I'll have to do some disassembly/reassembly, I think, but that's ok
16:13 jbrooks ndevos: I'll check that out, thanks
16:14 purpleidea jbrooks: you could do split view DNS at that point...
16:14 ndevos jbrooks: split-dns is one way, fancy iptables rules is an other option
16:15 ndevos jbrooks: the problem is that any glusterfs-client (fuse, nfs, gfapi, ..) receives the .vol file from the storage servers, this .vol file contains ipaddresses or hostnames that were used to create the volume
16:15 ndevos and the client will use that info to connect to the bricks
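(One low-tech form of the split-dns being discussed is per-network hosts overrides, assuming the bricks were defined by hostname; names and addresses here are purely illustrative.)
    # /etc/hosts on storage-network clients -- brick hostnames resolve to storage NICs
    10.10.0.11    gluster1.example.com
    10.10.0.12    gluster2.example.com
    # /etc/hosts on other machines -- the same names resolve to front-network addresses
    192.168.1.11  gluster1.example.com
    192.168.1.12  gluster2.example.com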
16:16 dewey joined #gluster
16:17 diegows joined #gluster
16:21 nightwalk joined #gluster
16:25 FIBERIT joined #gluster
16:25 [o__o] joined #gluster
16:27 FIBERIT can anyone help me with an error ovirt + gluster
16:27 FIBERIT ?
16:30 mattappe_ joined #gluster
16:31 jbrooks FIBERIT: maybe, what's the error?
16:32 hagarth joined #gluster
16:33 zerick joined #gluster
16:37 glusterbot New news from newglusterbugs: [Bug 1066778] Make AFR changelog attributes persistent and independent of brick position <https://bugzilla.redhat.com/show_bug.cgi?id=1066778>
16:39 chirino_m joined #gluster
16:41 andreask joined #gluster
16:41 HeisSpiter [16:02:44] <doekia> @HeisSpiter: what is the output of gluster volume status? <== it says it is not started
16:42 doekia and volume info ... does it match what you expect?
16:43 johnbot11 joined #gluster
16:44 HeisSpiter It does
16:44 HeisSpiter There as well, it confirms 'Status: stopped'
16:45 doekia do you have an output, or does it just complain, such as the server itself not being started?
16:46 HeisSpiter I've absolutely nothing
16:46 HeisSpiter Be it in logs or on shell
16:46 HeisSpiter It just states: "failed"
16:46 doekia /etc/init.d/glusterfs-server stop
16:47 doekia then kill all gluster process
16:47 doekia it happens that some resist and require -9
16:47 doekia once done restart /etc/init.d/glusterfs-server start
16:49 HeisSpiter Still failing
16:49 Yaz left #gluster
16:49 doekia still empty logs?
16:52 HeisSpiter http://pastebin.com/1beKg19Y
16:52 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
16:52 HeisSpiter Only this, in cli.log
16:55 HeisSpiter (with correct paste: http://fpaste.org/82658/38514139/ :-))
16:55 glusterbot Title: #82658 Fedora Project Pastebin (at fpaste.org)
16:56 doekia a particular reason to use socket for transport? I have it on tcp personally
16:56 HeisSpiter I don't get this error actually
16:56 HeisSpiter When I created the volume I set transport tcp
16:57 HeisSpiter And if you look into volume config, it's set as transport tcp
16:57 rahulcs joined #gluster
16:58 FIBERIT Error ovirt + gluster : http://www.fiberit.ro/files/ovirt.jpg
16:58 doekia almost like it cannot read the volume file ...  let me think
16:59 HeisSpiter If you need any detail about the config, feel free to ask
16:59 rahulcs joined #gluster
17:00 semiosis doekia: whats your question about debs?
17:00 doekia @semiosis: debs? you talk about my comment during the meeting?
17:01 semiosis right
17:02 doekia I have found pretty surprising that the init.d script is dependent of $remote_fs ... sounds like gluster-server is the $remote_fs provider no?
17:03 nightwalk joined #gluster
17:04 semiosis doekia: is there a problem you're trying to solve?  or just looking for a problem to solve? ;)
17:04 semiosis you know... i did this, i expected one thing, but something else happened instead
17:05 semiosis i'm no expert on initscripts, but i can look into it if you can give me steps to reproduce a problem
17:06 doekia ;-) I can't get a proper rc3.d sequence ... I actually rely on some extra mount -a in my rc.local
17:06 semiosis doekia: did you add _netdev to the options for your glusterfs mount in fstab?  i think debian needs that (although ubuntu does not)
17:06 doekia while looking at the cause this Required-Start: statement puzzled me, hence my question
17:06 doekia Yes _netdev
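(For reference, a typical fstab entry for a glusterfs fuse mount with that option; volume name and mount point are hypothetical.)
    server1:/myvol  /mnt/myvol  glusterfs  defaults,_netdev  0 0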
17:07 semiosis hmm
17:08 semiosis doekia: can you do this test?  truncate or delete the client log file, reboot, and pastie the log file
17:09 semiosis doekia: it's possible the mount *is* being tried at boot, but is failing for some other reason, then when it is retried again by rc.local it works
17:09 semiosis the log would show if there is a failed attempt
17:09 semiosis sometimes this happens as a result of the network not being ready, which could be for various reasons
17:09 doekia sure but ... the log did not become active during the startup sequence ... rsyslog start is S15, gluster also
17:10 doekia I'm booting right now
17:10 semiosis doekia: glusterfs doesnt use syslog
17:10 semiosis it writes directly to logs
17:11 doekia sure, but other logs depend on rsyslog if we want some sort of timeframe map
17:12 Pavid7 joined #gluster
17:15 semiosis one thing at a time
17:17 dbruhn joined #gluster
17:18 lalatenduM joined #gluster
17:18 doekia which log do you want me to paste?
17:18 doekia nfs.log
17:19 doekia the nfs.log http://fpaste.org/82670/13940399/
17:19 glusterbot Title: #82670 Fedora Project Pastebin (at fpaste.org)
17:20 rahulcs joined #gluster
17:27 rpowell joined #gluster
17:29 hagarth joined #gluster
17:30 Mo__ joined #gluster
17:32 TvL2386 joined #gluster
17:33 Matthaeus joined #gluster
17:34 vpshastry joined #gluster
17:35 latha joined #gluster
17:41 dbruhn joined #gluster
17:41 HeisSpiter doekia, still around?
17:41 doekia yes...
17:42 ctrianta joined #gluster
17:42 HeisSpiter Does glusterfs needs to start a daemon listening on some port for starting a volume?
17:46 HeisSpiter (trying to strace to get some info...)
17:47 nightwalk joined #gluster
17:51 rpowell joined #gluster
17:54 HeisSpiter Here it is, just in case it may help...
17:54 HeisSpiter http://fpaste.org/82679/40420611/
17:54 glusterbot Title: #82679 Fedora Project Pastebin (at fpaste.org)
17:55 rpowell1 joined #gluster
17:57 rahulcs joined #gluster
17:59 sputnik13 joined #gluster
18:03 diegows joined #gluster
18:04 sputnik13 joined #gluster
18:06 Joe630 raaaaaaaar
18:06 Joe630 gluster volume set <VOLNAME> nfs.disable off
18:07 Joe630 double negative config options must die
18:08 samppah =)
18:08 Joe630 wait i said that wrong
18:08 Joe630 double negative config options must not live
18:09 Joe630 no negative config options must not die
18:09 samppah i'm getting even more confused than nfs.disable off
18:10 Joe630 exactly.
18:11 rahulcs joined #gluster
18:14 DV joined #gluster
18:14 semiosis doekia: sorry i'm going to be busy for a while.  are you using a glusterfs fuse mount?  i need the log from that. please pastie it & i'll review when i have a chance later
18:16 kris joined #gluster
18:24 sputnik13 joined #gluster
18:27 nueces joined #gluster
18:29 aravindavk joined #gluster
18:31 nightwalk joined #gluster
18:32 sputnik13 joined #gluster
18:34 rossi_ joined #gluster
18:40 rahulcs joined #gluster
18:40 doekia @semiosis: no I use an nfs client mount (had I been able to achieve similar performance I would have preferred fuse, but to date ...)
18:45 sputnik13 joined #gluster
18:45 semiosis doekia: what is your exact line from fstab?
18:45 doekia localhost:/www/var/wwwnfs vers=3,proto=tcp,nolock,defaults,fsc,_netdev0 0
18:46 rpowell joined #gluster
18:47 semiosis first of all, that's incomplete, and I'm too busy to go back & forth asking for each part of the line
18:47 doekia the line is complete!
18:47 semiosis second of all, you should *never* mount nfs from localhost -- it will deadlock under load
18:48 semiosis doekia: that line is missing both a mount point & a fstype
18:48 rpowell2 joined #gluster
18:48 semiosis what kind of debian is this???
18:48 semiosis is this even an fstab file?
18:48 doekia debian7
18:48 HeisSpiter it has o_O
18:48 semiosis am I missing some spaces?
18:48 semiosis sorry
18:48 HeisSpiter sounds like
18:49 doekia localhost:/www the volume
18:49 semiosis i see one big (and weird) remote nfs path
18:49 doekia /var/www the mount pount
18:49 doekia nfs the type
18:49 semiosis oh ok
18:49 semiosis well it's all run together
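(Spaced out per doekia's breakdown above, the pasted fstab line reads:)
    localhost:/www  /var/www  nfs  vers=3,proto=tcp,nolock,defaults,fsc,_netdev  0 0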
18:49 semiosis anyway, the larger issue is the localhost nfs mount, don't do that
18:50 doekia we already talked about that yesterday
18:51 doekia to date I have done billions of tests crashing the system on purpose, bringing the network down, restarting during copy etc ... I haven't faced any problem at all
18:51 semiosis could you please give me a link to the conversation in the channel logs? so i can catch up?
18:52 doekia ?? sorry I don't have a clue what you are talking about ??
18:52 semiosis irc channel logs, from the /topic
18:53 semiosis with the conversation yesterday about localhost nfs mounts
18:53 doekia :-( sorry I don't know how to do that... was an exchange with ndevos
18:55 doekia could it be what you are asking for? http://irclog.perlgeek.de/gluster/2014-03-03#i_8376000
18:55 glusterbot Title: IRC log for #gluster, 2014-03-03 (at irclog.perlgeek.de)
18:56 semiosis doekia: https://botbot.me/freenode/gluster/msg/11647718/
18:56 glusterbot Title: Logs for #gluster | BotBot.me [o__o] (at botbot.me)
18:56 semiosis and ndevos even said to not make a localhost nfs mount!
18:57 semiosis https://botbot.me/freenode/gluster/msg/11647586/
18:57 glusterbot Title: Logs for #gluster | BotBot.me [o__o] (at botbot.me)
18:58 doekia he says put nolock to ensure locking is done by gluster
18:59 doekia and again I do not have nfs server just gluster with nfs xlator
18:59 semiosis gluster is your nfs server
18:59 doekia thrue
18:59 semiosis you have an nfs server, it is gluster
18:59 doekia true
18:59 JonathanD joined #gluster
19:00 semiosis well that nolock thing is interesting.  never heard of that before
19:00 doekia ;-) to be honest all tests done here are impressive: no data corruption, no problems
19:01 doekia only problem is performances and this small glitch requiring rc.local (I can live w/ this one )
19:02 doekia I run stock php app and fuse client is not as good as nfs client
19:02 semiosis does your app write through the nfs mount?
19:02 semiosis or just read?
19:03 doekia and adding cachefilesd make it almost good
19:03 elyograg gluster volume status volname nfs clients does not appear to actually work.
19:03 doekia yes read + some write (few to be honest) ... sessions/tmp is on another fs,
19:04 yosafbridge joined #gluster
19:04 elyograg our mrs. reynolds has joined us.
19:06 elyograg I'd really like to know what Whedon was going to do with her.
19:11 semiosis doekia: well if you're happy with your nfs setup then good for you.  the usual recommendation is to use a fuse mount & tune your web server (when you can't tune the app) for speed
19:11 semiosis doekia: all in all I dont see any indication there's a problem with the deb packages
19:12 semiosis doekia: i wonder how well this works, if at all, on redhat distros
19:12 semiosis can anyone give that a try?  a localhost gluster nfs mount at boot time... does it work?
19:12 doekia I have asked here for ages to no avail ... with fuse my app takes 6s while it takes 500ms on SSD and 900ms with nfs client + cachefilesd
19:12 semiosis kkeithley: ^^ ?
19:13 doekia I'll be happy to go fuse if ever I can find proper doc or help :-)
19:13 semiosis doekia: you can tune APC to not stat files
19:13 semiosis doekia: you can tune your php include path to have the most likely path first, least likely last
19:13 irctc720_ joined #gluster
19:13 doekia it is already
19:14 semiosis if apc isn't stating then it shouldn't need to go to the fs at all to include files
19:14 doekia I have used that to bench http://www.gluster.org/pipermail/gluster-users/2013-April/035940.html
19:14 glusterbot Title: [Gluster-users] Low (<0.2ms) latency reads, is it possible at all? (at www.gluster.org)
19:14 doekia And frankly, no doubt the nfs client is far ahead of fuse
19:17 semiosis nfs does attribute caching by default (you can disable this with 'noac' option).  but if you set APC correctly to disable stat, it shouldn't even *ask* for attributes, regardless of whether they're cached in the vfs
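(The APC tuning semiosis refers to is the apc.stat directive; a sketch of the relevant php.ini settings, values illustrative.)
    ; skip the per-request stat() of cached files
    apc.stat = 0
    ; most likely include directory first, least likely last
    include_path = ".:/var/www/lib:/usr/share/php"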
19:17 levkray joined #gluster
19:17 semiosis anyway, gotta get back to work, bbl
19:17 doekia tx
19:19 nightwalk joined #gluster
19:25 Matthaeus1 joined #gluster
19:31 khushildep joined #gluster
19:37 JoeJulian localhost gluster nfs mounts have been problematic in the past due to kernel memory management deadlocks. No idea if those have been addressed in the kernel though.
19:38 elyograg JoeJulian: I haven't gotten anywhere with getting help on directory healing.  do you know what I need to do?
19:38 glusterbot New news from newglusterbugs: [Bug 1073111] %post install warning for glusterfs-server that it can't start /etc/init.d/glusterfsd (on EL6) <https://bugzilla.redhat.com/show_bug.cgi?id=1073111>
19:39 JoeJulian elyograg: do you still have the link to a "getfattr -m . -d -e hex" of one of those directories?
19:39 elyograg I'll get one.
19:43 elyograg http://fpaste.org/82704/94048590/
19:43 glusterbot Title: #82704 Fedora Project Pastebin (at fpaste.org)
20:04 B21956 joined #gluster
20:06 primechuck Is there a way to see why a file isn't healing?  There is a backing file that is constantly being healed, but doesn't appear to have errors outside of gluster or split-brain inside gluster
20:06 nightwalk joined #gluster
20:08 JoeJulian elyograg: See if you can touch mdfs/RNI/rniphotos/docs/030 and affect those attributes in any way.
20:08 elyograg via the mount or on a brick?
20:08 JoeJulian mount
20:10 elyograg that did not appear to change anything.
20:11 cp0k joined #gluster
20:11 cjanbanan joined #gluster
20:13 JoeJulian then I would just delete the trusted.afr attributes from directories.
20:13 elyograg I tried that.  Still in the heal info list.
20:14 B21956 joined #gluster
20:14 HeisSpiter What if I completely delete my glusterfs configuration, but keep data on replicated nodes (2xY nodes)
20:14 elyograg I haven't gone through all the directories yet to make sure they're all OK, but I suspect they will be.
20:14 HeisSpiter If I recreate volumes, will it erase data, or keep on going with them?
20:14 HeisSpiter As if nothing happened?
20:17 elyograg should have the same data.  You have to remove xattrs from the root brick directories or it will complain about the brick already being part of a cluster.
20:18 HeisSpiter the .glusterfs dir?
20:18 elyograg no, the one above that.
20:24 JoeJulian @path or prefix
20:24 glusterbot JoeJulian: http://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/
20:24 JoeJulian HeisSpiter: ^^
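(Per the linked post, clearing the brick-root signatures looks roughly like this; the brick path is hypothetical and the volume is recreated afterwards.)
    BRICK=/bricks/b1
    setfattr -x trusted.glusterfs.volume-id "$BRICK"
    setfattr -x trusted.gfid "$BRICK"
    rm -rf "$BRICK/.glusterfs"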
20:24 HeisSpiter ok
20:24 HeisSpiter good to know
20:24 HeisSpiter Will try that tomorrow
20:25 HeisSpiter thanks
20:25 HeisSpiter (still trying to recover my thing)
20:26 JoeJulian Heh phrasing immediately brought http://en.wikipedia.org/wiki/John_and_Lorena_Bobbitt to mind.
20:26 glusterbot Title: John and Lorena Bobbitt - Wikipedia, the free encyclopedia (at en.wikipedia.org)
20:30 rwheeler joined #gluster
20:31 qdk joined #gluster
20:33 aixsyd_ joined #gluster
20:33 aixsyd_ JoeJulian: Got another getfattr issue you can probably shed light on
20:34 aixsyd_ http://fpaste.org/82737/40516401/
20:34 glusterbot Title: #82737 Fedora Project Pastebin (at fpaste.org)
20:34 Matthaeus joined #gluster
20:34 aixsyd_ http://fpaste.org/82738/13940516/  <-- perpetual needing to be healed only on one node? o.O
20:34 glusterbot Title: #82738 Fedora Project Pastebin (at fpaste.org)
20:38 elyograg JoeJulian: I am starting to suspect that I need to verify things re the same, remove the trusted.afr attributes, and then find the file in xattrop and delete it.
20:39 elyograg deleting attributes and then running 'stat' doesn't take care of it.  if I wait ten minutes for the daemon to wake up, it still doesn't get removed from the list.
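(A hedged sketch of the plan elyograg outlines; volume, brick, and attribute names are hypothetical and should be read from the getfattr output first.)
    F=/bricks/b1/stuck/dir
    getfattr --absolute-names -m trusted.afr -d -e hex "$F"   # note the exact attribute names
    setfattr -x trusted.afr.myvol-client-0 "$F"               # one setfattr per listed attribute
    setfattr -x trusted.afr.myvol-client-1 "$F"
    # the index entry is named after the entry's gfid (dashed form)
    rm -f /bricks/b1/.glusterfs/indices/xattrop/<gfid>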
20:40 JoeJulian Are you sure that's not just activity?
20:40 aixsyd_ me?
20:40 JoeJulian yes
20:40 JoeJulian sorry
20:40 aixsyd_ theres 0 io activity
20:41 JoeJulian Also on a phone call and trying to solve a routing issue with openvswitch.
20:41 aixsyd_ np =)
20:41 JoeJulian aixsyd_: if you try stat'ing the file on a fuse client, is there anything in the client log about it?
20:42 aixsyd_ also, "stat"ing?
20:42 JoeJulian elyograg: makes sense. For that matter, maybe even just deleting it from xattrop.
20:42 aixsyd_ er, stating?
20:43 JoeJulian stat $file
20:43 aixsyd_ thats simple XD
20:43 cp0k JoeJulian: Hey, me again....so I finally was able to fix all my split-brain, add my new peers / storage nodes in. What I did next was start the 'fix-layout' via the "two process" route. One to fix layout, and two to merge data. At first the fix layout process was running fine and production performance was not hurt, but this morning we saw major slowness and had to stop the fix layout process. We also mounted each brick read only on the clients which allowed the
20:44 aixsyd_ JoeJulian: nothing in client log. stat outputs correctly
20:44 aixsyd_ Stat ouput: http://fpaste.org/82742/05228613/
20:44 glusterbot Title: #82742 Fedora Project Pastebin (at fpaste.org)
20:44 cp0k JoeJulian: If I were to restart the original fix layout command, will it resume where it had originally left off? or start all over again?
20:46 cp0k JoeJulian: and if I were to instead issue the command with the two in one process of "fix layout and migrate data" will that cause any harm to gluster?
20:46 aixsyd_ dbruhn: yo! where ya been the last few days? :P
20:50 nightwalk joined #gluster
20:50 cp0k JoeJulian: when I say "two in one process to fix layout and migrate data" I mean this - http://gluster.org/community/documentation/index.php/Gluster_3.2:_Rebalancing_Volume_to_Fix_Layout_and_Migrate_Existing_Data
20:50 glusterbot Title: Gluster 3.2: Rebalancing Volume to Fix Layout and Migrate Existing Data - GlusterDocumentation (at gluster.org)
20:50 irctc720 joined #gluster
20:51 elyograg cp0k: from what I've seen, it will do the fix-layout from the beginning.  The migrate should only move things that need to be moved.  I hope you're on 3.4.x, there are some very bad bugs with rebalance in 3.3.x.
20:51 rpowell2 left #gluster
20:52 cp0k elyograg: yes, I recently upgraded to 3.4.2
20:53 aixsyd_ any news on 3.5?
20:53 cp0k elyograg: at this point my production env is stable again and I would like to restart the fix layout and migrate data asap since im down to about 1.4TB on each existing brick
20:54 cp0k elyograg: I'm just curious if I should restart the process via what I originally started - http://gluster.org/community/documentation/index.php/Gluster_3.2:_Rebalancing_Volume_to_Fix_Layout_Changes
20:54 glusterbot Title: Gluster 3.2: Rebalancing Volume to Fix Layout Changes - GlusterDocumentation (at gluster.org)
20:54 cp0k elyograg: or http://gluster.org/community/documentation/index.php/Gluster_3.2:_Rebalancing_Volume_to_Fix_Layout_and_Migrate_Existing_Data
20:54 glusterbot Title: Gluster 3.2: Rebalancing Volume to Fix Layout and Migrate Existing Data - GlusterDocumentation (at gluster.org)
20:57 elyograg doing a rebalance start will probably do a fix-layout even if you've already done one, so unless you just want to make sure that the fix-layout completes on its own, I don't see much reason to do it separately.
20:57 rpowell joined #gluster
21:01 cp0k I see the fix-layout start moving content to my new storage nodes
21:01 cp0k which tells me that it was doing fix-layout and data migration already in one command?
21:02 elyograg is it the fix-layout, or do you have processes writing data to the volume?  Because after fix-layout happens, any new data will end up on the new bricks.
21:02 elyograg well, some of it.  the data that hashes to those bricks.
21:03 kkeithley semiosis: ?
21:03 kris joined #gluster
21:03 cp0k I see, so basically fix-layout will update the mappings in .glusterfs/ data and then distribute that data to the new bricks accordingly?
21:03 semiosis kkeithley: on rpm distros (fedora, rhel, cent) does a localhost nfs mount of gluster-nfs work at boot time?
21:04 semiosis kkeithley: doekia was complaining this doesnt work on debian, and i'm inclined to say "
21:04 semiosis "NOTABUG"
21:04 elyograg yes.  it won't actually move existing data, you have to either start the migration or something has to rewrite the data.
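(For reference, the two variants being weighed, with a hypothetical volume name; status and stop apply to either.)
    gluster volume rebalance myvol fix-layout start   # layout only
    gluster volume rebalance myvol start              # fix layout and migrate data
    gluster volume rebalance myvol status
    gluster volume rebalance myvol stop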
21:04 dbruhn aixsyd_, sorry dude been dealing with buying a competitor
21:04 dbruhn I'll have some new gluster going in because of this!
21:04 aixsyd_ noiice
21:05 shapemaker joined #gluster
21:05 dbruhn I am talking about an SSD system potentially, about to do a single NFS server to test reliability first under my app
21:05 cp0k elyograg: I see, yet Im still not sure what path to take next
21:06 kkeithley IIRC, JoeJulian and I tried to get the right options into the glusterd.service file so that -o netdev would work. ISTR that it was working for me in VMs. JoeJulian didn't have the same results.
21:06 cp0k elyograg: should I re-run the fix-layout and let it finish this time ( provided there is no production performance hits like there was the first time around causing me to have to stop it) or do I fire up a brand new  fix-layout / migrate data
21:07 cp0k elyograg: if Gluster starts the whole process of crawling itself from scratch, I dont see any reason not to do fix layout and data migration at the same time
21:07 cp0k elyograg: do you agree?
21:07 aixsyd_ dbruhn: :) heading out! JoeJulian - I fixed it again with setfattr to all 0's. I think I'm good now. this problem cropped up when I didnt have a proper infiniband connection between the nodes, and all of my questions have been because of that. finally, i think its all okay.
21:07 aixsyd_ take care guys!
21:07 JoeJulian +1
21:09 semiosis kkeithley: talking about mount type nfs in fstab, not a glusterfs fuse client.  is that what you're talking about too?
21:10 semiosis JoeJulian: have an opinion on this?
21:10 kris joined #gluster
21:10 elyograg cp0k: I would just start the full migration, myself.  you can stop it either way if it becomes a problem.
21:10 kkeithley AFAIK _netdev is supposed to make it wait for the network to be up before trying to mount. I didn't think it mattered whether it was nfs or glusterfs
21:11 elyograg wow, doing rm -rf on an old mount point takes forever.  it's in .glusterfs and DRAGGING.  If I could just remake the whole filesystem, I would.
21:12 elyograg s/mount point/brick directory/
21:12 glusterbot What elyograg meant to say was: wow, doing rm -rf on an old brick directory takes forever.  it's in .glusterfs and DRAGGING.  If I could just remake the whole filesystem, I would.
21:12 JoeJulian Still on the phone...
21:12 andreask joined #gluster
21:13 cp0k elyograg: thanks
21:13 JoeJulian nfs happens during the same pass as netdev, iirc.
21:13 cp0k elyograg: yea unfortunately rm is crazy slow :(
21:18 JoeJulian Right... rc.sysinit skips mounting several known network filesystems (we should get that patched to include glusterfs) including nfs, regardless of whether or not netdev is set. Later the netfs init script mounts the network filesystems and _netdev.
21:21 farnosov joined #gluster
21:40 rpowell1 joined #gluster
21:42 sputnik13 joined #gluster
21:44 chirino joined #gluster
21:48 nightwalk joined #gluster
21:48 rpowell joined #gluster
21:52 rpowell1 joined #gluster
22:00 Matthaeus1 joined #gluster
22:11 zerick joined #gluster
22:14 chirino_m joined #gluster
22:22 diegows joined #gluster
22:24 rpowell joined #gluster
22:26 kris joined #gluster
22:32 nightwalk joined #gluster
22:35 tdasilva left #gluster
22:45 chirino joined #gluster
22:45 rpowell1 joined #gluster
22:50 kris joined #gluster
22:56 velladecin Gents, I've just encountered a strange problem. I have 4 x (1x1) replicated distributed volume. When I reboot any single server the complete gluster mount becomes unavailable on all servers. After the server boots back up the mount returns to normal. I can do basic 'info/status' operations but eg 'gluster vol heal <VOL> info' does not work etc. Anybody encountered this?
22:57 velladecin there's nothing in the logs and I'm sure that the loss of a single server should not affect the whole volume?!
22:58 junaid joined #gluster
22:59 gdubreui joined #gluster
23:03 elyograg velladecin: can you use a paste website (fpaste.org is a good choice) to share your 'gluster volume info' output?
23:03 elyograg glusterbot probably has a quick thing that can spit that out, can't remember what it is. ;)
23:03 elyograg ~paste
23:04 elyograg oh well.
23:07 JoeJulian @paste
23:07 glusterbot JoeJulian: For RPM based distros you can yum install fpaste, for debian and ubuntu it's pastebinit. Then you can easily pipe command output to [f] paste [binit] and it'll give you a URL.
23:09 ProT-0-TypE joined #gluster
23:23 nightwalk joined #gluster
23:28 elyograg that's useful, but I thought there was another one.
23:28 elyograg @info
23:28 glusterbot elyograg: Error: The command "info" is available in the Factoids, MessageParser, and RSS plugins. Please specify the plugin whose command you wish to call by using its name as a command before "info".
23:28 semiosis ,,(pasteinfo)
23:28 glusterbot Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
23:28 elyograg ah, there it is.
23:31 theron joined #gluster
23:33 doekia any trick to have glusterfs client mount ignore mount options not related to glusterfs? i.e: fsc
23:34 JoeJulian fuse can't use fsc anyway
23:35 zerick joined #gluster
23:37 elyograg is there a way to rifle quickly through a brick and determine whether anything has trusted.afr attributes, displaying filenames that do?
23:37 elyograg well, as quickly as xfs can do it, anyway.
23:39 glusterbot New news from newglusterbugs: [Bug 1073168] The Gluster Test Framework could use some initial sanity checks <https://bugzilla.redhat.com/show_bug.cgi?id=1073168>
23:40 JoeJulian elyograg: If I were doing it, I would use python.
23:41 JoeJulian If you're even lightly familiar with python, you can probably pick what you need out of
23:41 JoeJulian http://joejulian.name/blog/quick-and-dirty-python-script-to-check-the-dirty-status-of-files-in-a-glusterfs-brick/
23:41 glusterbot Title: Quick and dirty python script to check the dirty status of files in a GlusterFS brick (at joejulian.name)
23:41 elyograg I don't know python.  I'm good with perl, almost good with shell.
23:41 elyograg pretty decent at picking up new languages, though.
23:41 JoeJulian There's a million ways to do it in perl.
23:42 divbell only one right way
23:42 JoeJulian ... only because there's a million ways to do *anything* in perl.
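(A shell alternative to the linked Python script, with a hypothetical brick path; getfattr prints nothing for entries without matching attributes, and .glusterfs is pruned because its hardlinks share inodes with the real files.)
    find /bricks/b1 -path /bricks/b1/.glusterfs -prune -o -print0 |
        xargs -0 getfattr --absolute-names -m '^trusted\.afr\.' -d -e hex 2>/dev/null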
23:42 velladecin ok, will paste it now
23:44 elyograg working on a shell script I'm calling 'nukeafr' to use once I have a list of files.  http://fpaste.org/82807/39406305/
23:44 glusterbot Title: #82807 Fedora Project Pastebin (at fpaste.org)
23:47 elyograg already got an error.  good thing to use echo. :)
23:49 jbrooks Hey guys, I'm trying to remove a node from my cluster of three -- I did gluster peer detach $hostname-of-peer-I-want-to-remove -- but when I ran gluster peer status, I get: Connection failed
23:50 jbrooks Meanwhile, over on the node I meant to remove, gluster peer status shows my other two nodes...
23:50 * JoeJulian whacks jbrooks for improper use of the word "node". ;)
23:50 jbrooks :)
23:50 jbrooks Uh oh
23:51 elyograg JoeJulian: that python script you just linked ... does it operate on CWD?
23:51 JoeJulian jbrooks: is glusterd still running on all your servers?
23:51 elyograg or do I give it a path?
23:52 JoeJulian path. That's argv[1]
23:52 velladecin here is the fpaste link http://fpaste.org/82809/40635151/
23:52 glusterbot Title: #82809 Fedora Project Pastebin (at fpaste.org)
23:52 elyograg just as you said that, I noticed the 'usage' part. :)
23:52 jbrooks JoeJulian: yes it is
23:54 elyograg ooh, a dependency hunt.
23:54 elyograg ImportError: No module named xattr
23:54 JoeJulian pyxattr
23:55 JoeJulian jbrooks: without spending time to do a lot more debugging, I would probably restart all glusterd and see where that leaves you.
23:55 jbrooks JoeJulian: cool
23:59 elyograg JoeJulian: the script as-is doesn't seem to see a file that I know has trusted.afr attributes. should it?
23:59 cjanbanan joined #gluster
23:59 elyograg it's a dir ... does this by chance only look at files?
