IRC log for #gluster, 2013-10-10

All times shown according to UTC.

Time Nick Message
00:07 ninkotech_ joined #gluster
00:07 ninkotech joined #gluster
00:16 StarBeast joined #gluster
01:19 nueces joined #gluster
01:34 nueces_ joined #gluster
01:39 nueces__ joined #gluster
02:05 nasso joined #gluster
02:14 itisravi joined #gluster
02:16 harish joined #gluster
02:17 glusterbot New news from newglusterbugs: [Bug 986775] file snapshotting support <http://goo.gl/ozgmO>
02:28 bala joined #gluster
02:35 bharata-rao joined #gluster
02:41 luisp joined #gluster
02:44 vpshastry joined #gluster
02:48 mohankumar joined #gluster
02:55 kevein joined #gluster
02:56 rc10 joined #gluster
03:07 kshlm joined #gluster
03:16 lpabon joined #gluster
03:23 shylesh joined #gluster
03:23 vpshastry joined #gluster
03:26 vpshastry left #gluster
03:28 hagarth joined #gluster
03:30 shubhendu joined #gluster
03:34 raghu joined #gluster
03:36 asias joined #gluster
03:38 nueces__ joined #gluster
03:47 ppai joined #gluster
03:47 kanagaraj joined #gluster
03:51 itisravi joined #gluster
03:53 luisp joined #gluster
03:56 davinder joined #gluster
03:59 itisravi joined #gluster
04:08 sgowda joined #gluster
04:08 vpshastry joined #gluster
04:17 meghanam joined #gluster
04:17 meghanam_ joined #gluster
04:24 satheesh1 joined #gluster
04:29 kPb_in_ joined #gluster
04:29 RameshN joined #gluster
04:35 ndarshan joined #gluster
04:36 ninkotech_ joined #gluster
04:36 ninkotech joined #gluster
04:38 dusmant joined #gluster
04:39 satheesh joined #gluster
04:53 AliRezaTaleghani joined #gluster
05:16 ninkotech_ joined #gluster
05:16 ninkotech joined #gluster
05:22 bala joined #gluster
05:25 anands joined #gluster
05:28 atrius joined #gluster
05:32 rjoseph joined #gluster
05:32 aravindavk joined #gluster
05:33 micu2 joined #gluster
05:39 ndarshan joined #gluster
05:40 nshaikh joined #gluster
05:45 lalatenduM joined #gluster
05:46 rc10 joined #gluster
05:50 DV joined #gluster
06:06 rastar joined #gluster
06:07 ndarshan joined #gluster
06:16 mohankumar joined #gluster
06:16 psharma joined #gluster
06:23 kPb_in joined #gluster
06:24 sgowda joined #gluster
06:28 vimal joined #gluster
06:31 jtux joined #gluster
06:36 satheesh1 joined #gluster
06:40 tziOm joined #gluster
06:55 sgowda joined #gluster
06:55 AndreyGrebenniko joined #gluster
06:57 AndreyGrebenniko Hi there people! I'd like to start using Gluster in OpenStack. I understand almost all the concepts of how it works, but it is still unclear how to get the FS available on the compute nodes. Should I mount the clustered FS from a single gluster node? What happens then if that particular node dies?
07:00 eseyman joined #gluster
07:02 samppah AndreyGrebenniko: if you use native glusterfs client then it connects to all servers.. it only uses one server to fetch volume information
07:02 samppah there is a backupvolfile-server mount option with which you can specify another server for volume information
07:03 ekuric joined #gluster
07:03 andreask joined #gluster
07:05 AndreyGrebenniko samppah, does it mean that the gluster client always knows about all the servers and doesn't lose connection to the mountpoint if one server dies?
07:06 samppah AndreyGrebenniko: that's correct
07:06 keytab joined #gluster
07:07 samppah however, if one server dies the client will wait a short time for it to come back online
07:07 samppah during that time IO is halted
07:07 samppah it's 42 seconds by default and you can set network.ping-timeout value to reduce it
07:09 AndreyGrebenniko samppah, please tell me whether mount.glusterfs is the native client (sorry in advance), or are they different things?
07:09 samppah AndreyGrebenniko: mount.glusterfs is the native client
07:09 samppah glusterfs also has built in NFS server :)
07:11 AndreyGrebenniko samppah, well, in order to keep the client working in case of any failure in the cluster, I have to point backupvolfile-server at an array of all members, right?
07:13 samppah AndreyGrebenniko: in theory, yes.. another option is to use some kind of round robin DNS to fetch mount information
07:14 samppah but i don't know how necessary that is.. i like to fix dead servers as soon as possible, and since it's only used to fetch volume information for mounting i think that one server specified in backupvolfile-server is enough :)
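
For reference, the mount setup samppah describes might look roughly like this; the host names (gluster1/gluster2) and volume name (myvol) are made up, while backupvolfile-server and network.ping-timeout are the options discussed above:

    # Native FUSE mount: gluster1 only supplies the volume file, after which
    # the client talks to every brick server directly.
    mount -t glusterfs -o backupvolfile-server=gluster2 gluster1:/myvol /mnt/myvol

    # Optionally shorten the 42-second default wait for a dead server
    # (run on any server in the trusted pool).
    gluster volume set myvol network.ping-timeout 15
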
07:14 shyam joined #gluster
07:15 AndreyGrebenniko samppah, clear, thanks. Let me ask one more question. If I want to get an object server, I have to set up external swift and feed it from the glusterfs volume mounted on this server. Am I right?
07:15 davinder joined #gluster
07:16 samppah AndreyGrebenniko: sorry, i'm not familiar with object storage thingies in gluster :(
07:16 ricky-ticky joined #gluster
07:17 AndreyGrebenniko samppah, I mean the gluster engine doesn't have some kind of native object service
07:19 samppah AndreyGrebenniko: there is an object service in glusterfs but I'm not sure whether it's automatically available or requires extra configuration
07:20 rastar joined #gluster
07:23 ndarshan joined #gluster
07:24 bashtoni joined #gluster
07:41 Staples84 joined #gluster
07:46 sticky_afk joined #gluster
07:46 stickyboy joined #gluster
07:49 ngoswami joined #gluster
07:52 ndarshan joined #gluster
07:57 shyam left #gluster
08:10 mgebbe_ joined #gluster
08:14 rastar joined #gluster
08:15 shyam joined #gluster
08:20 tryggvil joined #gluster
08:24 StarBeast joined #gluster
08:30 ndarshan joined #gluster
08:42 dusmant joined #gluster
08:42 marcoceppi joined #gluster
08:48 raghu joined #gluster
08:51 ccha2 From a glusterfs client, I have a process which accesses a file. Then I had a network problem on the glusterfs server; even though the network is back and the volume is fine, the process is stuck
08:51 ccha2 I can't kill -9 this file
08:51 ccha2 this process
08:52 ccha2 lsof marks this file as deleted
08:52 ccha2 how can I kill this process without a reboot?
08:53 ctria joined #gluster
09:00 tryggvil joined #gluster
09:06 DV joined #gluster
09:09 msvbhat joined #gluster
09:13 mohankumar ccha2: killing glusterfs client is not allowed?
09:19 ndarshan joined #gluster
09:19 vshankar joined #gluster
09:22 satheesh joined #gluster
09:35 sgowda joined #gluster
09:38 satheesh joined #gluster
09:41 ndarshan joined #gluster
09:50 itisravi joined #gluster
09:51 ngoswami joined #gluster
09:52 shylesh joined #gluster
09:53 itisravi_ joined #gluster
09:53 rjoseph joined #gluster
09:54 spandit joined #gluster
10:12 dusmant joined #gluster
10:14 [o__o] joined #gluster
10:16 RameshN joined #gluster
10:21 glusterbot New news from newglusterbugs: [Bug 1005860] GlusterFS: Can't add a third brick to a volume - "Number of Bricks" is messed up <http://goo.gl/cPBrIK>
10:22 ccha2 mohankumar: I prefer not to stop the glusterfs client
10:23 ccha2 everything works fine, except these processes which are stuck and which I can't kill
10:23 ccha2 I tried fuser -k
10:23 ccha2 it doesn't work
10:23 ccha2 got these message "Cannot stat file /proc/3351/fd/5: Structure needs cleaning"
10:24 mohankumar ccha2: in my experience if i kill the gluster client then the process will be terminated
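
A minimal sketch of the workaround mohankumar suggests, assuming the stuck mount is /mnt/gluster and the volume is server1:/myvol (both placeholders):

    # Find the glusterfs client process that serves the mount point.
    ps aux | grep '[g]lusterfs.*/mnt/gluster'

    # Killing that client tears down the FUSE mount, so processes blocked
    # on it get an I/O error instead of hanging forever.
    kill <pid-of-glusterfs-client>

    # Clean up the dead mount point and remount.
    umount -l /mnt/gluster
    mount -t glusterfs server1:/myvol /mnt/gluster
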
10:27 sgowda joined #gluster
10:29 kkeithley1 joined #gluster
10:39 ngoswami joined #gluster
10:39 psharma joined #gluster
10:45 ndarshan joined #gluster
10:45 satheesh joined #gluster
10:46 rjoseph joined #gluster
10:49 spandit joined #gluster
10:55 raz joined #gluster
10:57 kPb_in_ joined #gluster
11:00 Shri825 joined #gluster
11:01 Shri825 Hi
11:01 glusterbot Shri825: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
11:01 Shri825 All
11:02 Shri825 I'm trying to create an openstack (devstack) cinder volume on a glusterfs mount point
11:03 Shri825 I could not find out how cinder create will use the glusterfs mount point for creating a volume on it
11:03 Shri825 I want to use glusterfs as the backend for cinder volumes
11:03 Shri825 Does anyone know how to use gluster+cinder?
11:03 Shri825 Thanks in advance!
11:07 lpabon joined #gluster
11:09 raghu joined #gluster
11:09 RicardoSSP joined #gluster
11:09 RicardoSSP joined #gluster
11:13 jtux joined #gluster
11:13 sgowda joined #gluster
11:21 glusterbot New news from newglusterbugs: [Bug 990028] enable gfid to path conversion <http://goo.gl/1HwiQc>
11:23 mooperd_ joined #gluster
11:24 ppai joined #gluster
11:34 vpshastry joined #gluster
11:40 satheesh1 joined #gluster
11:46 rastar joined #gluster
11:47 rgustafs joined #gluster
11:51 bashtoni I've got a dir for which the logs are telling me several times a second 'background  meta-data self-heal completed' - why would this be?
11:59 Shri825 Anyone tried glusterfs+openstack-cinder volume ?
12:00 dusmant joined #gluster
12:03 ndarshan joined #gluster
12:12 andreask joined #gluster
12:12 kopke joined #gluster
12:19 rastar joined #gluster
12:36 shyam left #gluster
12:37 tryggvil joined #gluster
12:38 dtyarnell shri825: yes it works well
12:38 dtyarnell i have done testing on havana(RDO) + gluster 3.4.0
12:39 dtyarnell http://www.gluster.org/community/documentation/index.php/GlusterFS_Cinder
12:39 glusterbot <http://goo.gl/Q9CVV> (at www.gluster.org)
12:40 dtyarnell you will need to be able to mount via fuse the glusterfs volume on both your cinder-volume-server host and any nova-compute host
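
The linked wiki page has the full walkthrough; the core of it, as a hedged sketch (file paths follow the usual cinder layout, the gluster host and volume names are invented), is to switch cinder-volume to the GlusterFS driver and list the shares it should mount:

    # /etc/cinder/cinder.conf
    volume_driver = cinder.volume.drivers.glusterfs.GlusterfsDriver
    glusterfs_shares_config = /etc/cinder/glusterfs_shares

    # /etc/cinder/glusterfs_shares -- one gluster volume per line
    gluster1:/cinder-volumes

    # Restart cinder-volume so it mounts the share (service name varies by distro/devstack).
    service openstack-cinder-volume restart
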
12:43 abradley joined #gluster
12:55 vshankar_ joined #gluster
12:57 B21956 joined #gluster
12:58 Shri825 dtyarnell: I'm trying gluster+cinder on the same Fedora 19 system
12:59 Shri825 dtyarnell: glusterfs is mounted via fuse; now I want to use that mount point for creating cinder volumes
13:03 B21956 left #gluster
13:03 bennyturns joined #gluster
13:04 spandit joined #gluster
13:14 dtyarnell joined #gluster
13:14 rwheeler joined #gluster
13:15 vpshastry joined #gluster
13:17 ndk joined #gluster
13:19 _Bryan_ joined #gluster
13:23 vpshastry left #gluster
13:28 raz joined #gluster
13:28 raz joined #gluster
13:35 inodb joined #gluster
13:35 duerF joined #gluster
13:36 edward2 joined #gluster
13:40 jclift joined #gluster
13:47 andreask joined #gluster
13:48 itisravi_ joined #gluster
13:49 aliguori joined #gluster
13:49 failshell joined #gluster
13:51 jdarcy joined #gluster
13:52 aravindavk joined #gluster
13:58 saurabh joined #gluster
14:01 vshankar joined #gluster
14:04 johanfi joined #gluster
14:06 johanfi could one use an HP Moonshot system with glusterfs? pros/cons?
14:06 johanfi or are there alternative servers that are more optimal
14:12 kkeithley_ In a few months the first aarch64 iron will start to ship. You can certainly use 32-bit iron in the mean time. You'll have to tell us how it goes.
14:13 premera_j_n joined #gluster
14:13 Norky should I ask here about RHSS, or should I try in #rhel?
14:15 msciciel joined #gluster
14:16 bugs_ joined #gluster
14:17 msciciel hi, after one minute of network downtime, the gluster processes for one of the volumes started eating all cpu cores (it happens on only 3 of the 6 bricks). The volume is distributed-replicate, 3 x replica 2. I'm using glusterfs-3.3.0.7rhs-1.el6.x86_64. Debugging with strace showed a huge number of lstat calls. Any idea?
14:22 mistich1 anyone know the best way to watch the throughput of gluster
14:22 kkeithley_ Norky, I doubt there are many people in #rhel who will know about RHS or gluster. If you have an RHS license you can open a GSS ticket to get help.
14:23 l0uis mistich1: check out https://github.com/pcuzner/gluster-monitor, in particular gtop
14:28 nshaikh left #gluster
14:35 kkeithley_ Norky: And you can ask here too
14:38 itisravi_ joined #gluster
14:40 msciciel joined #gluster
14:43 Norky actually, don't worry, I think it's covered sufficiently in the docs
14:43 Norky but ty
14:43 jag3773 joined #gluster
14:44 rc10 joined #gluster
14:51 sprachgenerator joined #gluster
14:53 hflai joined #gluster
14:59 ababu joined #gluster
15:07 lpabon joined #gluster
15:07 phox joined #gluster
15:09 dbruhn I have a 20 brick system and two of my bricks have filled themselves up, is there something I can do to rebalance the system some?
15:12 Alpinist joined #gluster
15:13 Norky well there is the gluster "rebalance" command, have you run that?
15:14 dbruhn I wasn't sure if that was only for when you were adding new nodes, or making configuration changes
15:15 ekuric joined #gluster
15:16 LoudNoises joined #gluster
15:18 zaitcev joined #gluster
15:22 Norky it might help and it wont hurt
15:23 a1 joined #gluster
15:23 kkeithley_ correct, rebalance won't do anything until you add more bricks. It's not fatal to fill up bricks, gluster will write new files on other bricks and write a pointer to the brick where the file really is.
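
For completeness, the rebalance Norky mentions is run per volume; a sketch with a placeholder volume name:

    # Migrate existing files so data is spread more evenly across bricks.
    gluster volume rebalance myvol start

    # Watch progress until it reports completed.
    gluster volume rebalance myvol status
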
15:23 Norky hmm, really?
15:24 Norky I had a problem whereby some bricks were full while others were not, and the FUSE-mounted clients were getting ENOSPACE
15:25 kkeithley_ that's the theory, yes. ;-) What version were you getting ENOSPACE with?
15:26 Norky RHSS 2.0 so GlsuterFS 3.3.0
15:29 Norky I simply accepted it as intended behaviour and got the users to tidy up some files
15:31 dbruhn I have tried to join the user group mailing list twice now and never gotten approval, anyone in here that can make that happen?
15:40 ctria joined #gluster
15:42 kkeithley_ Norky: I'd open a support ticket. That should not happen and if it isn't already fixed, that's something that really ought to be fixed.
15:49 Norky if I can replicate it once I've upgraded to RHSS 2.1 I will
15:50 anands joined #gluster
15:52 go2k joined #gluster
15:54 go2k hi, have any of you ever seen such behaviour when creating a volume from scratch? :
15:54 go2k volume create: volgluster: failed
15:54 go2k (and no errors in the error log)
15:56 jclift dbruhn: With the user group mailing list, you shouldn't need approval.  You just subscribe, and you "should be good".
15:56 jclift dbruhn: Actually, I
15:57 jclift 'll check the member list, and see if you're there.  Gimme a sec
15:58 dbruhn Every time I email it, it says I need to be a member and my messages get queued
15:59 al joined #gluster
15:59 dbruhn I am using a different email address than the one you have for me jclift
15:59 jclift dbruhn: And you've subscribed through the UI?
16:00 itisravi_ joined #gluster
16:01 SpeeR joined #gluster
16:09 bala joined #gluster
16:13 aliguori joined #gluster
16:18 RameshN joined #gluster
16:22 glusterbot New news from newglusterbugs: [Bug 1017868] Gluster website needs better instructions for mailing list page <http://goo.gl/62znmD> || [Bug 969461] RFE: Quota fixes <http://goo.gl/XFSM4>
16:24 harish_ joined #gluster
16:26 Mo__ joined #gluster
16:26 harish_ joined #gluster
16:27 sprachgenerator joined #gluster
16:37 satheesh1 joined #gluster
16:43 harish_ joined #gluster
16:48 johnmark dbruhn: hey - I could have sworn you were subscribed. did you figure it out?
16:49 vpshastry joined #gluster
17:18 RameshN joined #gluster
17:20 Technicool joined #gluster
17:41 partner joined #gluster
17:44 partner E [glusterfsd-mgmt.c:1559:mgmt_getspec_cbk] 0-glusterfs: failed to get the 'volume file' from server
17:44 partner i guess 3.4.0 server does not work with 3.3.1 client after all?
17:45 partner tried to google around a bit but did not yet find any obvious reason or fix for that..
17:46 partner maybe faq already here, pardon me
17:47 partner gluster servers debian wheezy, client debian squeeze on this case
17:49 jclift johnmark: He's subscribed now.
17:52 mtanner joined #gluster
18:10 kanagaraj joined #gluster
18:12 andreask joined #gluster
18:13 semiosis partner: ,,(3.4 upgrade notes)
18:13 glusterbot partner: http://goo.gl/SXX7P
18:15 H__ left #gluster
18:21 jclift joined #gluster
18:25 partner semiosis: is there any particular part of that you are referring to? i read "is compatible with 3.3.x (yes, you read it right)" but fail to find a solution to the issue
18:26 partner what i have is a brand new freshly installed 3.4.0 gluster to which i'm trying to connect with 3.3.1 client (no packages available for squeeze anymore for some reason)
18:26 semiosis no one asked for squeeze packages, is the reason
18:27 partner oh
18:27 semiosis i'll see if i can build for squeeze.  if it goes well i'll upload a squeeze repo
18:27 partner i did once at least, squeeze is not that old anyways and we have some 300+ of those out there, can't upgrade just like that
18:27 semiosis kinda old though
18:27 semiosis heh ok
18:27 semiosis i'll see what i can do
18:27 semiosis and btw 3.4.1 is out now
18:28 partner not that i need gluster on all of them but you probably understand its not that easy to go out and upgrade everything in one go. i wish it was but its not :)
18:28 semiosis pastie your full client log
18:28 partner sure, please remind me about the proper pastebin, fpaste or something? can't remember the bot command
18:29 semiosis ,,(paste)
18:29 glusterbot For RPM based distros you can yum install fpaste, for debian and ubuntu it's pastebinit. Then you can easily pipe command output to [f] paste [binit] and it'll give you a URL.
18:29 semiosis pastie.org is nice
18:29 partner thanks
18:29 semiosis i dont really care
18:33 partner http://pastie.org/8393039
18:33 glusterbot Title: #8393039 - Pastie (at pastie.org)
18:36 semiosis partner: well nothing jumps out at me
18:36 partner oh, i managed to miss first line..
18:36 partner [2013-10-10 15:57:33.441114] I [glusterfsd.c:1666:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.3.1
18:36 semiosis i've never tried mixing these versions
18:37 semiosis that line is in your pastie
18:37 partner this is the first time i'm attempting too, basically due to i don't have any more fresh package at hand for squeeze. i haven't checked out yet if there are still source and dsc files available so i could build easily myself
18:37 partner oh, doublefail then :)
18:37 davinder joined #gluster
18:37 partner its half past nine here already but need to work according to your timezone :)
18:39 partner this was set up today for a demo tomorrow, i assumed (according to all docs around) they work together just fine but no.
18:41 bronaugh joined #gluster
18:41 bronaugh uh guys. gluster 3.4.
18:41 bronaugh I try to copy a directory, and it fails.
18:41 bronaugh depends on whether the file has previously been opened. if previously opened, it's fine. if not, it's not.
18:42 bronaugh seems to be a race.
18:42 bronaugh so yeah. PLZ TO FIX YOUR SHIT.
18:43 johnmark bronaugh: ha. did you file a bug?
18:43 glusterbot http://goo.gl/UUuCq
18:44 bronaugh to be honest, I think I'm about done.
18:44 abradley joined #gluster
18:45 bronaugh the gluster crew shipped out a release that is pretty fundamentally broken with our configuration. that's pretty egregious.
18:46 partner may i ask what your setup is? i just put up my first 3.4.0 setup - just to know what i might bump into
18:46 bronaugh for i in `find . -name "*"`; do sleep 0.1; grep "foo" $i; done
18:46 bronaugh succeeds.
18:46 bronaugh greep -r "foo" <dir>
18:46 bronaugh fails.
18:46 bronaugh grep*
18:47 bronaugh it's a race.
18:48 shane_ fails = hangs?
18:48 bronaugh cp: cannot open `climdex.pcic/.Rbuildignore' for reading: No such file or directory
18:48 bronaugh for example.
18:50 zerick joined #gluster
18:50 dbruhn bronaugh are you seeing errors in your logs?
18:50 kkeithley_ curiously enough we haven't heard about anything like this from anyone else. Maybe you could _nicely_ tell us, e.g. what Linux you're on, which kernel, whether you built gluster from source yourself or where you got your rpms or .debs from, etc.
18:51 bronaugh dbruhn: nothing in the brick log.
18:51 dbruhn I would assume it would be in the client logs
18:51 bronaugh eg: /var/log/glusterfs/datasets-projects.log ?
18:52 bronaugh kkeithley_: Linux atlas 3.9.1 #3 SMP Wed May 8 11:18:15 PDT 2013 x86_64 GNU/Linux
18:52 dbruhn do you have one at /var/log/glusterfs/mnt-*.log?
18:52 bronaugh kkeithley_: deb packages, from http://download.gluster.org/pub/gluster/glusterfs/3.4/LATEST/Debian/apt
18:53 glusterbot <http://goo.gl/ge3urI> (at download.gluster.org)
18:53 bronaugh dbruhn: that's that log; nothing in it.
18:54 bronaugh kkeithley_: this is sitting on top of zfs.
18:54 bronaugh zfsonlinux 0.6.1-1
18:54 dbruhn what about the glistered log
18:54 dbruhn glusterhd
18:55 vpshastry left #gluster
18:55 bronaugh (packages from zfsonlinux.org)
18:55 bronaugh dbruhn: no such log file.
18:56 nullck joined #gluster
18:56 bronaugh http://pastebin.mozilla.org/3231969
18:56 glusterbot Title: Mozilla Pastebin - collaborative debugging tool (at pastebin.mozilla.org)
18:56 bronaugh listing of log dir.
18:58 bronaugh I've checked every log and there's nothing even semi-relevant in any of them (ie: timestamps for last events are 12 hours ago or more)
18:58 partner semiosis: it would be enough just to get the dsc and sources & stuff and i can easily build myself. can't use jessie version of 3.4.1 due to multiarch dependency
18:59 RameshN joined #gluster
18:59 partner i recall last time jeff uploaded them somewhere for 3.3.1 or so but can't find any of that stuff right now
18:59 partner (might have been someone else too, pardon me if remember incorrectly)
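
If the 3.4.x source package (dsc + tarballs) does turn up, a rebuild on squeeze would look roughly like this; the dsc URL is deliberately left as a placeholder and dget comes from the devscripts package:

    # Fetch the Debian source package and unpack it.
    dget <url-to-glusterfs_3.4.x.dsc>
    cd glusterfs-3.4.*

    # Install build dependencies (needs a deb-src line; otherwise install
    # the Build-Depends listed in debian/control by hand), then build.
    sudo apt-get build-dep glusterfs
    dpkg-buildpackage -us -uc
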
19:00 johnmark bronaugh: so... we've never stated support for zfs on linux
19:00 johnmark in fact, if you're using the fuse-loaded ZFS, that would be a huge perf suck
19:01 bronaugh 11:54 < bronaugh> zfsonlinux 0.6.1-1
19:01 johnmark bronaugh: but there are reports of it wokring "in the wild"
19:01 bronaugh furthermore, gluster depends on extended attributes and pretty much nothing else.
19:01 johnmark bronaugh: one of the zfs on linux projects works via vuse
19:01 johnmark er fuse
19:01 johnmark and the other loads hte zfs module directly into the kernel
19:02 bronaugh so unless zfsonlinux's extended attributes are completely broken (they'd really have to be) then this is a gluster problem, and the filesystem is not relevant
19:02 johnmark bronaugh: I forget which is which, but suffice it to say, you're in murky terrirotry
19:02 johnmark er territory
19:02 johnmark bronaugh: what I do know is that there are reports "in the wild" of zfs working with GlusterFS
19:02 johnmark but I've never tried it, personally
19:02 l0uis I am no zfs/gluster expert, but the folks in channel who've reported success w/ zfs have all reported having to tweak several things to make it work well. At least that is what I remember from my last conversation w/ one of them.
19:02 johnmark l0uis: correct
19:02 l0uis i remember specifically something about xattr=sa or something.
19:03 johnmark I think there might be a howto on that
19:03 * johnmark checks
19:03 l0uis TBH, Unless you're running a really, really, really large cluster, why risk it w/ zfs when you know 99% of all gluster workloads are xfs backed.
19:03 bronaugh already using xattr=sa
19:03 johnmark http://www.gluster.org/community/documentation/index.php/GlusterOnZFS
19:03 glusterbot <http://goo.gl/BG4Bv> (at www.gluster.org)
19:03 l0uis anyway, I'll shaddup now.
19:03 johnmark bronaugh: did you see that?
19:04 bronaugh uh, why the hell would you disable the ZIL?
19:05 johnmark bronaugh: don't ask me :) that's a doc written up by a guy who's used them together
19:05 bronaugh there's no justification for that in the documentation that's written -- ie, magic.
19:06 bronaugh anyhow, I'm going for lunch. after lunch, I'm going to look at possible race conditions with zfs and xattrs because that's the only thing I can think of that might be happening here. I'll also strace glusterd to see whether any particular system calls fail.
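
For anyone following along, the ZFS-side tweak mentioned above boils down to one property per brick dataset (the pool/dataset name here is made up); gluster stores its metadata in xattrs, and xattr=sa keeps them in the dnode instead of in hidden directory-based storage:

    # Store extended attributes as system attributes on the brick dataset.
    zfs set xattr=sa tank/brick1

    # Verify the setting.
    zfs get xattr tank/brick1
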
19:24 H___ joined #gluster
19:24 H__ joined #gluster
19:25 RameshN joined #gluster
19:29 quique_ joined #gluster
19:45 dtyarnell joined #gluster
19:47 partner hmm, any idea if 3.4.1 would bring any fixes for this 3.3.1 connectivity issue?
19:56 bronaugh ok, so this is fun. grep -r "foo" <dir>
19:56 bronaugh repeatedly says "no such file or directory"
19:57 bronaugh stracing reports that it can't find files in .glusterfs/<uuid>
19:57 bronaugh uuidpath; whatever.
19:59 bronaugh (says "no such file or dir" for files within the dir)
19:59 tryggvil joined #gluster
20:00 johnmark bronaugh: doh
20:00 johnmark bronaugh: it appears that we need a zfsonlinux domain expert to explain whatever differences are there
20:01 johnmark because it's clear there are some differences wrt configuration
20:01 johnmark not sure why you're running into them where others haven't, but something isn't right
20:06 purpleidea fwiw i know that theron is using gluster on zfs.
20:07 bronaugh johnmark: the files don't actually exist; so it's unlikely to be ZFS related.
20:11 B21956 joined #gluster
20:16 quique_ how do i find out which ips i have allowed?
20:19 shane_ gluster peer status?
20:20 purpleidea quique_: what do you mean ?
20:20 quique_ I'm troubleshooting and i want to check if i have allowed only certain ips
20:21 quique_ with auth.allow
20:21 quique_ where is that info?
20:23 glusterbot New news from newglusterbugs: [Bug 1017176] Until RDMA handling is improved, we should output a warning when using RDMA volumes <http://goo.gl/viLDZq>
20:26 shane_ quique_: have you tried looking in the vol files? http://gluster.org/pipermail/gluster-users/2011-April/007429.html
20:26 glusterbot <http://goo.gl/lYHy53> (at gluster.org)
20:29 aliguori joined #gluster
20:29 quique_ shane_: i'll try that
20:30 quique_ so when mounting i get this on the client: http://pastebin.com/NfsJhJZs and this on the server: http://pastebin.com/AEhgqmFc
20:30 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
20:30 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
20:31 semiosis quique_: gluster volume info
20:32 quique_ semiosis: doesn't say anything about allowed, i assume that means I haven't limited it?
20:32 semiosis sounds right
20:33 semiosis those pastes dont make any sense to me... maybe if you pasted more lines of context, idk
20:34 shane_ semiosis: ah, so any non default options will show up under "Options Reconfigured" in volume info?
20:34 semiosis shane_: yep
20:34 shane_ good to know, thanks
20:34 semiosis even if they're changed back to default values
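
To make the auth.allow side of this concrete, a sketch (volume name and addresses are hypothetical); once set, the option shows up under "Options Reconfigured" in gluster volume info:

    # Restrict the volume to specific client addresses.
    gluster volume set myvol auth.allow 192.168.1.10,192.168.1.11

    # Back to the default, which allows everyone.
    gluster volume set myvol auth.allow '*'
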
20:35 quique_ semiosis: http://fpaste.org/45909/13814373/ ,_server output
20:35 glusterbot Title: #45909 Fedora Project Pastebin (at fpaste.org)
20:35 al joined #gluster
20:35 semiosis quique_: what version of glusterfs is this?  what distro?
20:36 quique_ centos 6
20:36 quique_ gluster 3.4
20:36 quique_ 3.4.0
20:37 quique_ the client paste was the whole thing from:  mount -t glusterfs -o log-level=WARNING,log-file=/var/log/gluster.log server1:/test-volume /mnt/glusterfs
20:37 semiosis maybe if you didn't set log-level, leaving it at the default (info), we'd get more useful stuff
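
In other words, something along these lines, reusing the server and volume from quique_'s command; log-level=DEBUG is the noisier alternative if the default INFO output still isn't enough:

    # Mount with the default log level (INFO); the client log ends up
    # under /var/log/glusterfs/ unless log-file is overridden.
    mount -t glusterfs server1:/test-volume /mnt/glusterfs

    # Or turn the verbosity up while debugging.
    mount -t glusterfs -o log-level=DEBUG server1:/test-volume /mnt/glusterfs
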
20:44 quique_ ok figured it out
20:44 quique_ the client was running gluster 3.2.7
20:44 quique_ uninstalled
20:45 semiosis nice
20:45 quique_ and put the gluster repo in
20:45 quique_ and installed 3.4
20:45 quique_ worked
20:45 quique_ thanks
20:52 partner is it just me or is the download site timing out?
20:52 semiosis http://downforeveryoneorjustme.com/download.gluster.org/pub/
20:52 glusterbot <http://goo.gl/x81dxo> (at downforeveryoneorjustme.com)
20:52 semiosis it's just you
20:54 partner This web site has been reported as harmful.
20:54 partner We recommend that you do not visit this web site.
20:55 partner :)
20:55 partner for the above link, not for the download..
20:55 partner but i guess its just me then
20:55 semiosis heh
20:55 semiosis weird
20:56 bronaugh kkeithley_: ok, so question for you. what testing do you do for the use case of "I want to create a storage brick out of an existing filesystem containing files"?
20:57 partner ohwell, i give up, these just don't work together, i was trying to find some release notes and many links pointed to download-site
20:57 partner nvm, github worked
20:59 partner it also says they should work.. there are no 3.4 specific options used ("If a volume option that corresponds to 3.4 is enabled, then 3.3 clients cannot mount the volume.")
20:59 partner also suspected the port change might make a difference (firewall in between) but can't find a single dropped packet..
21:00 partner oh, there is probably another one in between, too.. better check it too
21:02 quique_ semiosis: so i have /mnt/gluster and a gluster volume is mounted on there with just mount -t glusterfs server:/mount /mnt/gluster, no options given.  the perms of /mnt/gluster are 700.  that wasn't right, so i changed them to 755, and after a bit they changed back on their own to 700.  mtab shows: rw,default_permissions,allow_other,max_read=131072, could it be the default_permissions changing it back?
21:03 tryggvil joined #gluster
21:07 partner nope..
21:10 partner ok, i give up, they just don't work together, that also pretty much screws up my planned upgrade path
21:13 purpleidea quique_: sorry i was afk. in the volume info (eg: gluster volume info foobar) you'll see all the params that are not set to defaults at the bottom, eg: http://ur1.ca/fvahn
21:13 glusterbot Title: #45930 Fedora Project Pastebin (at ur1.ca)
21:13 semiosis quique_: not sure about that.  perms for the mount should be set by the bricks
21:14 quique_ semiosis: how? just whatever they are on the bricks will be what they are?
21:15 quique_ purpleidea: thanks for the visual
21:16 failshell im looking at the upgrade path from 3.3.0 to 3.4.0. it seems we need to bring down the entire cluster, unmount all the clients then upgrade.
21:16 failshell anyone can comment on that?
21:18 purpleidea quique_: np
21:19 purpleidea quique_: i always plug @puppet which has an interface to manage all of these btw: https://github.com/purpleidea/puppet-gluster/blob/master/manifests/volume/property/data.pp
21:19 glusterbot <http://goo.gl/UKQeYP> (at github.com)
21:19 purpleidea quique_: i always plug ,,(puppet) which has an interface to manage all of these btw: https://github.com/purpleidea/puppet-gluster/blob/master/manifests/volume/property/data.pp
21:19 glusterbot <http://goo.gl/UKQeYP> (at github.com)
21:19 glusterbot quique_: (#1) https://github.com/purpleidea/puppet-gluster, or (#2) semiosis' unmaintained puppet module: https://github.com/semiosis/puppet-gluster
21:20 purpleidea thanks glusterbot, you're cool
21:23 quique_ @purpleidea cool, is that module functional?
21:30 purpleidea quique_: yep!
21:30 purpleidea quique_: there are a few edge cases i haven't test recently, but if you have problems, let me know so i can patch them.
21:30 purpleidea s/test/tested
21:32 purpleidea quique_: docs and examples could also use a bit of updating :P
21:37 semiosis quique_: for example, say i start with a new volume, where brick paths are owned by root with 0644 perms, then i mount the volume on a client and chmod the mountpoint 0777.  now the brick dirs all change to 0777 and that is visible on all other clients too
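
A quick way to see what semiosis describes (mount point and brick path are illustrative):

    # On a client: change the mode of the volume root through the mount point.
    chmod 0777 /mnt/myvol

    # On any server: the brick directory now carries the new mode, and every
    # other client sees the change as well.
    stat -c '%a %n' /export/brick1
    # -> 777 /export/brick1
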
21:45 bronaugh ok, so it's definitely a readdir() call immediately followed by a read of a file that came from the readdir call that causes this.
21:45 bronaugh kkeithley_, dbruhn, johnmark - got that?
21:47 kkeithley_ yup, got it
21:49 bronaugh put in a 1s delay, no problem.
21:49 bronaugh no delay, problem.
21:54 quique_ semiosis: ok it was puppet on my gluster nodes changing the perms on the bricks directly
21:54 semiosis :)
21:55 * semiosis shakes fist at puppet
21:55 kkeithley_ bronaugh: file a (,,bugreport)
21:55 semiosis kkeithley_: just file a bug
21:55 glusterbot http://goo.gl/UUuCq
21:55 bronaugh kkeithley_: it's obviously specific to this configuration if it's not reproducible, and I presume that something like this would have come out in testing.
21:55 bronaugh so it's something spehshul.
21:56 kkeithley_ you should not presume that we test zfs
21:56 kkeithley_ test with zfs
21:56 bronaugh I've straced glusterfsd
21:56 bronaugh it makes a different sequence of system calls.
21:56 bronaugh depending on which path is taken.
21:57 bronaugh which strongly suggests, considering the results of those calls are the same, that this is glusterfs, not zfs.
21:58 semiosis bronaugh: just a wild shot in the dark here, but you may want to disable some performance xlators
21:58 bronaugh semiosis: might help to narrow it down, yeah.
21:58 semiosis see ,,(undocumented options) in addition to output of 'gluster volume set help' for ideas
21:58 glusterbot Undocumented options for 3.4: http://goo.gl/Lkekw
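
The sort of thing semiosis means by disabling performance xlators, sketched with an invented volume name; which translators are worth toggling depends on the symptom:

    # List every settable option with a short description.
    gluster volume set help

    # Temporarily switch off some performance translators while debugging.
    gluster volume set myvol performance.quick-read off
    gluster volume set myvol performance.io-cache off
    gluster volume set myvol performance.stat-prefetch off
    gluster volume set myvol performance.read-ahead off
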
22:01 phox semiosis: I've never seen documentation on how to find out what the default behaviours of all of those and of the documented ones are
22:01 semiosis can't see what doesnt exist ;)
22:02 phox but there must be a value its using in lieu of a configured one.
22:02 phox e.g. ZFS shows you settings and has - as 'source' if it wasn't set by anyone
22:02 semiosis clearly, it's in the source, though maybe not docs
22:02 phox it's the sort of thing that should be queryable :P
22:06 bronaugh right; so this sounds like the result of READDIRPLUS being cached to avoid repeated lstat() calls.
22:06 bronaugh and that data being, for whatever reason, problematic.
22:06 bronaugh kkeithley_: any ideas why that could happen? or do you know who -could- answer that?
22:07 bronaugh semiosis: you too :)
22:11 kkeithley_ Most of the developers are probably asleep, it's about 3:30 am in Bangalore right now.  The rest are here and free to weigh in if they wish
22:12 bronaugh ok; thanks.
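
One way to test bronaugh's readdir-then-open hypothesis from the client side (the mount path is illustrative, the directory name is the one from the failing cp above): trace which syscall actually returns ENOENT when the grep runs against the FUSE mount.

    # Watch directory-listing and file-open calls during the failing run.
    strace -f -e trace=getdents,getdents64,open,openat \
        grep -r "foo" /mnt/myvol/climdex.pcic 2>&1 | grep -E 'ENOENT|getdents'
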
22:13 mooperd_ joined #gluster
22:14 phox yeah, it's the night shift, they're making Joe Fresh clothing right now.
22:47 StarBeast joined #gluster
23:00 mtanner joined #gluster
23:08 andreask joined #gluster
23:09 dtyarnell joined #gluster
23:11 andreask joined #gluster
23:20 Technicool joined #gluster
23:54 glusterbot New news from newglusterbugs: [Bug 1017993] gluster processes call call_bail() at high frequency resulting in high CPU utilization <http://goo.gl/eWncSv>
23:57 StarBeast joined #gluster
