IRC log for #gluster, 2014-05-09


All times shown according to UTC.

Time Nick Message
00:17 MeatMuppet left #gluster
00:19 badone joined #gluster
00:21 bala joined #gluster
00:21 mtanner joined #gluster
00:21 yinyin joined #gluster
00:27 naveed joined #gluster
00:28 jag3773 joined #gluster
00:34 ira joined #gluster
00:44 Honghui joined #gluster
00:48 jcsp joined #gluster
00:49 ninkotech_ joined #gluster
00:52 Ark joined #gluster
01:08 ndevos cvdyoung: you can set storage.owner-uid and storage.owner-gid for the volume, but I thought that the reset of owner/groups was fixed a while ago already...
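Those two settings are ordinary volume options; a minimal sketch of setting them, assuming a hypothetical volume name myvol and example uid/gid 36:

    # set the uid/gid that the volume root should carry
    gluster volume set myvol storage.owner-uid 36
    gluster volume set myvol storage.owner-gid 36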
01:14 badone joined #gluster
01:15 coredump joined #gluster
01:17 ndevos okay, that is in fact bug 1040275, fixed in master and 3.4...
01:17 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1040275 high, high, ---, vbellur, CLOSED CURRENTRELEASE, Stopping/Starting a Gluster volume resets ownership
01:19 ninkotech_ joined #gluster
01:21 ninkotech__ joined #gluster
01:33 ninkotech__ joined #gluster
01:34 DV joined #gluster
01:39 sjm joined #gluster
01:42 chirino joined #gluster
01:45 ninkotech_ joined #gluster
01:50 ndevos JoeJulian, ernetas, cvdyoung: bug 1095971 will be used to track the fix for that issue, 3.5.1 will contain the patch that addresses it
01:50 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1095971 high, high, ---, ndevos, POST, Stopping/Starting a Gluster volume resets ownership
01:50 ninkotech__ joined #gluster
01:52 gmcwhistler joined #gluster
01:54 purpleidea joined #gluster
01:54 purpleidea joined #gluster
01:55 ninkotech_ joined #gluster
02:00 Honghui joined #gluster
02:01 hagarth joined #gluster
02:07 ninkotech_ joined #gluster
02:12 gdubreui joined #gluster
02:13 badone joined #gluster
02:13 chirino_m joined #gluster
02:13 ninkotech_ joined #gluster
02:18 DV joined #gluster
02:35 ninkotech_ joined #gluster
02:35 haomaiwang joined #gluster
02:36 bharata-rao joined #gluster
02:37 harish joined #gluster
02:40 ninkotech_ joined #gluster
02:50 ira joined #gluster
02:52 ceiphas_ joined #gluster
02:57 badone joined #gluster
02:59 Honghui joined #gluster
03:00 DV joined #gluster
03:04 ninkotech__ joined #gluster
03:04 hchiramm_ joined #gluster
03:07 ira joined #gluster
03:11 shubhendu joined #gluster
03:21 ira joined #gluster
03:22 naveed joined #gluster
03:32 itisravi joined #gluster
03:32 RameshN joined #gluster
03:32 davinder joined #gluster
03:32 kanagaraj joined #gluster
03:51 nishanth joined #gluster
04:12 rejy joined #gluster
04:15 Ark joined #gluster
04:26 ndarshan joined #gluster
04:26 mohan__ joined #gluster
04:36 nshaikh joined #gluster
04:38 lalatenduM joined #gluster
04:40 atinmu joined #gluster
04:45 gdubreui purpleidea, ping - This is a ring-ring
04:45 deepakcs joined #gluster
04:52 Honghui joined #gluster
04:56 bala joined #gluster
05:01 aviksil joined #gluster
05:06 ppai joined #gluster
05:13 kumar joined #gluster
05:13 kdhananjay joined #gluster
05:22 prasanthp joined #gluster
05:31 lalatenduM joined #gluster
05:34 vpshastry joined #gluster
05:37 kanagaraj joined #gluster
05:44 dusmant joined #gluster
05:46 raghu joined #gluster
05:48 psharma joined #gluster
05:51 rjoseph joined #gluster
05:53 Philambdo joined #gluster
05:54 surabhi joined #gluster
06:10 meghanam joined #gluster
06:10 meghanam_ joined #gluster
06:12 nueces joined #gluster
06:12 nshaikh joined #gluster
06:14 ramteid joined #gluster
06:20 rahulcs joined #gluster
06:23 kanagaraj joined #gluster
06:24 dusmant joined #gluster
06:24 Honghui joined #gluster
06:29 aravindavk joined #gluster
06:33 ktosiek joined #gluster
06:33 rjoseph joined #gluster
06:36 glusterbot New news from newglusterbugs: [Bug 1096020] NFS server crashes due to invalid memory reference in rpc_get_program_vector_sizer <https://bugzilla.redhat.com/show_bug.cgi?id=1096020>
06:40 cppking joined #gluster
06:41 cppking Hello guys, I have a question
06:42 14WACTJBR joined #gluster
06:43 cppking glusterfs has no metadata server. If one gluster node has a power failure, is there a risk of losing data?
06:45 ekuric joined #gluster
06:47 ramteid cppking: from my experience no, but I'm a newbie and only used glusterfs with replication for now
06:47 naveed joined #gluster
06:48 ramteid cppking: But I assume you mean during write or so... can't tell to be honest
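For context, the replication ramteid mentions is chosen at volume-creation time; a minimal sketch with hypothetical hostnames and brick paths:

    # two-way replicated volume: every file is stored on both bricks,
    # so one node losing power should not lose already-written data
    gluster volume create gv0 replica 2 server1:/export/brick1 server2:/export/brick1
    gluster volume start gv0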
06:52 d-fence joined #gluster
06:54 mtanner_ joined #gluster
06:55 ctria joined #gluster
06:56 cppking ramteid: hi man, what's your local time
06:56 ramteid cppking: 08:56 CEST :)
06:59 premera joined #gluster
06:59 hchiramm__ joined #gluster
07:05 badone joined #gluster
07:07 rahulcs joined #gluster
07:08 d-fence joined #gluster
07:12 cppking left #gluster
07:12 cppking joined #gluster
07:21 ngoswami joined #gluster
07:24 eseyman joined #gluster
07:26 badone joined #gluster
07:30 harish joined #gluster
07:35 glusterbot New news from newglusterbugs: [Bug 1096020] NFS server crashes in _socket_read_vectored_request <https://bugzilla.redhat.com/show_bug.cgi?id=1096020> || [Bug 1096047] [barrier] ls --color gets blocked on the fuse mount, with O_SYNC writes unless barrier was disabled <https://bugzilla.redhat.com/show_bug.cgi?id=1096047>
07:37 rahulcs joined #gluster
07:39 fsimonce joined #gluster
07:53 rahulcs joined #gluster
07:54 andreask joined #gluster
07:55 andreask joined #gluster
07:59 stickyboy joined #gluster
08:00 ceiphas is it possible to use a gluster volume as a root partition?
08:01 hybrid512 joined #gluster
08:10 hybrid512 joined #gluster
08:13 andreask1 joined #gluster
08:13 andreask joined #gluster
08:16 liquidat joined #gluster
08:19 sprachgenerator joined #gluster
08:20 ramteid ceiphas: since glusterfs supports NFS and one can put root on NFS, I would say yes (https://www.kernel.org/doc/Documentation/filesystems/nfs/nfsroot.txt)
08:21 ceiphas ramteid: but with a fuse mount?
08:21 ramteid ceiphas: if you put the proper modules in initrd why not?
08:23 ramteid ceiphas: http://lists.gnu.org/archive/html/gluster-devel/2007-11/msg00070.html
08:23 glusterbot Title: Re: [Gluster-devel] glusterfs as root filesystem (at lists.gnu.org)
08:25 Honghui__ joined #gluster
08:42 MrAbaddon joined #gluster
08:46 Honghui__ joined #gluster
08:47 rahulcs joined #gluster
08:53 ceiphas which format options are good for a brick with xfs?
08:53 ceiphas anything that should be set?
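The question is not answered in the log; the upstream docs of the time commonly suggested a 512-byte inode size so gluster's extended attributes fit inside the inode. A sketch with a hypothetical device and brick path:

    # 512-byte inodes leave room for gluster's xattrs; inode64/noatime are common brick mount options
    mkfs.xfs -i size=512 /dev/sdb1
    mkdir -p /export/brick1
    mount -o noatime,inode64 /dev/sdb1 /export/brick1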
09:03 naveed joined #gluster
09:03 ppai joined #gluster
09:13 andreask joined #gluster
09:13 ceiphas how do i configure auth.allow to allow just two hosts to connect to the volume? is it comma-separated, colon-separated, or semicolon-separated?
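This one also goes unanswered here; auth.allow takes a comma-separated list of client addresses. A sketch with a hypothetical volume name and addresses:

    # only these two clients may mount the volume
    gluster volume set myvol auth.allow 192.168.0.10,192.168.0.11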
09:19 hagarth joined #gluster
09:23 Dave2 joined #gluster
09:28 Honghui__ joined #gluster
09:36 rahulcs joined #gluster
09:37 meghanam joined #gluster
09:37 meghanam_ joined #gluster
09:37 glusterbot New news from resolvedglusterbugs: [Bug 1054668] One brick always has alot of entries on `gluster volume heal gv0 info` <https://bugzilla.redhat.com/show_bug.cgi?id=1054668>
09:40 tryggvil joined #gluster
09:42 rahulcs joined #gluster
09:50 sac`away joined #gluster
09:50 Honghui__ joined #gluster
09:56 andreask joined #gluster
09:57 kanagaraj joined #gluster
10:04 rahulcs joined #gluster
10:12 tryggvil joined #gluster
10:15 surabhi joined #gluster
10:16 ppai joined #gluster
10:17 tryggvil_ joined #gluster
10:19 rahulcs joined #gluster
10:35 ira joined #gluster
10:37 Dasberger joined #gluster
10:50 itisravi_ joined #gluster
10:52 Ark joined #gluster
11:02 kkeithley1 joined #gluster
11:02 shubhendu_ joined #gluster
11:03 rwheeler joined #gluster
11:06 naveed joined #gluster
11:13 kanagaraj joined #gluster
11:14 RameshN joined #gluster
11:20 dusmant joined #gluster
11:20 nishanth joined #gluster
11:21 prasanthp joined #gluster
11:21 bala joined #gluster
11:23 andreask joined #gluster
11:29 jcsp joined #gluster
11:29 MrAbaddon joined #gluster
11:44 d3vz3r0 joined #gluster
11:51 gdubreui joined #gluster
11:58 chirino joined #gluster
11:58 tryggvil joined #gluster
12:10 morse joined #gluster
12:11 B21956 joined #gluster
12:14 jmarley joined #gluster
12:14 jmarley joined #gluster
12:18 rahulcs joined #gluster
12:24 Ark joined #gluster
12:31 ndarshan joined #gluster
12:43 jmarley joined #gluster
12:43 jmarley joined #gluster
12:51 sroy joined #gluster
12:54 plarsen joined #gluster
13:02 MrAbaddon joined #gluster
13:02 rahulcs joined #gluster
13:02 bennyturns joined #gluster
13:03 dbruhn joined #gluster
13:04 scuttle_ joined #gluster
13:06 Scott6 joined #gluster
13:07 shilpa joined #gluster
13:09 dusmant joined #gluster
13:12 primechuck joined #gluster
13:13 rahulcs joined #gluster
13:17 rahulcs joined #gluster
13:24 naveed joined #gluster
13:29 mjsmith2 joined #gluster
13:31 harish joined #gluster
13:35 japuzzo joined #gluster
13:39 cvdyoung Hi, is there a place where I could view all of the gluster volume options for 3.5?  Also, is there a command to show me a list of the options currently set on my volume?  gluster volume info shows me the latest ones.  Thank you
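Both questions map to CLI commands; a minimal sketch, assuming a hypothetical volume name myvol:

    # list all settable options with their defaults and descriptions
    gluster volume set help
    # options changed from their defaults appear under "Options Reconfigured"
    gluster volume info myvol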
13:39 sjm joined #gluster
13:40 ProT-0-TypE joined #gluster
13:42 jcsp joined #gluster
13:47 liquidat joined #gluster
13:55 theron joined #gluster
13:56 lmickh joined #gluster
14:04 jobewan joined #gluster
14:13 wushudoin joined #gluster
14:17 failshell joined #gluster
14:24 cvdyoung To recover from split-brain, is there an easier way of fixing it? I see a recovery url http://joejulian.name/blog/glusterfs-split-brain-recovery-made-easy/ with a python script that may be useful. Any thoughts on this site/procedure?
14:24 glusterbot Title: GlusterFS Split-Brain Recovery Made Easy (at joejulian.name)
14:27 Georgyo joined #gluster
14:31 JonathanD joined #gluster
14:35 sprachgenerator joined #gluster
14:37 [o__o] joined #gluster
15:03 LoudNoises joined #gluster
15:05 coredump joined #gluster
15:09 daMaestro joined #gluster
15:14 chirino_m joined #gluster
15:15 ghenry anyone see my post to the mailing list about du and the time it takes on a fuse glusterfs mount vs. a du directly on a brick? Both done on the same box
15:24 liammcdermott joined #gluster
15:29 naveed joined #gluster
15:33 liammcdermott Quick support question, please: when I run 'gluster volume set VOLNAME OPTION PARAMETER' is the change permanent?
15:35 kkeithley_ yes. you can verify/confirm this by looking at the volfile to see that the option has been added.
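A minimal sketch of that check, assuming the default glusterd working directory and a hypothetical volume name and option name:

    # the generated volfiles live under glusterd's working directory;
    # an option set with "volume set" shows up there
    grep -r "nfs.disable" /var/lib/glusterd/vols/myvol/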
15:36 liammcdermott Thanks!
15:36 Psi-Jack_ joined #gluster
15:40 purpleidea @tell later gdubreui you disappeared
15:40 glusterbot purpleidea: Error: I haven't seen later, I'll let you do the telling.
15:41 purpleidea @later tell gdubreui you disappeared
15:41 glusterbot purpleidea: The operation succeeded.
15:46 jag3773 joined #gluster
15:50 scuttle_ joined #gluster
15:53 eseyman joined #gluster
15:56 JoeJulian cvdyoung: Well I think it's a pretty useful tool. :)
15:57 cvdyoung LOL nice
15:59 cvdyoung JoeJulian, Will it detect the type of split-brain (file, data, metadata, or entry) and correct as necessary?
15:59 cvdyoung <-- Not well versed in python...
15:59 sprachgenerator joined #gluster
16:00 JoeJulian No, it splits the replica into separate mounts so you can delete the one that's "bad" without having to worry about gfid or any of that jazz.
16:00 JoeJulian There is no automated way to recover from split-brain. There's no way for a program to know which change was intended.
16:05 ramteid joined #gluster
16:14 sjm joined #gluster
16:14 sjm joined #gluster
16:15 G________ joined #gluster
16:15 G________ Hey Guys :)
16:16 Mo___ joined #gluster
16:16 cvdyoung JoeJulian: When I run the script it says it works, but I don't see anything mounted to /mnt/r*.
16:17 CyrilP I have gfid entries on info heal-failed (no files, only gfid entries), how do I deal with them?
16:17 ramteid CyrilP: https://gist.github.com/semiosis/4392640 ?
16:17 glusterbot Title: Glusterfs GFID Resolver - Turns a GFID into a real path in the brick (at gist.github.com)
16:18 CyrilP ramteid sounds what I need :)
16:18 CyrilP I'll give a try
16:19 ramteid CyrilP: yw... (it worked for me)...
16:20 CyrilP ok, this script seems to look for files pointing at the same inode
16:20 ramteid CyrilP: yes.
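Roughly what the resolver does for regular files (the .glusterfs entry is a hard link; directories are symlinks and can be read with readlink instead); a bash sketch with a hypothetical brick path and GFID:

    BRICK=/export/brick1                          # hypothetical brick path
    GFID=4c429f96-0ff7-4c32-848b-0eaa15ee1c5c     # hypothetical gfid
    # the hard link lives at .glusterfs/<first 2 hex chars>/<next 2 hex chars>/<full gfid>
    find "$BRICK" -samefile "$BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID" -not -path "*/.glusterfs/*"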
16:20 CyrilP with around 30TB of data, it will take ages :/
16:20 ramteid CyrilP: indeed :(
16:21 haomaiwa_ joined #gluster
16:21 ramteid CyrilP: did you try "heal" ?
16:21 CyrilP Nevermind I'll give a try
16:21 CyrilP yep
16:21 ramteid CyrilP: I see :(
16:21 CyrilP I have only 1030 heal-failed entries on one of my 2 nodes
16:21 CyrilP only gfids are listed, not files
16:21 CyrilP I really dunno why
16:22 ramteid CyrilP: good question, I can't answer, I just recall that I had that issue also several times. although not sure if this happened only on older glusterfs versions....
16:22 ramteid CyrilP: B/c haven't seen it lately
16:23 JoeJulian cvdyoung: great... Can you look in /var/log/glusterfs/$tempname.vol.log (where $tempname is random but should be the same as the volfiles in /mnt) and fpaste it?
16:23 CyrilP 3.4.2 is not that old
16:23 ramteid CyrilP: I see... and you're right of course
16:24 shilpa joined #gluster
16:24 ramteid CyrilP: hmm, you said you started the heal process... as far as I understand that can take some time too... especially with 30 TB....
16:24 zerick joined #gluster
16:25 CyrilP sure... and as I can't see whether it's finished or not, or the healing state, this doesn't help
16:25 ramteid CyrilP: Do you use glusterfs via NFS or fuse?
16:25 CyrilP both
16:26 ramteid CyrilP: I see, I stopped using NFS b/c of "too many" problems...
16:27 ramteid CyrilP: regarding heal and finished... good question.. unfortunately not sure about that either
16:27 CyrilP so
16:28 CyrilP let's say I have a GFID entry in vol2 when running info heal-failed (only in vol2).
16:28 CyrilP I use your script to find the associated file
16:28 CyrilP this file exists on all nodes
16:28 CyrilP and the getfattr output is the same on all nodes too
16:29 Amanda joined #gluster
16:29 CyrilP why is this entry still on the heal-failed list?
16:29 ramteid CyrilP: b/c it's raining.. SCNR
16:29 CyrilP :p
16:30 ramteid CyrilP: unfortunately I can't tell (you guessed already).... I also sometimes had odd failed entries....
16:30 ramteid CyrilP: In my case it happened if the file had g+s (chmod)
16:31 CyrilP weird
16:31 ramteid CyrilP: which makes no sense b/c if I created a different file by hand it worked
16:31 ramteid CyrilP: indeed :)
16:31 CyrilP I will try to remove one file from one brick and let the healing process run
16:32 ramteid CyrilP: yes, but if you are paranoid copy it somewhere safe and also delete the entry in .gluster
16:32 ramteid (.glusterfs)
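A minimal sketch of that procedure on the brick picked as the bad copy, with hypothetical paths and GFID; keep a backup, as ramteid suggests:

    BRICK=/export/brick1                          # hypothetical brick path
    GFID=4c429f96-0ff7-4c32-848b-0eaa15ee1c5c     # hypothetical gfid from heal-failed output
    cp -a "$BRICK/path/to/file" /root/backup.file               # keep a copy, just in case
    rm "$BRICK/path/to/file"
    rm "$BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID"        # and its matching hard link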
16:36 haomaiwang joined #gluster
16:41 tryggvil joined #gluster
16:49 CyrilP well it doesn't change anything, the gfid/file resynced between nodes but the entry is still in heal-failed
16:49 jbd1 joined #gluster
16:50 ramteid CyrilP: just to be sure, you deleted the file _and_ the referenced file in .glusterfs?
16:50 CyrilP yes
16:50 ramteid :(
16:50 CyrilP a few seconds later they were recreated
16:50 ramteid CyrilP: that would be fine
16:50 CyrilP but I still have the fucking entry in heal-failed
16:50 ramteid CyrilP: but I can't tell why it's still broken
16:50 ramteid CyrilP: anything useful in the self-heal log?
16:51 ramteid CyrilP: or you can try something "funny": copy the file somewhere, delete it on the vol, copy it back
16:52 CyrilP As everything is working fine, I will give up on this
16:52 CyrilP too time consuming
16:58 CyrilP "replicate-0: background meta-data data entry self-heal failed on" ... what does it mean?
16:59 [o__o] joined #gluster
17:00 ramteid CyrilP: unfortunately I have no idea
17:02 aviksil joined #gluster
17:11 chirino joined #gluster
17:12 [o__o] joined #gluster
17:22 CyrilP @JoeJulian: Hej, any idea for "background  meta-data data entry self-heal failed" ?
17:23 JoeJulian The only way to find out why the heal failed would be to look through the glustershd.log for that heal attempt.
17:24 JoeJulian Also be aware that command shows a log of entries which never gets cleared, even if the entry is subsequently healed.
17:26 CyrilP like : "remote operation failed: No such file or directory. Path: <gfid:4c429f96-0ff7-4c32-848b-0eaa15ee1c5c>"
17:26 CyrilP but the file / GFID exists on both bricks...
17:28 CyrilP by "Also be aware that command shows a log of entries which never gets cleared, even if the entry is subsequently healed", you mean that if it failed once but got healed the second time, the entry will still appear in info heal-failed?
17:32 arya joined #gluster
17:34 hagarth joined #gluster
17:47 daMaestro joined #gluster
17:51 zaitcev joined #gluster
18:08 estefanycanaima7 joined #gluster
18:11 MeatMuppet joined #gluster
18:12 sputnik13 joined #gluster
18:12 naveed joined #gluster
18:12 MeatMuppet Can new bricks be added to extend a volume while the volume is healing?
18:13 estefanycanaima7 hi, how are you
18:14 estefanycanaima7 left #gluster
18:18 theron joined #gluster
18:20 ktosiek joined #gluster
18:23 diegows joined #gluster
18:33 dusmant joined #gluster
18:35 primechuck joined #gluster
18:36 sputnik13 joined #gluster
18:36 primechuck joined #gluster
18:37 JoeJulian MeatMuppet: When you're back from lunch, yes. Extend the volume and rebalance...fix-layout.
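A minimal sketch of those two steps, with hypothetical hostnames and brick paths; on a replicated volume, bricks are added in multiples of the replica count:

    gluster volume add-brick myvol server3:/export/brick1 server4:/export/brick1
    # update the directory layout so new files can land on the new bricks
    gluster volume rebalance myvol fix-layout start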
18:38 theron joined #gluster
18:38 JoeJulian ... and we're going to talk about design. The more I think about it the more I want smaller bricks.
18:42 primechu_ joined #gluster
18:46 gmcwhist_ joined #gluster
18:49 theron joined #gluster
18:55 ninkotech joined #gluster
18:55 ninkotech__ joined #gluster
18:56 dbruhn joined #gluster
18:58 dbruhn_ joined #gluster
18:59 hchiramm_ joined #gluster
19:01 chirino_m joined #gluster
19:09 theron joined #gluster
19:09 sputnik13 joined #gluster
19:26 maduser joined #gluster
19:31 DanishMan joined #gluster
19:34 dusmant joined #gluster
19:38 ProT-0-TypE joined #gluster
19:46 coryc joined #gluster
19:47 coryc hopefully an easy question, have already spent several hours looking but not finding a definitive answer. Is it possible to change the gluster volume log file location on a client and if so, can it be done via the .vol file?
19:49 JoeJulian Yes and no. Yes, you can change the location. No, not via the .vol file; the change is done through the mount option, "log-file".
19:49 JoeJulian coryc: ^
19:52 coryc JoeJulian:  thanks.....searching.....can that be used in fstab?
19:52 JoeJulian yes
19:53 MeatMuppet joined #gluster
19:58 coryc JoeJulian:  Can you point me to an example? I keep finding examples of doing it manually with the log-file specified, and then examples for /etc/fstab that say you can add log-file but don't actually show where it is supposed to go
19:58 coryc ie: https://access.redhat.com/site/documentation/en-US/Red_Hat_Storage/2.1/html/Administration_Guide/sect-Administration_Guide-GlusterFS_Client-GlusterFS_Client-Mounting_Volumes.html
19:58 glusterbot Title: 9.2.3. Mounting Red Hat Storage Volumes (at access.redhat.com)
19:58 coryc ie: https://access.redhat.com/site/documentation/en-US/Red_Hat_Storage/2.1/html/Administration_Guide/sect-Administration_Guide-GlusterFS_Client-GlusterFS_Client-Automatic.html
19:59 glusterbot Title: 9.2.3.3. Mounting Volumes Automatically (at access.redhat.com)
19:59 coryc ie: http://gluster.org/community/documentation/index.php/Gluster_3.1:_Automatically_Mounting_Volumes
19:59 glusterbot Title: Gluster 3.1: Automatically Mounting Volumes - GlusterDocumentation (at gluster.org)
19:59 JoeJulian "mount -o log-file=/tmp/fubar.log server1:vol1 /mnt/vol1"
20:00 coryc JoeJulian:  that would be for manually mounting?
20:00 JoeJulian in fstab: server1:vol1 /mnt/vol1 glusterfs _netdev,log-file=/tmp/fubar.log 0 0
20:01 JoeJulian I forgot the "-t glusterfs" for the manual mount. :/
20:01 theron joined #gluster
20:01 coryc this is what i'm using in my /etc/fstab: /etc/glusterfs/brick.vol /mnt/path glusterfs defaults,nobootwait,acl 0 0
20:01 JoeJulian ah, ubuntu eh?
20:02 coryc so then i should try /etc/glusterfs/brick.vol /mnt/path glusterfs defaults,nobootwait,acl,log-file=/tmp/fubar.log 0 0
20:02 JoeJulian Just add ",log-file=whatever" after acl.
20:02 JoeJulian yep
20:02 coryc JoeJulian:  yeah, ubuntu...can't change it
20:09 coryc JoeJulian:  ok, looks like that's going to work and am pretty sure I can add that to puppet so thanks for the help
20:09 JoeJulian You're welcome.
20:10 JoeJulian coryc: Yep, just add it to the string under the "options" property for "mount".
20:12 coryc JoeJulian:  the whole reason I'm having to look into this is that one of my clients has a log file that is over 1GB in size every day
20:13 * JoeJulian raises an eyebrow...
20:13 coryc just spews: 2014-05-09 20:13:05.342514] I [dict.c:370:dict_get] (-->/usr/lib/x86_64-linux-gnu/glusterfs/3.4.3/xlator/performance/md-cache.so(mdc_lookup+0x308) [0x7fb0c6135638] (-->/usr/lib/x86_64-linux-gnu/glusterfs/3.4.3/xlator/debug/io-stats.so(io_stats_lookup_cbk+0x118) [0x7fb0c5f22c58] (-->/usr/lib/x86_64-linux-gnu/glusterfs/3.4.3/xlator/system/posix-acl.so(posix_acl_lookup_cbk+0x1e1) [0x7fb0c5d0dab1]))) 0-dict: !this || key=system.pos
20:13 coryc spews that every few minutes
20:14 JoeJulian Can you put that at fpaste.org? It got cut off and I'm curious about the rest of it.
20:15 JoeJulian Apparently has something to do with using acls.
20:16 coryc JoeJulian:  http://fpaste.org/100632/96665631/ pw is gluster
20:16 glusterbot Title: #100632 Fedora Project Pastebin (at fpaste.org)
20:16 coryc that's pretty cool, i'll have to remember that site
20:17 JoeJulian @paste
20:17 glusterbot JoeJulian: For RPM based distros you can yum install fpaste, for debian and ubuntu it's pastebinit. Then you can easily pipe command output to [f] paste [binit] and it'll give you a URL.
20:17 JoeJulian Even cooler. :D
20:24 coryc yep
20:25 JoeJulian coryc: afaict, this seems to have to do with ACLs not being set. The cache is trying to cache the acl reply from a lookup for those xattrs. The xattr doesn't exist in the dict so it's throwing that informational message. It could be argued that dict_get should be logging that as a debug instead of info. A quick fix would be to walk your tree and set ACLs on everything.
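A minimal sketch of that walk, with a hypothetical user and mount point; the aim is simply that every file carries an ACL so the lookup has something to cache:

    # recursively add an ACL entry; capital X only grants execute where it already makes sense
    setfacl -R -m u:appuser:rwX /mnt/path
    getfacl /mnt/path/somefile      # verify the ACL is now present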
20:26 JoeJulian If you feel the log level is wrong, feel free to file a bug report.
20:26 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
20:27 coryc JoeJulian:  I sort of understand that.......I bet it is because that host isn't puppetized yet so it's not getting the proper ACLs from the manifest
20:27 JoeJulian Sounds likely.
20:27 coryc am hoping we can replace that host soon
20:34 xiu_ b 4
20:39 naveed joined #gluster
20:42 ktosiek_ joined #gluster
20:58 mjsmith2 joined #gluster
21:05 theron joined #gluster
21:07 ProT-0-TypE joined #gluster
21:07 jcsp joined #gluster
21:18 gmcwhist_ joined #gluster
21:23 Scott6 joined #gluster
21:23 gmcwhist_ joined #gluster
21:24 primechuck joined #gluster
21:33 coryc left #gluster
21:42 gmcwhis__ joined #gluster
21:49 arya joined #gluster
21:52 Scott6 joined #gluster
22:07 \malex\ joined #gluster
22:07 jcsp joined #gluster
22:09 anotheral joined #gluster
22:09 anotheral hi
22:09 glusterbot anotheral: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
22:10 anotheral Does anyone know of commercial support offerings for gluster on ubuntu?
22:13 MeatMuppet anotheral: depending on the complexity of your needs, hastexo.com may be able to help you.
22:22 purpleidea anotheral: also ping semiosis
22:33 fidevo joined #gluster
22:34 jag3773 joined #gluster
22:37 tryggvil joined #gluster
22:40 thornton joined #gluster
22:47 gmcwhist_ joined #gluster
22:48 thornton left #gluster
22:50 thornton joined #gluster
22:51 sjm joined #gluster
22:54 MeatMuppet left #gluster
23:08 naveed joined #gluster
23:12 MrAbaddon joined #gluster
23:22 ctria joined #gluster
23:26 jag3773 joined #gluster
23:29 gmcwhistler joined #gluster
23:35 gmcwhist_ joined #gluster
23:35 thornton left #gluster
23:43 badone joined #gluster
