
IRC log for #gluster, 2016-12-19


All times shown according to UTC.

Time Nick Message
00:23 rastar joined #gluster
00:44 arpu joined #gluster
00:46 ShwethaHP joined #gluster
00:47 shdeng joined #gluster
00:49 Javezim Anyone had an issue with GlusterFS running on ZFS, where deleting data from the Gluster pool doesn't free it in the ZFS bricks' available space?
00:49 Javezim Doing a du -csh, the data has gone, but doing a df -h shows that the data is still there
00:49 Javezim the df -h never shrinks
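[A common cause of this on ZFS is that the freed blocks are still referenced by snapshots, or pinned by deleted-but-still-open files on the bricks. A quick-check sketch, assuming a hypothetical pool/dataset name `tank`:]

```shell
# Space retained by snapshots keeps df from shrinking even after deletes.
zfs list -t snapshot -o name,used            # list snapshots and the space they hold
zfs get usedbysnapshots,usedbydataset tank   # break down what is consuming the pool
lsof +D /tank 2>/dev/null | head             # deleted-but-open files also pin space
```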
00:53 kramdoss_ joined #gluster
01:05 shdeng joined #gluster
01:16 plarsen joined #gluster
01:53 haomaiwang joined #gluster
02:04 phileas joined #gluster
02:14 haomaiwang joined #gluster
02:34 derjohn_mobi joined #gluster
02:48 ilbot3 joined #gluster
02:48 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
03:00 kraynor5b joined #gluster
03:09 RustyB joined #gluster
03:14 haomaiwang joined #gluster
03:18 Lee1092 joined #gluster
03:20 nbalacha joined #gluster
03:23 itisravi joined #gluster
03:28 magrawal joined #gluster
03:32 wistof joined #gluster
03:42 hchiramm joined #gluster
03:43 atinmu joined #gluster
03:45 caitnop joined #gluster
03:49 riyas joined #gluster
03:51 RameshN joined #gluster
03:53 kramdoss_ joined #gluster
03:56 kdhananjay joined #gluster
04:14 haomaiwang joined #gluster
04:22 nishanth joined #gluster
04:24 farhorizon joined #gluster
04:28 mb_ joined #gluster
04:32 Shu6h3ndu joined #gluster
04:34 apandey joined #gluster
04:43 sbulage joined #gluster
04:57 buvanesh_kumar joined #gluster
05:01 prasanth joined #gluster
05:04 ankitraj joined #gluster
05:14 haomaiwang joined #gluster
05:28 Philambdo joined #gluster
05:31 om2 joined #gluster
05:38 rafi joined #gluster
05:38 aravindavk joined #gluster
05:46 Karan joined #gluster
05:48 ppai joined #gluster
05:56 lalatenduM joined #gluster
05:58 sanoj joined #gluster
06:00 jiffin joined #gluster
06:00 hchiramm joined #gluster
06:06 susant joined #gluster
06:10 edong23 joined #gluster
06:14 haomaiwang joined #gluster
06:16 Saravanakmr joined #gluster
06:17 kotreshhr joined #gluster
06:18 msvbhat joined #gluster
06:23 nbalacha joined #gluster
06:27 ShwethaHP joined #gluster
06:31 buvanesh_kumar joined #gluster
06:32 jkroon joined #gluster
06:41 ankitraj joined #gluster
06:42 ankitraj joined #gluster
06:44 gem joined #gluster
06:48 rafi1 joined #gluster
06:52 rafi joined #gluster
06:59 asriram|mtg joined #gluster
07:04 buvanesh_kumar joined #gluster
07:08 nbalacha joined #gluster
07:09 nishanth joined #gluster
07:14 haomaiwang joined #gluster
07:18 mhulsman joined #gluster
07:19 apandey joined #gluster
07:24 circ-user-oWpLl joined #gluster
07:25 jtux joined #gluster
07:42 jkroon joined #gluster
08:05 Wizek_ joined #gluster
08:14 haomaiwang joined #gluster
08:17 [diablo] joined #gluster
08:18 jri joined #gluster
08:20 ivan_rossi joined #gluster
08:35 mhulsman joined #gluster
08:39 masber joined #gluster
08:53 hackman joined #gluster
08:55 rastar joined #gluster
09:02 poornima joined #gluster
09:14 haomaiwang joined #gluster
09:20 glst joined #gluster
09:25 gem joined #gluster
09:30 JimmyZhang joined #gluster
09:31 sona joined #gluster
09:31 Slashman joined #gluster
09:32 sona joined #gluster
09:33 JimmyZhang In replicate mode, does glusterfs support posix lock sync? I observed in the glusterfs 3.6 release that when a client takes a posix lock on a glusterfs file while one server is up and one server is down, the lock is not synced to the problem server when it comes back online.
09:34 rastar joined #gluster
09:34 JimmyZhang 0) Precondition, glusterfs servers in replicate mode: server-0 online, server-1 online. 1) client-1 takes a write lock on file A: server-0 online, write lock on file A held by client-1; server-1 online, write lock on file A held by client-1. 2) server-1 restarted: server-0 online, write lock on file A held by client-1; server-1 online, no lock. 3) server-0 restarted: server-0 online, no lock.
09:43 Saravanakmr joined #gluster
09:46 ppai joined #gluster
09:47 asriram|mtg joined #gluster
09:48 derjohn_mobi joined #gluster
09:48 kdhananjay joined #gluster
09:56 skoduri joined #gluster
09:56 msvbhat joined #gluster
10:01 atinmu joined #gluster
10:08 shaunm joined #gluster
10:14 haomaiwang joined #gluster
10:24 bluenemo joined #gluster
10:35 greeny___ joined #gluster
10:37 poornima_ joined #gluster
10:48 asriram|mtg joined #gluster
10:50 mhulsman joined #gluster
11:00 atinmu joined #gluster
11:00 skoduri_ joined #gluster
11:02 ppai joined #gluster
11:02 Saravanakmr joined #gluster
11:04 mahendratech joined #gluster
11:09 msvbhat joined #gluster
11:10 ankitraj joined #gluster
11:12 gem joined #gluster
11:14 haomaiwang joined #gluster
11:26 kotreshhr left #gluster
11:28 kraynor5b_ joined #gluster
11:29 Philambdo joined #gluster
11:31 gluytium joined #gluster
11:32 d0nn1e joined #gluster
11:45 Dave joined #gluster
11:55 skoduri__ joined #gluster
11:55 Gambit15 joined #gluster
12:06 derjohn_mobi joined #gluster
12:16 nobody481 joined #gluster
12:18 mhulsman1 joined #gluster
12:21 atinmu joined #gluster
12:29 rafi1 joined #gluster
12:29 mhulsman joined #gluster
12:30 apandey joined #gluster
12:31 kotreshhr joined #gluster
12:31 rafi1 joined #gluster
12:42 sona joined #gluster
12:46 p7mo joined #gluster
12:53 kdhananjay joined #gluster
13:03 kraynor5b joined #gluster
13:06 alvinstarr I am seeing lots of errors like incomplete sync, retrying changelogs: CHANGELOG.* and when I find the associated changelog it has only a single entry M00000000-0000-0000-0000-000000000001^@17^@. Is this right?
13:11 johnmilton joined #gluster
13:18 buvanesh_kumar joined #gluster
13:22 kramdoss_ joined #gluster
13:25 pjrebollo joined #gluster
13:25 jtux joined #gluster
13:26 unclemarc joined #gluster
13:29 pjrebollo I'm getting "file changed as we read it" when using GNU tar reading from Gluster volume.  The environment is CentOS 7.2, Gluster 3.8.5 on both client and server and TAR 1.26.
13:29 bartden joined #gluster
13:29 pjrebollo Any suggestion on how to debug this problem?
13:29 bartden hi, i see following statement in glusterfs logs , do i need to worry ? “-->/usr/lib64/glusterfs/3.7.5/xlator/system/posix-acl.so(handling_other_acl_related_xattr+0x30) [0x7f83687ff670] -->/usr/lib64/libglusterfs.so.0(dict_get+0x63) [0x349941e563] ) 0-dict: !this || key=system.posix_acl_access [Invalid argument]”
13:29 glusterbot bartden: “'s karma is now -1
13:30 jiffin bartden: which version?
13:30 ankitraj joined #gluster
13:30 jiffin bartden: i mean gluster
13:31 bartden 3.7.5-1.el6
13:33 jiffin bartden: there should not be any functionality issue IMO,  the issue got fixed in 3.7.9
13:33 jiffin http://review.gluster.org/#/c/13452/
13:33 glusterbot Title: Gerrit Code Review (at review.gluster.org)
13:34 bartden ok thanks!
13:35 bartden I have another special case: at a certain point a user who has access rights to the files via the fuse native client tries to access a file. I see a permission denied in the cluster logs. But after a while it does have access. Does this ring any bells? Keep in mind, i’m using posix ACLs as well on that file, but the user who tries to access has normal linux access rights.
13:37 fsimonce joined #gluster
13:40 jiffin bartden: if u have specific (easy) steps  to reproduce issue then I can try it on my setup and let u know, Maybe it is due to some races in the code
13:41 bartden no, can’t reproduce it … it happened last week, can also be a race issue within the application …
13:42 bartden jiffin: is it logged somewhere when permissions are changed on files in gluster?.
13:43 ankitraj joined #gluster
13:46 sona joined #gluster
13:46 jiffin bartden: with normal logging I guess it cannot be found
13:46 bartden ok thx
13:48 jiffin by default gluster runs with the INFO log level. If u decrease it to DEBUG/TRACE maybe u can see that. But it will impact performance
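[For reference, the log level jiffin mentions is set per volume. A sketch, assuming a hypothetical volume name `myvol`:]

```shell
# Raise brick- and client-side log levels (INFO is the default).
gluster volume set myvol diagnostics.brick-log-level DEBUG
gluster volume set myvol diagnostics.client-log-level DEBUG
# Drop back to INFO when done; DEBUG/TRACE is noisy and costs performance.
gluster volume set myvol diagnostics.brick-log-level INFO
gluster volume set myvol diagnostics.client-log-level INFO
```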
13:56 rwheeler joined #gluster
13:57 poornima_ joined #gluster
13:57 Wizek_ joined #gluster
14:02 pjrebollo Any advice on TAR giving "file changed as we read it" from Gluster volume?
14:03 cloph doesn't seem gluster specific..
14:04 rafi joined #gluster
14:06 Shu6h3ndu joined #gluster
14:06 pjrebollo It can be.  There are reported issues related to Gluster. https://bugzilla.redhat.com/show_bug.cgi?id=1302948
14:06 glusterbot Bug 1302948: medium, medium, ---, sabansal, CLOSED CURRENTRELEASE, tar complains: <fileName>: file changed as we read it
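[This doesn't fix the underlying race the bug describes, but a common operational workaround is to suppress the warning and tolerate GNU tar's exit code 1 ("some files differ") while still failing hard on real errors (exit code 2). A runnable sketch:]

```shell
# Archive a directory while tolerating "file changed as we read it".
src=$(mktemp -d)
echo hello > "$src/file.txt"

tar --warning=no-file-changed -cf /tmp/backup.tar -C "$src" .
rc=$?
# GNU tar: 0 = success, 1 = some files changed while being read, 2 = fatal error.
if [ "$rc" -ne 0 ] && [ "$rc" -ne 1 ]; then
    echo "tar failed with exit code $rc" >&2
    exit "$rc"
fi
echo "archive written"
```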
14:08 Philambdo joined #gluster
14:23 cvstealth joined #gluster
14:31 Gnomethrower joined #gluster
14:38 squizzi joined #gluster
14:39 B21956 joined #gluster
14:39 susant1 joined #gluster
14:40 susant1 joined #gluster
14:40 susant1 joined #gluster
14:41 susant1 joined #gluster
14:42 mhulsman joined #gluster
14:46 susant joined #gluster
14:50 lalatenduM joined #gluster
14:53 skylar joined #gluster
14:54 jiffin joined #gluster
14:55 jiffin joined #gluster
15:05 farhorizon joined #gluster
15:10 ppai joined #gluster
15:11 nbalacha joined #gluster
15:17 [o__o] joined #gluster
15:17 jpospisil_ joined #gluster
15:18 Gambit15 joined #gluster
15:27 msvbhat joined #gluster
15:39 nishanth joined #gluster
15:48 aravindavk joined #gluster
15:55 Asako joined #gluster
15:55 Asako is there a good guide for setting up the native NFS server in gluster?
15:55 susant joined #gluster
15:57 Asako do I just set nfs.disable to off?
16:01 kkeithley yes
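[For the record, enabling the legacy built-in (gNFS) server is a volume option. A sketch, assuming a hypothetical volume name `myvol`:]

```shell
gluster volume set myvol nfs.disable off              # enable the built-in gNFS server
gluster volume status myvol                           # check that an "NFS Server" process is listed
# gNFS exports to * by default; restrict clients if needed:
gluster volume set myvol nfs.rpc-auth-allow 192.168.1.*
showmount -e localhost                                # verify the export from a client's view
```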
16:01 skoduri joined #gluster
16:03 Gnomethrower joined #gluster
16:05 Philambdo joined #gluster
16:05 Asako I'm getting an rpc.mountd unmatched host error when I try to mount the volume
16:06 Asako how do I export a volume that isn't mounted?
16:08 wushudoin joined #gluster
16:08 Asako guess I have to restart the volume
16:10 cloph Asako: what do you mean export? via nfs? If so, then this is only possible when you're using ganesha
16:10 cloph if using the kernel's nfs, you can only export the mounted volume/directories
16:11 abyss^ joined #gluster
16:13 shaunm joined #gluster
16:14 Iouns joined #gluster
16:17 Asako cloph: yeah, I want to use the built-in nfs server
16:19 cloph ganesha is not built-in though, it is separate, but knows how to talk to native gluster.
16:19 Asako looks like it exports the volume to * by default
16:19 cloph for ganesha, you can limit what permissions to grant/what IPs to allow in the export definition, similar to what you would do with kernel nfsd
16:20 cloph just different file and different syntax...
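[The export definition cloph refers to looks roughly like this. A sketch of an NFS-Ganesha export block, with the volume name `myvol` and the client subnet both hypothetical:]

```
EXPORT {
    Export_Id = 2;
    Path = "/myvol";
    Pseudo = "/myvol";
    Access_Type = RW;

    FSAL {
        Name = GLUSTER;        # talk to gluster natively via libgfapi
        Hostname = localhost;
        Volume = "myvol";
    }

    CLIENT {
        Clients = 192.168.1.0/24;   # limit which IPs may mount, like /etc/exports
        Access_Type = RW;
    }
}
```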
16:21 Asako not a big deal right now, native client mounts would have the same access
16:21 Asako but we have older servers that can only do nfs
16:22 Asako Fedora 14 has gluster but it's like version 2.6
16:22 Asako glusterfs-client-3.0.7-1.fc14.x86_64 actually
16:24 cloph uh, that's oooold
16:24 snehring joined #gluster
16:34 rastar joined #gluster
16:34 Philambdo joined #gluster
16:35 Asako cloph: yup
16:37 Philambdo joined #gluster
16:43 plarsen joined #gluster
16:53 Karan joined #gluster
16:59 jri joined #gluster
17:05 cjb1 joined #gluster
17:06 cjb1 Hi all, I've got a strange issue where all of my geo-replication sessions have disappeared from the status output, but they are still syncing
17:07 cjb1 i'm wondering if this could be because of one of the peer cluster had a DNS issue over night, and caused all the other sessions to stop reporting (the other cluster is not syncing currently, obviously). gluster, in fact, is unable to get to that remote location at all
17:08 cjb1 the other sessions to other clusters are still syncing though, so those are indeed still healthy, but no administrative commands are working
17:08 cjb1 "we" are in the process of fixing the DNS issue to see if that's the cause of the geo-replication commands not working, but I'm wondering if anyone here has ever seen anything like this before?
17:21 BitByteNybble110 joined #gluster
17:23 hchiramm joined #gluster
17:48 hchiramm joined #gluster
17:54 mhulsman joined #gluster
17:58 marko_ joined #gluster
17:59 bowhunter joined #gluster
18:03 ivan_rossi left #gluster
18:08 susant1 joined #gluster
18:08 susant joined #gluster
18:08 jiffin joined #gluster
18:09 susant joined #gluster
18:11 JoeJulian cjb1: That sounds pretty likely. The open TCP connections won't be affected by a DNS issue, but any new connections would be.
18:12 susant joined #gluster
18:13 susant joined #gluster
18:22 jri joined #gluster
18:23 kotreshhr left #gluster
18:51 cjb1 @joejulian, that was the cause, to circle back for everyone else here…
18:52 ira joined #gluster
19:02 ahino joined #gluster
19:02 rwheeler joined #gluster
19:13 kenansulayman joined #gluster
19:14 mhulsman1 joined #gluster
19:16 shaunm joined #gluster
19:24 JoeJulian cjb1++
19:24 glusterbot JoeJulian: cjb1's karma is now 1
19:38 hchiramm joined #gluster
19:41 Asako I'm having an issue syncing a 1.7 TB directory over to gluster.  Are there any performance recommendations for working with large directories?
19:42 Asako all I'm seeing is a bunch of lstat calls in strace
19:43 mhulsman joined #gluster
19:44 derjohn_mobi joined #gluster
19:46 JoeJulian Don't use rsync/
19:46 JoeJulian ?
19:47 JoeJulian I kind-of like cpio for that. It's really stupid and just does one thing.
19:48 Asako yeah, I'm thinking that or tar
19:48 JoeJulian tar is the next-best. You can even set block sizes to make better use of your network.
19:51 alvinstarr If you're doing your copy over a high-speed, high-latency link you may want to take a look at bbcp
19:53 Asako rsync works it just takes hours to run
19:56 JoeJulian rsync's too smart. It does a lot of checking to make sure it does the least amount of data transfer, but that checking on a large directory can be inefficient for clustered storage.
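[The tar-based alternative JoeJulian describes is a straight pipe, with a larger blocking factor so the network sees bigger writes. A runnable sketch:]

```shell
# Copy a tree with tar on both ends of a pipe. -b sets the blocking factor in
# 512-byte records, so -b 128 issues 64 KiB writes instead of tar's default 20.
src=$(mktemp -d)
dst=$(mktemp -d)
mkdir -p "$src/sub"
echo data > "$src/sub/a.txt"

tar -c -b 128 -f - -C "$src" . | tar -x -b 128 -f - -C "$dst"

ls "$dst/sub"
```

Unlike rsync, this does no per-file comparison against the destination, which is exactly why it behaves better for an initial bulk load into clustered storage.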
19:56 alvinstarr JoeJulian: I am still having 0 luck getting geo-sync working out of our production gluster volume.
19:59 Asako I should just point our fedora mirror script to use the gluster dir
19:59 alvinstarr JoeJulian: the Hybrid Crawl gets part way through and then seems to stall.
20:02 mhulsman joined #gluster
20:10 JoeJulian all I can suggest is upping log levels and looking for changes. Something must be happening to cause that.
20:11 JoeJulian Maybe some state dumps might be relevant. Did you file a bug report? You should and include logs, state dumps, and anything else you can think of that might be relevant.
20:11 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
20:12 JoeJulian I'm happy to look at data and help speculate. The developers are also pretty responsive when there's sufficient data.
20:18 alvinstarr The logs are huge at this point already. I posted a set of logs to the gluster-users mailing list with little luck. I would like to submit a bug report, if only I could describe the bug.
20:19 mhulsman joined #gluster
20:22 alvinstarr What could cause "failed on peer with OSError"?
20:28 alvinstarr A snippet of the warnings and errors around that area is at http://pastebin.centos.org/60051/
20:32 mhulsman joined #gluster
20:48 jwd joined #gluster
20:54 mhulsman joined #gluster
20:56 social joined #gluster
20:58 JoeJulian alvinstarr: Since it failed on "peer" the answer /should/ be on that peer - whichever peer that is. Not impressed with the amount of detail in that error message.
21:13 mhulsman joined #gluster
21:22 arpu joined #gluster
21:42 hchiramm joined #gluster
21:51 ira joined #gluster
22:11 niknakpaddywak joined #gluster
22:22 ahino joined #gluster
22:34 siel joined #gluster
23:02 pjrebollo joined #gluster
23:16 siel joined #gluster
23:39 johnmilton joined #gluster
23:57 derjohn_mobi joined #gluster
