
IRC log for #gluster, 2016-07-13


All times shown according to UTC.

Time Nick Message
00:13 hchiramm_ joined #gluster
00:27 DaKnOb joined #gluster
00:35 keiviw joined #gluster
00:37 keiviw hi
00:37 glusterbot keiviw: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
00:54 shdeng joined #gluster
01:10 hchiramm_ joined #gluster
01:19 shyam joined #gluster
01:22 keiviw I have installed GlusterFS 3.7.13, and now I get Fsync failures when saving files with the O_DIRECT flag in open() and create()
01:22 keiviw I tried to save a file in vi and got this error:
01:22 keiviw test E667:Fsync failed
01:23 Lee1092 joined #gluster
01:23 keiviw the mnt-test.log: [fuse-bridge.c:968:fuse_err_cbk] 0-glusterfs-fuse: 102:         FSYNC() ERR => -1 (Invalid argument)
01:23 keiviw [fuse-bridge.c:968:fuse_err_cbk] 0-glusterfs-fuse: 174:         FLUSH() ERR => -1 (Invalid argument)
01:24 keiviw the brick logs: E [posix.c:2128:posix_writev] 0-test-posix: write failed: offset 0, Invalid argument
01:24 keiviw I [server3_1-fops.c:1414:server_writev_cbk] 0-test-server: 8569840: WRITEV 5 (526a3118-9994-429e-afc0-4aa063606bde) ==> -1 (Invalid argument)
01:25 keiviw I have checked the page alignment, i.e. the file was larger than one page; a part of the file (one page size) was saved successfully, and the rest (more than one page but less than two pages) was lost
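The EINVAL from posix_writev is the classic O_DIRECT symptom: direct I/O requires the buffer address, file offset, and write length to be aligned to the underlying block size, and vi's save path issues unaligned writes. A quick way to see the alignment requirement, assuming a hypothetical FUSE mount at /mnt/test (whether the unaligned write actually fails depends on the brick filesystem and gluster version):

```shell
# Hypothetical mount point for illustration only.
MNT=/mnt/test

# An unaligned direct write (1000 bytes) is typically rejected with EINVAL:
dd if=/dev/zero of=$MNT/direct-test bs=1000 count=1 oflag=direct

# A write aligned to the logical block size (commonly 512 B or 4 KiB) succeeds:
dd if=/dev/zero of=$MNT/direct-test bs=4096 count=1 oflag=direct
```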
01:32 harish joined #gluster
01:43 B21956 joined #gluster
01:45 kukulogy joined #gluster
01:48 Gnomethrower joined #gluster
01:48 kukulogy Hello, good morning. I'm new to gluster. I have a question: how do you properly maintain a Striped volume? I'm curious what will happen if one of the bricks which contains chunks of your file fails?
02:08 nishanth joined #gluster
02:10 daMaestro joined #gluster
02:53 kukulogy joined #gluster
02:54 ndk_ joined #gluster
02:58 arcolife joined #gluster
03:24 sanoj joined #gluster
03:25 glustin joined #gluster
03:26 magrawal joined #gluster
03:29 keiviw_1 joined #gluster
03:31 kramdoss_ joined #gluster
03:46 RameshN joined #gluster
03:47 atinm joined #gluster
03:51 devyani7 joined #gluster
04:01 atinm joined #gluster
04:01 nbalacha joined #gluster
04:05 poornimag joined #gluster
04:15 nehar joined #gluster
04:19 shubhendu joined #gluster
04:21 MikeLupe joined #gluster
04:26 kotreshhr joined #gluster
04:32 Manikandan joined #gluster
04:34 pranithk1 joined #gluster
04:34 pranithk1 keiviw: hi
04:57 hchiramm joined #gluster
04:58 prasanth joined #gluster
04:58 aravindavk joined #gluster
05:01 jiffin joined #gluster
05:04 aspandey joined #gluster
05:06 level7 joined #gluster
05:08 nbalacha joined #gluster
05:08 kramdoss_ joined #gluster
05:14 ndarshan joined #gluster
05:14 Bhaskarakiran joined #gluster
05:15 hchiramm joined #gluster
05:17 karthik_ joined #gluster
05:18 sakshi joined #gluster
05:20 Bhaskarakiran joined #gluster
05:20 nishanth joined #gluster
05:25 _ndevos joined #gluster
05:25 _ndevos joined #gluster
05:25 Gnomethrower joined #gluster
05:27 archit_ joined #gluster
05:27 cliluw joined #gluster
05:28 ashiq joined #gluster
05:30 bkolden joined #gluster
05:34 rafi joined #gluster
05:41 Apeksha joined #gluster
05:41 nbalacha joined #gluster
05:49 ramky joined #gluster
05:51 kramdoss_ joined #gluster
05:54 skoduri joined #gluster
06:01 satya4ever joined #gluster
06:07 hgowtham joined #gluster
06:09 jtux joined #gluster
06:18 msvbhat joined #gluster
06:19 sakshi joined #gluster
06:26 kshlm joined #gluster
06:27 pur joined #gluster
06:28 Klas how well does it work to have a client at a mismatched version from the server?
06:29 Klas to be more specific, I realized the headache of building 3.7.# on Ubuntu Precise, and I am wondering if a 3.6.# client will work well with a 3.7.13 server?
06:31 Saravanakmr joined #gluster
06:37 keiviw Are dentries and inodes cached on the GlusterFS server even when the GlusterFS client (mounted via fuse) already gets metadata caching from the fuse kernel module??
06:37 kdhananjay joined #gluster
06:38 devyani7_ joined #gluster
06:59 devyani7_ joined #gluster
07:02 jri joined #gluster
07:03 derjohn_mob joined #gluster
07:10 ppai joined #gluster
07:19 ivan_rossi joined #gluster
07:20 karthik_ joined #gluster
07:21 Saravanakmr joined #gluster
07:25 fsimonce joined #gluster
07:49 aspandey joined #gluster
07:55 derjohn_mob joined #gluster
07:58 Slashman joined #gluster
08:04 hackman joined #gluster
08:06 Seth_Karlo joined #gluster
08:07 armyriad joined #gluster
08:07 JesperA joined #gluster
08:08 deniszh joined #gluster
08:23 muneerse joined #gluster
08:25 level7_ joined #gluster
08:27 auzty joined #gluster
08:29 armyriad joined #gluster
08:49 Muthu joined #gluster
08:54 [Enrico] joined #gluster
08:57 Vaizki joined #gluster
08:58 derjohn_mob joined #gluster
09:06 atalur joined #gluster
09:10 kramdoss_ joined #gluster
09:11 cloph hey * - can I query the current direct-io-mode of a fuse mount somehow? /proc/mounts or mount command itself doesn't list it...
09:14 post-factum cloph: probably, statedump could tell you
09:14 * cloph googles on how to do a statedump - thanks!
09:16 Saravanakmr joined #gluster
09:16 cloph unfortunately grepping for direct in that dump doesn't turn up anything..
09:19 cloph I think I'll have to try explicit remounts
09:20 post-factum cloph: no, statedump holds that info
09:20 post-factum cloph: under xlator.mount.fuse.priv section
09:20 post-factum cloph: like direct_io_mode=0
09:20 post-factum (v3.7.13 here)
09:23 cloph hmm. sudo grep fuse /var/run/gluster/srv-backup-gluster.1481.dump.1468401333 returns nothing (did do a sudo gluster volume statedump backup-berta all to create the statedump) 3.7.11 here
09:23 karthik_ joined #gluster
09:24 eryc joined #gluster
09:24 post-factum cloph: do you make statedump on client side?
09:25 cloph in this case the mount is on the server - it is a backup volume (only one brick) using gluster for geo-replication
09:25 rastar joined #gluster
09:26 post-factum and how did you take that statedump?
09:26 cloph see above - using "sudo gluster volume statedump backup-berta all"
09:26 post-factum no
09:26 post-factum kill -USR1 pid_of_glusterfs_client_process
09:28 cloph ah, should have read the page to the bottom :-)
09:30 cloph direct_io_mode=0
09:30 cloph \o/
09:30 cloph thanks once again
09:30 post-factum np
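The statedump recipe post-factum describes can be sketched as follows; the mount path in the pgrep pattern is illustrative, and the exact dump filename and directory vary by distro and version:

```shell
# Find the glusterfs client process serving the FUSE mount (pattern is illustrative):
pid=$(pgrep -f 'glusterfs.*/srv/backup')

# SIGUSR1 asks that process to write a statedump, typically under /var/run/gluster/:
kill -USR1 "$pid"

# The FUSE private section records the effective direct-io mode:
grep -A 20 'xlator.mount.fuse.priv' /var/run/gluster/*.dump.* | grep direct_io_mode
```

Note that `gluster volume statedump <vol>` only dumps the brick processes on the servers; for client-side settings like direct_io_mode, the signal has to go to the client-side glusterfs process, which is exactly the distinction cloph ran into above.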
09:36 level7 joined #gluster
10:00 snila i see fixed-uid and fixed-gid is mentioned on this page: http://www.gluster.org/community/documentation/index.php/Translators/features/filter
10:00 snila is this implemented? and if so, how do i set this option?
10:01 arcolife joined #gluster
10:03 B21956 joined #gluster
10:08 B21956 joined #gluster
10:13 msvbhat joined #gluster
10:16 derjohn_mob joined #gluster
10:23 Gnomethrower joined #gluster
10:33 kshlm joined #gluster
10:35 nbalacha joined #gluster
10:40 ira joined #gluster
10:46 rocketeer125 joined #gluster
10:47 rocketeer125 Hey is there something wrong with the GlusterFS repo? http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/
10:47 glusterbot Title: Index of /pub/gluster/glusterfs/LATEST/CentOS (at download.gluster.org)
10:48 anoopcs rocketeer125, RPMs for RHEL, CentOS, and other RHEL Clones are available from the
10:48 anoopcs CentOS Storage SIG.
10:49 anoopcs rocketeer125, See the notes @ http://download.gluster.org/pub/gluster/glusterfs/LATEST/
10:49 glusterbot Title: Index of /pub/gluster/glusterfs/LATEST (at download.gluster.org)
10:49 rocketeer125 @anoopcs Great, thanks - might want to update http://www.gluster.org/community/documentation/index.php/Getting_started_install
10:50 anoopcs rocketeer125, Oh...Thanks for pointing it out.
10:51 armyriad joined #gluster
10:51 Manikandan joined #gluster
10:52 anoopcs rocketeer125, But you can go through the official documentation from gluster.readthedocs.org.
10:54 msvbhat joined #gluster
10:57 anoopcs Ahh....We need to fix there too.
10:58 anoopcs snila, I could see the presence of the filter translator in source but I think its implementation is partial and not well tested.
11:00 karthik_ joined #gluster
11:07 MrAbaddon joined #gluster
11:08 rafaels joined #gluster
11:08 armyriad joined #gluster
11:12 kshlm joined #gluster
11:12 rocketeer125 @anoopcs The same broken link is in the official docs: http://gluster.readthedocs.io/en/latest/Install-Guide/Install/
11:12 anoopcs rocketeer125, That's what I said before.. We must fix it.
11:13 rocketeer125 @anoops thanks
11:13 anoopcs rocketeer125, np.
11:14 anoopcs rocketeer125, Would you mind raising a Pull Request for the same at https://github.com/gluster/glusterdocs/blob/master/Install-Guide/Install.md?
11:14 glusterbot Title: glusterdocs/Install.md at master · gluster/glusterdocs · GitHub (at github.com)
11:16 armyriad joined #gluster
11:16 rocketeer125 @anoopcs ok
11:17 anoopcs rocketeer125++ Cool.
11:17 glusterbot anoopcs: rocketeer125's karma is now 1
11:23 prasanth joined #gluster
11:26 Manikandan joined #gluster
11:30 robb_nl joined #gluster
11:32 kkeithley Gluster Community Meeting starts in 30min in #gluster-meeting
11:33 rwheeler joined #gluster
11:34 wadeholler joined #gluster
11:35 jiffin joined #gluster
11:36 armyriad joined #gluster
11:37 devyani7_ joined #gluster
11:37 MikeLupe joined #gluster
11:39 johnmilton joined #gluster
11:51 ghollies joined #gluster
11:51 ppai joined #gluster
11:53 rafaels joined #gluster
11:54 pur joined #gluster
11:58 pur joined #gluster
11:58 kaushal_ joined #gluster
11:59 wadeholler joined #gluster
11:59 pur joined #gluster
11:59 rocketeer125 anoops: CentOS SIG release repo only works for CentOS - RHEL seems no longer covered. Happy to submit PR but there's a gap: http://pastie.org/10905957
11:59 rocketeer125 anoopcs
11:59 kkeithley Gluster Community Meeting starts now in #gluster-meeting
12:00 level7 joined #gluster
12:01 jdarcy joined #gluster
12:07 ghollies Hello, I was wondering if anyone knew of an easy way to get the volume id on a box that is mounting the volume. It is mounting the volume using the native client (ie: mount -t glusterfs <ip>:/<volumename> <locationToMountTo>)
12:11 post-factum ghollies: maybe, statedump has the info you need
12:13 nbalacha joined #gluster
12:13 unforgiven512 joined #gluster
12:14 ghollies post-factum: when running the command from the client box, returns that the volume doesn't exists (which makes sense since, its not part of the volume directly)
12:14 unforgiven512 joined #gluster
12:14 post-factum ghollies: what? which command?
12:14 ghollies gluster volume statedump <volume name>
12:14 post-factum ghollies: no
12:14 unforgiven512 joined #gluster
12:15 post-factum ghollies: https://gluster.readthedocs.io/en/latest/Troubleshooting/statedump/
12:15 glusterbot Title: Statedump - Gluster Docs (at gluster.readthedocs.io)
12:15 post-factum ghollies: kill -USR1
12:15 unforgiven512 joined #gluster
12:16 unforgiven512 joined #gluster
12:16 unforgiven512 joined #gluster
12:17 unforgiven512 joined #gluster
12:17 unforgiven512 joined #gluster
12:18 unforgiven512 joined #gluster
12:19 Saravanakmr joined #gluster
12:28 rafaels joined #gluster
12:36 ppai joined #gluster
12:37 R0ok_ joined #gluster
12:50 derjohn_mob joined #gluster
12:52 rwheeler joined #gluster
13:03 kshlm joined #gluster
13:06 robb_nl joined #gluster
13:10 ashiq joined #gluster
13:12 ndevos ghollies: there is also a /<mountpoint>/.meta/ directory, maybe it contains the volume-id - the meta-stuff might need a volume option to be turned on?
13:12 ndevos ghollies: you can also try to read the trusted.glusterfs.volume-id (or similar?) extended attribute with getfattr
13:21 muneerse2 joined #gluster
13:23 jesk /join #opennms
13:26 julim joined #gluster
13:30 poornimag joined #gluster
13:33 julim joined #gluster
13:34 julim joined #gluster
13:40 harish joined #gluster
13:41 plarsen joined #gluster
13:47 dnunez joined #gluster
13:52 atalur joined #gluster
13:53 jdarcy joined #gluster
13:56 [Enrico] joined #gluster
13:58 nehar joined #gluster
14:03 kshlm joined #gluster
14:04 ivan_rossi left #gluster
14:08 ic0n_ joined #gluster
14:09 jesk I moved my homedir on gluster
14:10 shyam1 joined #gluster
14:10 jesk starting chrome takes like 20 seconds on 24-core xeon server :D
14:10 jesk anything I can optimize there?
14:11 F2Knight joined #gluster
14:15 Manikandan joined #gluster
14:18 msvbhat joined #gluster
14:32 aravindavk joined #gluster
14:33 skylar joined #gluster
14:34 level7 joined #gluster
14:38 jiffin joined #gluster
14:40 bowhunter joined #gluster
14:41 sanoj joined #gluster
14:43 derjohn_mob joined #gluster
14:48 plarsen joined #gluster
14:50 hackman joined #gluster
14:58 pranithk1 joined #gluster
15:04 atinm joined #gluster
15:04 wushudoin joined #gluster
15:07 ghollies ndevos: I'll look into that .meta folder. im not seeing it on the mount currently though. (any suggestions on how to turn it on would be appreciated :) ) also I can only get the trusted.glusterfs.volume-id attribute off of the brick, but not on the mount
15:11 rocketeer125 joined #gluster
15:12 Siavash joined #gluster
15:12 Siavash joined #gluster
15:20 emmajane joined #gluster
15:21 ramky joined #gluster
15:27 plarsen joined #gluster
15:42 JoeJulian ghollies: you won't see it in a directory listing. It's there though.
15:44 derjohn_mob joined #gluster
15:52 ghollies JoeJulian: neat, thanks!
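Putting ndevos' and JoeJulian's suggestions together, a sketch of both ways to read the volume id (the brick path and mount point are hypothetical, and the layout under .meta varies by version):

```shell
# Server side: the volume id is stored as an extended attribute on each brick root:
getfattr -n trusted.glusterfs.volume-id -e hex /data/brick1

# Client side: .meta is a virtual directory that never appears in directory
# listings, but can be entered directly on the mount:
ls /mnt/volume/.meta/graphs/active/
```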
15:53 Gnomethrower joined #gluster
15:54 skoduri joined #gluster
15:57 julim_ joined #gluster
16:00 robb_nl joined #gluster
16:06 chirino joined #gluster
16:08 ramky joined #gluster
16:19 ashiq joined #gluster
16:22 kpease joined #gluster
16:24 Apeksha joined #gluster
16:28 muneerse joined #gluster
16:31 muneerse2 joined #gluster
16:33 squizzi_ joined #gluster
16:46 muneerse joined #gluster
16:46 Siavash joined #gluster
16:46 Siavash joined #gluster
16:47 karnan joined #gluster
16:53 shaunm joined #gluster
16:53 nishanth joined #gluster
16:56 shubhendu joined #gluster
16:59 julim joined #gluster
17:09 ashiq joined #gluster
17:23 ashiq_ joined #gluster
17:32 gluster-newbie joined #gluster
17:36 gluster-newbie I have a replicated 3-node cluster and at one point, I am copying about 16 large files to the mount.  During this time, we get the error message "[rpc-clnt-ping.c:165:rpc_clnt_ping_timer_expired] 0-appian-client-2: server 10.125.10.102:49152 has not responded in the last 42 seconds, disconnecting." and the client becomes read-only.
17:37 gluster-newbie The way we fixed it was that we changed the network.ping-timeout to be a much larger number.  Another way that worked was we changed the "performance.io-thread-count" to be 64.
17:38 gluster-newbie Changing the network.ping-timeout sounds more dangerous than changing the "performance.io-thread-count". Does that sound right?
17:39 gluster-newbie And are there any negatives to changing the io-thread-count other than a possible increase in CPU usage?
17:45 JoeJulian gluster-newbie: you can certainly lenthen the ping-timeout. I'd be looking at why the server becomes unresponsive.
17:46 JoeJulian *lengthen*
17:47 JoeJulian And I haven't been able to find any benefit or detriment to io-thread-count in my tests.
17:52 gluster-newbie Sounds good, JoeJulian.  My guess is that all the threads are taken due to the large file copying and therefore, when the client pings to check if the server is there, it doesn't get a response within 42 seconds, so therefore it decides to disconnect.  The disconnect happens to two servers out of three, so therefore, the client doesn't have quorum and turns to read-only mode.  Increasing the ping-timeout gives the c
17:55 gluster-newbie Increasing the ping-timeout probably just gives the server more time to finish some of the copying so that it can free up a single thread to respond to the client's ping.
17:55 muneerse2 joined #gluster
17:55 derjohn_mob joined #gluster
17:55 gluster-newbie Increasing the thread count somewhat increases the chance that a thread is available when a ping request occurs from the client
17:56 JoeJulian But pings shouldn't be part of the io-thread queue.
17:57 JoeJulian Are you using deadline or noop?
18:00 JoeJulian Also look at adjusing disk caching: https://lonesysadmin.net/2013/12/22/better-linux-disk-caching-performance-vm-dirty_ratio/
18:00 glusterbot Title: Better Linux Disk Caching & Performance with vm.dirty_ratio (at lonesysadmin.net)
18:02 ben453 joined #gluster
18:04 unclemarc joined #gluster
18:04 guhcampos_ joined #gluster
18:04 gluster-newbie I agree that the pings probably shouldn't be a part of the io-thread queue.  It looks like it's deadline
18:12 JoeJulian Yeah, just double checked. Pings are part of the rpc library and are implemented by the client and server translators (the translators that actually interface with the network). They never get to the performance.io-thread translator.
18:12 rafi joined #gluster
18:17 gluster-newbie Somewhat old post, but it's possible that we are having similar issues with this: https://bugzilla.redhat.com/show_bug.cgi?id=1096729
18:17 glusterbot Bug 1096729: low, urgent, ---, rjoseph, CLOSED DEFERRED, Disconnects of peer and brick is logged while snapshot creations were in progress during IO
18:18 gluster-newbie Somehow the bricks are under such heavy load that they can't respond to a client ping
18:18 glusterbot gluster-newbie: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
18:18 JoeJulian Hush glusterbot.
18:18 JoeJulian Someone should fix that regex match...
18:18 * JoeJulian looks around for someone else to blame.
18:19 gluster-newbie lol
18:19 JoeJulian That's why I suggested looking at the vm.dirty settings. That can cause unresponsiveness.
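The vm.dirty knobs JoeJulian points at control how much dirty page cache may accumulate before the kernel stalls writers to flush it; during a 16-large-file copy those stalls can easily exceed a 42-second ping window. A sketch (the values are illustrative, not recommendations):

```shell
# Inspect the current dirty-page thresholds:
sysctl vm.dirty_ratio vm.dirty_background_ratio

# Example: lower the thresholds so background writeback starts earlier
# and any forced flush is shorter. Tune for your own workload and RAM size.
sysctl -w vm.dirty_background_ratio=5
sysctl -w vm.dirty_ratio=10
```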
18:32 nishanth joined #gluster
18:41 gluster-newbie *playing around with vm.dirty*
18:42 guhcampos joined #gluster
18:45 B21956 joined #gluster
18:48 rafi joined #gluster
18:48 jiffin joined #gluster
18:54 rafi joined #gluster
19:05 robb_nl joined #gluster
19:17 khurram joined #gluster
19:17 jwd joined #gluster
19:21 khurram hello; need some help; i have glusterfsd running on two raspberry pi; trying to connect to it from Ubuntu 16.04 but failing :|
19:23 khurram gluster version on raspberry pi is 3.5.2; have installed 3.5.2-4 glusterfs-common and glusterfs-client on ubuntu
19:23 khurram when i try to mount; it fails with readv on pi-ip:24007 failed (No data available); any help ?
19:24 JoeJulian khurram: check the glusterd log on the pi: /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
19:24 khurram from ubuntu; telnet to pi-ip 24007 is working!
19:27 khurram joejulia, its saying  0-glusterd: Request received from non-privileged port. Failing request
19:27 JoeJulian NAT?
19:28 khurram yes! i try from another machine not behind nat; thanks for the tip (Y)
19:31 JoeJulian If it were me, I would set up a vpn or vxlan.
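For the record, the usual alternative to a VPN for the "Request received from non-privileged port" failure behind NAT is to allow insecure (>1023) source ports on the server side. The volume name is hypothetical, and option availability may differ across versions:

```shell
# Let brick processes accept connections from unprivileged source ports:
gluster volume set myvol server.allow-insecure on

# glusterd itself performs the same check; add the following line to
# /etc/glusterfs/glusterd.vol on each server, then restart glusterd:
#   option rpc-auth-allow-insecure on
```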
19:35 Wizek joined #gluster
19:46 chirino joined #gluster
20:40 robb_nl joined #gluster
20:48 shyam joined #gluster
20:58 julim joined #gluster
21:07 deniszh joined #gluster
21:26 gbox Does anyone use symbolic links or understand how gluster handles them?  Gluster complains about any link pointing outside the volume, logs it as a "malformed internal link"
21:46 gbox OK maybe this was fixed: http://review.gluster.org/#/c/10999/6//COMMIT_MSG
21:47 glusterbot Title: Gerrit Code Review (at review.gluster.org)
21:54 JoeJulian gbox: Looks like the bug that patch fixed never made it to release, so you should have never encountered it.
22:04 gbox JoeJulian: Thanks I didn't understand that commit message.  Symbolic links have been an ongoing issue with gluster.  The posix translator logs errors for every symbolic link pointing outside the volume.
22:04 JoeJulian I've never had a problem using symbolic links, but I also ignore logs unless I have a problem or they throw an unexpected error.
22:05 gbox At the time the symbolic links are created, I believe
22:08 gbox JoeJulian: Ha that is hilarious.  I'll take your advice!
22:36 F2Knight joined #gluster
22:56 Hanefr joined #gluster
22:58 pampan joined #gluster
22:58 johnmilton joined #gluster
23:00 pampan Hi guys. I'm stuck healing some files. I have all the bricks mounted; I stat the file, but on one of the bricks the file is only a 0-sized file. On the other two bricks I have, it's ok. Any clue of what can be wrong here? I'm using 3.5.7, but updating is not an option.
23:02 pampan Interestingly, only the files that were shown with the full path by 'gluster volume heal myvolume info' weren't able to be healed. I was able to heal the ones that were listed as gfid.
23:04 john51 joined #gluster
23:04 pampan The output also shows on those files: Possibly undergoing heal
23:04 pampan but it's not true
23:05 johnmilton joined #gluster
23:07 nhayashi joined #gluster
23:09 guhcampos joined #gluster
23:10 jbrooks_ joined #gluster
23:32 glustin joined #gluster
23:33 JoeJulian pampan: is the brick with the 0 sized file full?
23:51 pampan JoeJulian: it's not
23:51 pampan What I did was mv the files away from the volume and put them back again
23:51 pampan that worked
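Before resorting to moving files out of the volume and back, it may be worth triggering a full self-heal crawl and re-checking; the volume name follows pampan's example above:

```shell
# Ask the self-heal daemon to crawl the whole volume rather than only
# the entries already in the heal queue:
gluster volume heal myvolume full

# Then re-check what is still pending:
gluster volume heal myvolume info
```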
23:57 jbrooks_ joined #gluster
23:58 jbrooks joined #gluster
