IRC log for #gluster, 2017-02-15


All times shown according to UTC.

Time Nick Message
00:08 pjrebollo joined #gluster
00:58 ankitr joined #gluster
00:59 shdeng joined #gluster
01:02 ankitr joined #gluster
01:04 cliluw joined #gluster
01:06 kramdoss_ joined #gluster
01:39 haomaiwang joined #gluster
01:51 arpu joined #gluster
02:06 jbrooks joined #gluster
02:07 Gambit15 joined #gluster
02:11 ankitr joined #gluster
02:13 derjohn_mobi joined #gluster
02:13 haomaiwang joined #gluster
02:14 ahino joined #gluster
02:16 pjrebollo joined #gluster
02:23 PaulCuzner joined #gluster
02:23 PaulCuzner left #gluster
02:31 squizzi joined #gluster
02:48 ilbot3 joined #gluster
02:48 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:49 RameshN joined #gluster
02:50 squizzi joined #gluster
03:03 d4n13L_ joined #gluster
03:03 jocke- joined #gluster
03:05 scubacuda_ joined #gluster
03:05 kkeithle joined #gluster
03:05 d-fence__ joined #gluster
03:06 atrius_ joined #gluster
03:06 sloop- joined #gluster
03:06 rwheeler_ joined #gluster
03:06 rideh- joined #gluster
03:06 ashka` joined #gluster
03:06 jarbod__ joined #gluster
03:07 arif-ali_ joined #gluster
03:07 shruti` joined #gluster
03:07 Acinonyx joined #gluster
03:07 tg2_ joined #gluster
03:07 bfoster joined #gluster
03:08 shdeng joined #gluster
03:09 john51 joined #gluster
03:11 marlinc joined #gluster
03:13 haomaiwang joined #gluster
03:14 ankitr joined #gluster
03:17 steveeJ joined #gluster
03:25 kramdoss_ joined #gluster
03:34 kdhananjay joined #gluster
03:35 ankitr joined #gluster
03:37 nbalacha joined #gluster
03:44 arpu joined #gluster
03:48 gyadav_ joined #gluster
03:49 arpu joined #gluster
03:52 magrawal joined #gluster
03:54 _ndevos joined #gluster
03:54 _ndevos joined #gluster
04:05 skumar joined #gluster
04:07 skumar_ joined #gluster
04:12 atinm joined #gluster
04:13 haomaiwang joined #gluster
04:16 rafi joined #gluster
04:21 msvbhat joined #gluster
04:22 ppai joined #gluster
04:24 rafi1 joined #gluster
04:26 itisravi joined #gluster
04:26 rafi joined #gluster
04:28 pjrebollo joined #gluster
04:33 sanoj joined #gluster
04:37 nbalacha joined #gluster
04:38 mb_ joined #gluster
04:45 apandey joined #gluster
04:47 ankitr joined #gluster
04:55 buvanesh_kumar joined #gluster
05:08 BitByteNybble110 joined #gluster
05:11 jiffin joined #gluster
05:13 ndarshan joined #gluster
05:13 skoduri joined #gluster
05:23 unlaudable joined #gluster
05:26 Prasad joined #gluster
05:28 susant joined #gluster
05:31 karthik_us joined #gluster
05:34 prasanth joined #gluster
05:41 Humble joined #gluster
05:44 jiffin1 joined #gluster
05:48 riyas joined #gluster
05:50 nbalacha joined #gluster
05:55 sbulage joined #gluster
06:02 susant joined #gluster
06:02 msvbhat joined #gluster
06:12 sona joined #gluster
06:13 ankitr_ joined #gluster
06:14 Saravanakmr joined #gluster
06:17 Philambdo joined #gluster
06:21 Shu6h3ndu joined #gluster
06:24 faizy joined #gluster
06:26 rafi joined #gluster
06:33 Karan joined #gluster
06:41 kramdoss_ joined #gluster
06:41 mrpops2ko joined #gluster
06:48 buvanesh_kumar joined #gluster
06:50 nthomas joined #gluster
06:50 Saravanakmr joined #gluster
07:01 poornima_ joined #gluster
07:01 buvanesh_kumar joined #gluster
07:01 mhulsman joined #gluster
07:03 jkroon joined #gluster
07:04 k4n0 joined #gluster
07:06 saintpablos joined #gluster
07:08 jiffin1 joined #gluster
07:09 ashiq joined #gluster
07:20 mhulsman joined #gluster
07:23 mhulsman1 joined #gluster
07:29 derjohn_mobi joined #gluster
07:31 jtux joined #gluster
07:31 unlaudable joined #gluster
07:46 R0ok_ joined #gluster
08:05 mhulsman joined #gluster
08:06 arpu joined #gluster
08:08 [diablo] joined #gluster
08:20 mhulsman joined #gluster
08:23 Wizek_ joined #gluster
08:27 joshin joined #gluster
08:28 kramdoss_ joined #gluster
08:32 fsimonce joined #gluster
08:33 rjoseph joined #gluster
08:36 sanoj joined #gluster
08:43 ShwethaHP joined #gluster
08:45 MikeLupe joined #gluster
08:51 apandey joined #gluster
09:04 hgowtham joined #gluster
09:13 panina joined #gluster
09:15 saintpablo joined #gluster
09:19 pulli joined #gluster
09:20 nh2 joined #gluster
09:20 derjohn_mobi joined #gluster
09:29 shutupsquare joined #gluster
09:31 ashiq joined #gluster
09:32 pulli joined #gluster
09:53 kotreshhr joined #gluster
09:54 mdavidson joined #gluster
10:02 jiffin1 joined #gluster
10:07 R0ok__ joined #gluster
10:07 mhulsman joined #gluster
10:09 nh2 joined #gluster
10:26 Gambit15 joined #gluster
10:35 jiffin1 joined #gluster
10:41 msvbhat joined #gluster
10:53 Seth_Karlo joined #gluster
10:53 Seth_Karlo joined #gluster
10:58 Seth_Kar_ joined #gluster
11:03 kotreshhr left #gluster
11:07 nh2 joined #gluster
11:18 nishanth joined #gluster
11:22 mhulsman joined #gluster
11:39 pjrebollo joined #gluster
11:41 pjreboll_ joined #gluster
11:46 atinm joined #gluster
12:02 jdarcy joined #gluster
12:04 shyam joined #gluster
12:11 shutupsq_ joined #gluster
12:12 pjrebollo joined #gluster
12:13 derjohn_mob joined #gluster
12:18 atinm joined #gluster
12:40 pjrebollo joined #gluster
12:42 msvbhat joined #gluster
12:43 sanoj joined #gluster
12:49 ashka joined #gluster
12:49 ashka joined #gluster
13:01 ahino joined #gluster
13:02 skoduri joined #gluster
13:02 jkroon joined #gluster
13:05 msvbhat joined #gluster
13:15 faizy joined #gluster
13:26 atinm joined #gluster
13:37 Saravanakmr joined #gluster
13:40 msvbhat joined #gluster
13:53 unclemarc joined #gluster
13:54 ankitr joined #gluster
13:55 mhulsman joined #gluster
13:59 Gambit15 Hey guys, I'm using Gluster 3.8.8 & trying to decide whether or not I should enable cluster.locking-scheme: granular
14:00 Gambit15 "Description: If this option is set to granular, self-heal will stop being compatible with afr-v1, which helps afr be more granular while self-healing"
14:00 Gambit15 I'm currently testing the following, for VM hosting...
14:00 Gambit15 cluster.granular-entry-heal: enable; cluster.locking-scheme: granular; cluster.eager-lock: enable; features.shard: on; features.shard-block-size: 512MB
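For reference, the options Gambit15 lists map onto "gluster volume set" commands along these lines (the volume name vm-store is a placeholder, not from the log):

    gluster volume set vm-store cluster.granular-entry-heal enable
    gluster volume set vm-store cluster.locking-scheme granular
    gluster volume set vm-store cluster.eager-lock enable
    gluster volume set vm-store features.shard on
    gluster volume set vm-store features.shard-block-size 512MB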
14:03 rwheeler joined #gluster
14:03 Marcux joined #gluster
14:04 Marcux I need some help as my users run into a glusterfs problem
14:04 Gambit15 I'm a bit confused as to whether or not "cluster.granular-entry-heal: enable" & "cluster.locking-scheme: granular" are incompatible in that case? The objective is to only lock & heal the files bit by bit, rather than trying to lock & repair the entire file in one go
14:05 Marcux On my gluster server I have a program; if I run that program from a client it works fine
14:05 itisravi joined #gluster
14:05 Gambit15 ...I've also seen it noted that "cluster.locking-scheme: granular" prevents spurious heals being reported, which I think is currently happening with my volumes...
14:06 Gambit15 Marcux, any more info than "I have a problem"?
14:06 Marcux but if I put that process in the background and start the program again the second instance hangs until the first is finished
14:07 Gambit15 What program? What's your volume configuration & what has it got to do with said program?
14:08 Marcux It was a production program, but I created a dummy program that runs forever to test that I get the same problem.
14:09 Marcux For the moment I am using a single node mounted with fuse on the client
14:10 Marcux The gluster server is a single brick
14:12 Marcux Type: Distribute
14:12 Marcux performance.readdir-ahead: on
14:13 Marcux Otherwise just a standard setup
14:14 Gambit15 So distribute=1?
14:14 Gambit15 Is the client on a different server to the brick?
14:15 Gambit15 What does this program do? Does it execute from the mounted gluster volume, or just access data from it?
14:15 Marcux I run it like this:
14:16 Marcux I have a client and mount the glusterfs with: mount.glusterfs ip.addr:/volumename /mnt/dir
14:17 Marcux In /mnt/dir I have a program, say dummy, so I run it with: /mnt/dir/dummy &
14:17 side_control joined #gluster
14:17 Marcux Then I start the program again: /mnt/dir/dummy
14:18 Marcux The second time the program hangs and does not start to execute until I do a kill on the first process
14:19 Marcux ...knowing this as my program writes to stdout every second
14:21 Marcux I was able to replicate exactly the same behavior setting up a virt server and a virt client
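A rough shell reconstruction of the test Marcux describes, using the mount command and paths given above (the dummy program here is a shell-script stand-in that prints once a second, so a compiled binary may not behave identically):

    mount.glusterfs ip.addr:/volumename /mnt/dir
    printf '#!/bin/sh\nwhile true; do echo tick; sleep 1; done\n' > /mnt/dir/dummy
    chmod +x /mnt/dir/dummy
    /mnt/dir/dummy &    # first instance runs and prints every second
    /mnt/dir/dummy      # second instance reportedly hangs until the first is killed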
14:22 rwheeler joined #gluster
14:23 Seth_Karlo joined #gluster
14:25 itisravi Gambit15: If all your clients are also on 3.8.8, then you can set cluster.locking-scheme to granular without any issues.
14:25 Gambit15 Marcux, so you're saying you're unable to read the file simultaneously from the same client?
14:25 Gambit15 itisravi, perfect, cheers!
14:26 itisravi Gambit15: afr-v1 code is there only in 3.5 or older, so as long as your gluster bits are 3.6 or newer you should be fine.
14:26 Seth_Karlo joined #gluster
14:27 Gambit15 Marcux, by the way, have you tested doing the same with your "program" running off a local disk, not on the gluster volume?
14:28 Marcux No, I do not have any problems accessing the same file simultaneously, as I have tested reading from the same file at the same time from two processes. The problem is executing the same program simultaneously
14:28 BettyBooop joined #gluster
14:28 Gambit15 itisravi, whilst you're here, could you tell me what the benefit is of smaller v larger shards?
14:28 skoduri joined #gluster
14:29 Gambit15 I commonly see recommendations for both 64MB & 512MB
14:29 Gambit15 For the time being I've chosen 512MB as I thought that'd provide better sequential I/O for my VMs
14:30 skylar joined #gluster
14:30 Marcux Just tested running my program locally from my hard drive, and that works fine. Runs in parallel
14:30 Gambit15 Marcux, try doing the same on a local, non-gluster mount
14:30 Gambit15 ok
14:32 Marcux Is there some sort of setting that prevents this type of execution of programs?
14:33 Gambit15 AFAIK, simply executing a file, loading it into memory, shouldn't request a lock on that file.
14:34 Gambit15 Perhaps statedump could help?
14:34 Gambit15 https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Troubleshooting/#troubleshooting-file-locks
14:34 itisravi Gambit15: From a replication point of view, smaller shard size would mean faster healing time.
14:34 glusterbot Title: Troubleshooting - Gluster Docs (at gluster.readthedocs.io)
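The troubleshooting page linked above works from a statedump of the volume; the rough shape of that check is shown below (the dump directory defaults to /var/run/gluster but is controlled by server.statedump-path, and "volumename" is a placeholder):

    gluster volume statedump volumename
    # then, on the brick nodes, look for held locks in the dump files
    grep -E 'inodelk|entrylk|posixlk' /var/run/gluster/*.dump.*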
14:34 derjohn_mob joined #gluster
14:35 Gambit15 itisravi, but what're the potential downsides? I mean, if there were none, then shards would be configured down to even just 1MB...
14:35 Gambit15 Is my I/O assumption correct in that case?
14:38 itisravi Gambit15: I think sharding is currently supported only for single writer workloads (like the VM use case). kdhananjay would be the right person to answer your queries.
14:40 gem joined #gluster
14:42 Karan joined #gluster
14:42 Gambit15 K, thanks!
14:44 vbellur joined #gluster
14:47 mhulsman joined #gluster
14:54 kpease joined #gluster
14:55 ADynamic joined #gluster
14:55 saintpablo joined #gluster
14:56 vbellur left #gluster
14:57 mbukatov joined #gluster
14:59 nbalacha joined #gluster
15:01 kshlm Gluster Community Meeting is starting now in #gluster-meeting
15:01 vbellur joined #gluster
15:01 kshlm Gluster Community Meeting is starting now in #gluster-meeting
15:01 ADynamic left #gluster
15:02 ira joined #gluster
15:02 mhulsman joined #gluster
15:05 susant left #gluster
15:10 jaank_ joined #gluster
15:12 farhorizon joined #gluster
15:17 vbellur joined #gluster
15:21 ADynamic` joined #gluster
15:22 Gambit15 joined #gluster
15:24 RameshN_ joined #gluster
15:25 scuttle|afk joined #gluster
15:25 ADynamic` left #gluster
15:31 ic0n joined #gluster
15:33 nh2 joined #gluster
15:33 RameshN_ joined #gluster
15:37 Marcux left #gluster
15:38 vbellur joined #gluster
15:53 sanoj joined #gluster
15:55 msvbhat joined #gluster
15:56 wushudoin joined #gluster
15:56 ahino joined #gluster
16:00 k4n0 joined #gluster
16:01 vbellur joined #gluster
16:04 sbulage joined #gluster
16:08 vbellur joined #gluster
16:09 RameshN__ joined #gluster
16:11 Gambit15 Ugh, I tried to snapshot & clone a volume, however gluster reported a problem during the clone process. Now I've got a new volume listed on the node I executed the commands on, but I can't do anything with it (inc. delete it) as gluster complains the volume doesn't exist on the other peers
16:11 RameshN joined #gluster
16:14 Gambit15 Looking through the logs, Gluster logged a few errors about not being able to snapshot the bricks on the other nodes (although the command didn't return any complaints at the time), and the clone failed because of the failed snapshots.
16:14 Gambit15 The snapshot appears in the list on all of the nodes, but the cloned volume only appears on the first node where I executed the command
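To compare what each peer actually knows about the half-created clone, commands along these lines can be run on every node (snapshot and clone names are placeholders):

    gluster snapshot list
    gluster snapshot info snap1
    gluster volume info clone1    # the clone may only show up on the node where the command was run
    gluster peer status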
16:15 skumar joined #gluster
16:25 ahino joined #gluster
16:31 shutupsquare joined #gluster
16:47 XpineX joined #gluster
16:48 jdossey joined #gluster
16:48 vbellur joined #gluster
16:49 Seth_Kar_ joined #gluster
16:49 farhorizon joined #gluster
16:51 skoduri joined #gluster
16:56 msvbhat joined #gluster
16:59 ahino joined #gluster
17:00 shutupsquare Hi, I keep seeing lots of errors paging by on a 3 node cluster: 0-webdata-client-0: remote operation failed: Stale file handle
17:01 shutupsquare I guess I need to understand what a remote operation is, and then why there are so many stale file handles. Can anybody help?
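The "remote operation failed" lines come from the protocol/client translator (webdata-client-0 here) reporting errors returned by a brick, and they end up in the FUSE mount's log. Assuming a mount at /mnt/webdata, a quick way to gauge how often ESTALE comes back and for which files is something like:

    grep 'Stale file handle' /var/log/glusterfs/mnt-webdata.log | tail -n 20
    grep -c 'Stale file handle' /var/log/glusterfs/mnt-webdata.log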
17:09 faizy joined #gluster
17:17 Saravanakmr joined #gluster
17:19 vbellur joined #gluster
17:24 Seth_Karlo joined #gluster
17:26 Seth_Kar_ joined #gluster
17:27 susant joined #gluster
17:39 ivan_rossi left #gluster
17:45 skumar_ joined #gluster
18:01 plarsen joined #gluster
18:05 msvbhat joined #gluster
18:21 Karan joined #gluster
18:23 baber joined #gluster
18:30 vbellur joined #gluster
18:30 farhoriz_ joined #gluster
18:33 alvinstarr joined #gluster
18:35 victori joined #gluster
18:39 vbellur joined #gluster
18:56 irated joined #gluster
19:02 pulli joined #gluster
19:05 vbellur joined #gluster
19:09 baber joined #gluster
19:15 ahino joined #gluster
19:19 shutupsquare joined #gluster
19:21 nh2 joined #gluster
19:40 farhorizon joined #gluster
19:54 pulli joined #gluster
20:10 thatgraemeguy joined #gluster
20:10 thatgraemeguy joined #gluster
20:13 Philambdo joined #gluster
20:21 rafi joined #gluster
20:23 ira joined #gluster
20:30 pjreboll_ joined #gluster
20:34 joshin left #gluster
20:42 mhulsman joined #gluster
20:45 msvbhat joined #gluster
20:52 rafi joined #gluster
20:52 panina joined #gluster
20:55 rafi joined #gluster
21:00 derjohn_mob joined #gluster
21:02 nh2 joined #gluster
21:07 pulli joined #gluster
21:14 jdossey joined #gluster
21:15 vbellur joined #gluster
21:18 a2 joined #gluster
21:28 farhoriz_ joined #gluster
21:43 Philambdo joined #gluster
21:46 vbellur joined #gluster
21:46 nh2 joined #gluster
21:50 rwheeler joined #gluster
21:51 pjrebollo joined #gluster
22:12 ashiq joined #gluster
22:14 Seth_Karlo joined #gluster
22:25 chjohnst joined #gluster
22:27 chjohnst seeing something weird with gluster with a simple stripe of 2 bricks, and a single client.. I can write 300MB files sequentially with FIO but once the files are larger than 400+ the process goes into a D state hanging on fuse_request_send
22:28 chjohnst gluster version is 3.8.8
22:37 JoeJulian chjohnst: ,,(stripe) isn't really recommended. My initial instinct is to think that your client isn't connected to both servers.
22:37 glusterbot chjohnst: (#1) Please see http://joejulian.name/blog/should-i-use-stripe-on-glusterfs/ about stripe volumes., or (#2) The stripe translator is deprecated. Consider enabling sharding instead.
22:38 chjohnst hey Joe well strangely in one terminal where I am hung, I can open another terminal and continue writing files at 300MB sizes
22:39 chjohnst so creating striped volumes, although supported in the command line, will soon be deprecated?
22:43 JoeJulian yes
22:43 JoeJulian shard volumes are much better designed and do what you would expect them to.
22:43 chjohnst ok that is good to know
22:44 chjohnst I'll read up on it.. erasure coding is here to stay?
22:44 JoeJulian That is as well, yes.
22:45 chjohnst cool - my workload is mostly a dozen or so clients reading from large files that are being appended to all day long
22:45 chjohnst bit of sequential access and random read/write
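Following JoeJulian's suggestion, the striped volume could be swapped for a plain two-brick volume with sharding enabled, and the fio test rerun against it; roughly (volume name, brick paths and shard size are placeholders):

    gluster volume create shardvol server1:/bricks/b1 server2:/bricks/b1
    gluster volume set shardvol features.shard on
    gluster volume set shardvol features.shard-block-size 64MB
    gluster volume start shardvol
    mount -t glusterfs server1:/shardvol /mnt/shardvol
    fio --name=seqwrite --directory=/mnt/shardvol --rw=write --bs=1M --size=500M --numjobs=1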
22:59 caitnop joined #gluster
23:04 vbellur joined #gluster
23:05 shutupsq_ joined #gluster
23:08 john51 joined #gluster
23:17 tom[] joined #gluster
23:42 pjrebollo joined #gluster
