IRC log for #gluster, 2016-04-29


All times shown according to UTC.

Time Nick Message
00:07 theron joined #gluster
00:33 Ramereth joined #gluster
00:49 hackman joined #gluster
01:03 shyam joined #gluster
01:04 vmallika joined #gluster
01:12 MugginsM joined #gluster
01:12 EinstCrazy joined #gluster
01:17 chirino joined #gluster
01:17 davpostpunk joined #gluster
01:17 davpostpunk hi and good night from Spain
01:18 davpostpunk i need help with an issue
01:19 amye joined #gluster
01:27 dgandhi joined #gluster
01:29 amye joined #gluster
01:32 MugginsM joined #gluster
01:36 JoeJulian davpostpunk: I just came in here to check the progress of a build I have going at work. If you had asked your question, this wall of text would have been an answer - or at least advice. Unfortunately, by the time you read this I will have left my desk again for the night.
01:53 julim joined #gluster
02:01 EinstCrazy joined #gluster
02:07 DV joined #gluster
02:11 harish joined #gluster
02:12 davpostpunk bad luck for me  JoeJulian
02:13 Lee1092 joined #gluster
02:17 amye left #gluster
02:19 harish joined #gluster
02:19 davpostpunk I have glusterfs-server and glusterfs-client at version 3.7.8 in a 2x2 distributed-replicated setup, and my problem is that we sometimes get load peaks that block the Apache and Tomcat servers. I saw strings like these in my brick log, for example:
02:19 davpostpunk [2016-04-28 07:53:49.237072] E [MSGID: 113097] [posix-helpers.c:598:posix_istat] 0-glstvol01-posix: Failed to create handle path for f50466bf-7565-4c39-a28d-9814e8005293/ [Stale file handle]
02:19 davpostpunk The message "E [MSGID: 113077] [posix-handle.c:282:posix_handle_pump] 0-glstvol01-posix: malformed internal link /var/www/okn_cinfa/custom/images/logos/logo_login_custom.png for /data/glstvol01/.glusterfs/f5/04/f50466bf-7565-4c39-a28d-9814e8005293" repeated 3 times between [2016-04-28 07:53:49.236024] and [2016-04-28 07:53:49.237080]
02:19 davpostpunk [2016-04-28 07:53:49.237083] E [MSGID: 113091] [posix.c:178:posix_lookup] 0-glstvol01-posix: Failed to create inode handle for path <gfid:f50466bf-7565-4c39-a28d-9814e8005293>
02:19 davpostpunk [2016-04-28 07:09:46.159414] I [dict.c:473:dict_get] (-->/usr/lib/x86_64-linux-gnu/glusterfs/3.7.8/xlator/features/access-control.so(posix_acl_setxattr_cbk+0x26) [0x7f28d6abea26] -->/usr/lib/x86_64-linux-gnu/glusterfs/3.7.8/xlator/features/access-control.so(handling_other_acl_related_xattr+0x22) [0x7f28d6abe922] -->/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_get+0xac) [0x7f28e19928cc] ) 0-dict: !this || key=system.posix_acl_access [Invalid argument]
02:19 davpostpunk [2016-04-28 07:09:46.159537] I [dict.c:473:dict_get] (-->/usr/lib/x86_64-linux-gnu/glusterfs/3.7.8/xlator/features/access-control.so(posix_acl_setxattr_cbk+0x26) [0x7f28d6abea26] -->/usr/lib/x86_64-linux-gnu/glusterfs/3.7.8/xlator/features/access-control.so(handling_other_acl_related_xattr+0xb5) [0x7f28d6abe9b5] -->/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_get+0xac) [0x7f28e19928cc] ) 0-dict: !this || key=system.posix_acl_default [Invalid argument]
02:19 davpostpunk [2016-04-28 07:09:46.170686] I [dict.c:473:dict_get] (-->/usr/lib/x86_64-linux-gnu/glusterfs/3.7.8/xlator/features/access-control.so(posix_acl_setxattr_cbk+0x26) [0x7f28d6abea26] -->/usr/lib/x86_64-linux-gnu/glusterfs/3.7.8/xlator/features/access-control.so(handling_other_acl_related_xattr+0x22) [0x7f28d6abe922] -->/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_get+0xac) [0x7f28e19928cc] ) 0-dict: !this || key=system.posix_acl_access [Invalid argument]
02:19 davpostpunk [2016-04-28 07:09:46.170759] I [dict.c:473:dict_get] (-->/usr/lib/x86_64-linux-gnu/glusterfs/3.7.8/xlator/features/access-control.so(posix_acl_setxattr_cbk+0x26) [0x7f28d6abea26] -->/usr/lib/x86_64-linux-gnu/glusterfs/3.7.8/xlator/features/access-control.so(handling_other_acl_related_xattr+0xb5) [0x7f28d6abe9b5] -->/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_get+0xac) [0x7f28e19928cc] ) 0-dict: !this || key=system.posix_acl_default [Invalid argument]
02:20 davpostpunk [2016-04-28 07:13:45.223030] E [MSGID: 113077] [posix-handle.c:282:posix_handle_pump] 0-glstvol01-posix: malformed internal link /var/www/okn_cinfa/custom/images/logos/logo_login_custom.png for /data/glstvol01/.glusterfs/f5/04/f50466bf-7565-4c39-a28d-9814e8005293
02:20 davpostpunk [2016-04-28 07:13:45.223068] E [MSGID: 113097] [posix-helpers.c:598:posix_istat] 0-glstvol01-posix: Failed to create handle path for f50466bf-7565-4c39-a28d-9814e8005293/ [Stale file handle]
02:20 davpostpunk [2016-04-28 07:13:45.223093] E [MSGID: 113091] [posix.c:178:posix_lookup] 0-glstvol01-posix: Failed to create inode handle for path <gfid:f50466bf-7565-4c39-a28d-9814e8005293>
02:20 davpostpunk [2016-04-28 07:13:45.223104] E [MSGID: 113018] [posix.c:196:posix_lookup] 0-glstvol01-posix: lstat on null failed
02:20 davpostpunk [2016-04-28 07:13:45.223121] W [MSGID: 115005] [server-resolve.c:126:resolve_gfid_cbk] 0-glstvol01-server: f50466bf-7565-4c39-a28d-9814e8005293: failed to resolve (Success)
02:20 davpostpunk [2016-04-28 07:13:45.223617] W [MSGID: 115005] [server-resolve.c:126:resolve_gfid_cbk] 0-glstvol01-server: f50466bf-7565-4c39-a28d-9814e8005293: failed to resolve (Success)
02:20 davpostpunk [2016-04-28 07:13:45.224990] E [MSGID: 113037] [posix-handle.c:275:posix_handle_pump] 0-glstvol01-posix: malformed internal link jwplayer.flash.swf for /data/glstvol01/.glusterfs/a1/97/a197ecd5-cb9e-41fb-9a59-6dc865821b42
02:20 davpostpunk [2016-04-28 07:13:45.225044] E [MSGID: 113097] [posix-helpers.c:598:posix_istat] 0-glstvol01-posix: Failed to create handle path for a197ecd5-cb9e-41fb-9a59-6dc865821b42/ [Stale file handle]
02:20 davpostpunk [2016-04-28 07:13:45.225074] E [MSGID: 113037] [posix-handle.c:275:posix_handle_pump] 0-glstvol01-posix: malformed internal link jwplayer.flash.swf for /data/glstvol01/.glusterfs/a1/97/a197ecd5-cb9e-41fb-9a59-6dc865821b42
02:20 davpostpunk [2016-04-28 07:13:45.225091] E [MSGID: 113091] [posix.c:178:posix_lookup] 0-glstvol01-posix: Failed to create inode handle for path <gfid:a197ecd5-cb9e-41fb-9a59-6dc865821b42>
02:20 davpostpunk The message "E [MSGID: 113018] [posix.c:196:posix_lookup] 0-glstvol01-posix: lstat on null failed" repeated 2 times between [2016-04-28 07:13:45.223104] and [2016-04-28 07:13:45.225106]
02:20 davpostpunk [2016-04-28 07:13:45.225119] W [MSGID: 115005] [server-resolve.c:126:resolve_gfid_cbk] 0-glstvol01-server: a197ecd5-cb9e-41fb-9a59-6dc865821b42: failed to resolve (Success)
02:20 davpostpunk [2016-04-28 07:13:45.225679] E [MSGID: 113037] [posix-handle.c:275:posix_handle_pump] 0-glstvol01-posix: malformed internal link jwplayer.flash.swf for /data/glstvol01/.glusterfs/a1/97/a197ecd5-cb9e-41fb-9a59-6dc865821b42
02:20 davpostpunk [2016-04-28 07:13:45.225719] E [MSGID: 113097] [posix-helpers.c:598:posix_istat] 0-glstvol01-posix: Failed to create handle path for a197ecd5-cb9e-41fb-9a59-6dc865821b42/ [Stale file handle]
02:34 SpeeR joined #gluster
02:35 SpeeR why is volume heal volname info healed not supported on gluster 3.7.8?
02:35 SpeeR volume heal <VOLNAME> [enable | disable | full |statistics [heal-count [replica <HOSTNAME:BRICKNAME>]] |info [healed | heal-failed | split-brain] |split-brain {bigger-file <FILE> |source-brick <HOSTNAME:BRICKNAME> [<FILE>]}]
02:36 SpeeR or am I doing it wrong? the volume heal usage lists info healed as an option
02:41 bowhunter joined #gluster
02:42 ahino joined #gluster
02:49 MugginsM joined #gluster
02:55 davpostpunk i'll check it
02:55 davpostpunk SpeeR
02:58 davpostpunk but that alone shouldn't be enough to block the apache and tomcat servers
02:59 davpostpunk i suppose
02:59 davpostpunk what do you think? maybe i need to check another log
03:10 hagarth joined #gluster
03:12 davpostpunk you're right SpeeR, i had an inconsistency with a file, a small one
03:17 SpeeR so it should work? wonder why it's not
03:20 davpostpunk Umm
03:20 davpostpunk sometimes the gluster server blocks the apache/tomcat servers
03:22 davpostpunk I saw some locking messages in the logs, but i think those locks are caused by the quota
03:23 davpostpunk [2016-04-28 07:50:32.618733]  : volume quota glstvol01 list : FAILED : Locking failed on ubuglst003. Please check log file for details.
03:23 davpostpunk [2016-04-28 07:50:34.242378]  : volume quota glstvol01 list : FAILED : Locking failed on ubuglst002. Please check log file for details.
03:23 davpostpunk [2016-04-28 07:50:34.422969]  : volume quota glstvol01 list : FAILED : Locking failed on ubuglst002. Please check log file for details
03:23 davpostpunk [2016-04-28 07:50:15.666717]  : volume quota glstvol01 list : FAILED : Another transaction is in progress for glstvol01. Please try again after sometime.
03:23 davpostpunk [2016-04-28 07:50:15.753729]  : volume quota glstvol01 list : FAILED : Another transaction is in progress for glstvol01. Please try again after sometime.
03:24 davpostpunk what do you think about those messages?
03:24 davpostpunk thanks SpeeR for the fast response
03:25 davpostpunk any idea?
03:26 davpostpunk I forgot to mention, i found some clients with a different glusterfs client version
03:29 kpease joined #gluster
03:34 shubhendu joined #gluster
03:34 davpostpunk [2016-04-27 08:00:15.138313] E [MSGID: 106376] [glusterd-op-sm.c:7553:glusterd_op_sm] 0-management: handler returned: 1
03:34 davpostpunk [2016-04-27 08:00:15.187925] E [MSGID: 106116] [glusterd-mgmt.c:135:gd_mgmt_v3_collate_errors] 0-management: Locking failed on ubuglst002. Please check log file for details.
03:34 davpostpunk [2016-04-27 08:00:15.188029] E [MSGID: 106151] [glusterd-syncop.c:1864:gd_sync_task_begin] 0-management: Locking Peers Failed.
03:34 davpostpunk [2016-04-27 08:00:15.189090] E [MSGID: 106116] [glusterd-mgmt.c:135:gd_mgmt_v3_collate_errors] 0-management: Unlocking failed on ubuglst001. Please check log file for details.
03:34 davpostpunk [2016-04-27 08:00:15.189188] E [MSGID: 106116] [glusterd-mgmt.c:135:gd_mgmt_v3_collate_errors] 0-management: Unlocking failed on ubuglst003. Please check log file for details.
03:35 davpostpunk [2016-04-27 08:00:15.191109] E [MSGID: 106116] [glusterd-mgmt.c:135:gd_mgmt_v3_collate_errors] 0-management: Unlocking failed on ubuglst002. Please check log file for details.
03:39 DV joined #gluster
03:41 MugginsM joined #gluster
03:43 gem joined #gluster
03:45 DV joined #gluster
03:47 ashiq joined #gluster
03:51 kdhananjay joined #gluster
03:54 ashiq joined #gluster
04:01 julim joined #gluster
04:06 RameshN joined #gluster
04:06 MugginsM joined #gluster
04:09 itisravi joined #gluster
04:28 atinm joined #gluster
04:33 shubhendu joined #gluster
04:34 nishanth joined #gluster
04:40 overclk joined #gluster
04:41 poornimag joined #gluster
04:54 poornimag joined #gluster
04:57 gem joined #gluster
04:57 The_Pugilist joined #gluster
05:00 karthik___ joined #gluster
05:08 prasanth joined #gluster
05:11 ndarshan joined #gluster
05:13 mhulsman joined #gluster
05:15 fcoelho joined #gluster
05:17 jiffin joined #gluster
05:21 aravindavk joined #gluster
05:21 Manikandan joined #gluster
05:22 Bhaskarakiran joined #gluster
05:25 Apeksha joined #gluster
05:29 aspandey joined #gluster
05:30 DV joined #gluster
05:32 ppai joined #gluster
05:37 akay if I have files in a distrib-rep volume in the .glusterfs folder with permissions of ------T and no links to them is it safe to delete them? (they're showing up on the volume as split brain)
05:37 glusterbot akay: ----'s karma is now -4
05:38 post-factum akay: yep, as soon as links count == 1
05:39 gowtham joined #gluster
05:39 akay thanks post-factum
05:40 akay does the same go for a file in there with normal permissions too?
05:41 kshlm joined #gluster
05:42 hgowtham joined #gluster
05:42 harish_ joined #gluster
05:44 rafi joined #gluster
05:45 post-factum yes
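A minimal sketch of how one might locate those orphaned gfid entries on a brick before removing anything; the brick path and the sample entry are placeholders, and candidates should be inspected by hand first:

    # Regular files under .glusterfs normally hard-link to the real file,
    # so a link count of 1 marks an orphan (placeholder brick path):
    find /data/brick1/.glusterfs -type f -links 1 \
        ! -path '*/indices/*' ! -path '*/changelogs/*' ! -name health_check

    # Check mode, size and link count of a candidate before deleting it
    # (hypothetical gfid path):
    stat /data/brick1/.glusterfs/aa/bb/aabbcc00-0000-0000-0000-000000000000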
05:46 level7_ joined #gluster
05:50 Saravanakmr joined #gluster
05:50 spalai joined #gluster
05:52 kotreshhr joined #gluster
05:54 atalur joined #gluster
06:01 skoduri joined #gluster
06:01 vmallika joined #gluster
06:04 JoeJulian SpeeR: I may be mistaken, but I think that command was deprecated in favor of the heal statistics suite.
06:06 JoeJulian davpostpunk: I didn't read any of that log. It's way too much to go into an irc chat. Next time, use a web service for pasting, like fpaste.org or ,,(paste). Those last few lines of errors look like a locking problem. Maybe related to mismatched versions, maybe not. If it is, make sure your opversion is low enough to be supported by those older clients.
06:06 glusterbot davpostpunk: For a simple way to paste output, install netcat (if it's not already) and pipe your output like: | nc termbin.com 9999
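For later readers, a hedged sketch of checking (and, once every client is upgraded, raising) the op-version JoeJulian mentions; the number is illustrative:

    # Current cluster op-version, recorded on each server node:
    grep operating-version /var/lib/glusterd/glusterd.info

    # op-version can only be raised, never lowered, so bump it only once
    # all clients support it (30707 is an illustrative 3.7-era value):
    gluster volume set all cluster.op-version 30707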
06:08 JoeJulian I've been avoiding it but more and more often I'm thinking I'm going to need to implement flood protection with glusterbot.
06:12 gowtham joined #gluster
06:12 pur joined #gluster
06:19 jtux joined #gluster
06:22 karnan joined #gluster
06:23 harish_ joined #gluster
06:35 arcolife joined #gluster
06:36 ramky joined #gluster
06:38 [Enrico] joined #gluster
06:38 [Enrico] joined #gluster
07:00 atinm joined #gluster
07:05 ashiq joined #gluster
07:06 Logos01 JoeJulian: If you're still 'round, what is your advice / thoughts on the enabling or disabling of Jumbo Frames for glusterfs?
07:10 fsimonce joined #gluster
07:35 kassav joined #gluster
07:42 semiosis joined #gluster
07:42 abyss^ Hi, I get: "Unable to self-heal permissions/ownership of '/' (possible split-brain). Please fix the file on all backend volumes" and the LA peaks at 9 (on clients). Can someone help me with that?
07:48 atinm joined #gluster
07:48 jri joined #gluster
07:48 mbukatov joined #gluster
07:54 ctria joined #gluster
08:01 Wizek joined #gluster
08:01 gvandeweyer joined #gluster
08:03 gvandeweyer hi, I'm having issues with an upgrade of gluster clients. We went from Ubuntu 10.04 with gluster 3.5.2 (from source) to Ubuntu 14.04 with gluster 3.7.9 (using .deb files).
08:04 gvandeweyer Installation worked, but mounting fails.  The log file mentions "/usr/local/lib/glusterfs/3.5.2/rpc-transport/socket.so" not being available.
08:04 gvandeweyer I uninstalled (make uninstall) and removed all gluster related files I could find from the client machine (locate gluster), before installing.
08:05 gvandeweyer the socket.so file is available under "/usr/lib/x86_64-linux-gnu/glusterfs/3.7.9/rpc-transport/socket.so", but apparently not found. any ideas why?
08:05 karthik___ joined #gluster
08:06 gvandeweyer would it be safe to just create a symlink from 3.7.9 libs to 3.5.2 libs?
08:14 ctria joined #gluster
08:17 Gnomethrower joined #gluster
08:18 marbu joined #gluster
08:24 kshlm joined #gluster
08:30 Gnomethrower joined #gluster
08:32 Wizek joined #gluster
08:32 Slashman joined #gluster
08:34 gvandeweyer update: symlinking the lib folders created other errors
08:34 Pupeno joined #gluster
08:41 itisravi joined #gluster
08:49 aravindavk joined #gluster
08:53 post-factum gvandeweyer:
08:53 post-factum gvandeweyer: no
08:53 post-factum gvandeweyer: you must reinstall gluster properly
08:53 post-factum gvandeweyer: it seems you have missed some executables
08:59 gvandeweyer post-factum: I just seem to have fixed it. Installing from the 3.7.9 sources works, while the 3.7.9.deb didn't.
08:59 rouven joined #gluster
08:59 EinstCrazy joined #gluster
08:59 gvandeweyer possibly something ubuntu related then
08:59 post-factum gvandeweyer: check your /usr/local/bin
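A small sketch of how one could hunt for leftovers from the old source install that make the client still look in /usr/local/lib/glusterfs/3.5.2; package and path names are assumptions for Ubuntu:

    # Which glusterfs binaries are actually first on the PATH?
    which -a glusterfs glusterfsd

    # Anything still living under /usr/local came from the source build:
    ls -l /usr/local/sbin/gluster* /usr/local/lib/glusterfs 2>/dev/null

    # Confirm which package owns the transport the packaged client should load:
    dpkg -S rpc-transport/socket.so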
08:59 gvandeweyer How critical is it anyway that servers and clients run the exact same version?
09:00 puiterwijk left #gluster
09:00 post-factum gvandeweyer: it is preferable to run the same version across the whole cluster, and if not, at least the clients should run the newer version
09:00 post-factum using old clients with new servers is insane :)
09:01 post-factum and do not try to mix different branches, like 3.5 and 3.7
09:01 gvandeweyer we currently had that, since the clients wouldn't upgrade (3.5 vs 3.7)
09:02 gvandeweyer hopefully everything will be more stable now that I can upgrade them as well
09:02 post-factum i always upgrade clients first
09:03 kdhananjay joined #gluster
09:06 F2Knight joined #gluster
09:21 kovshenin joined #gluster
09:25 karthik___ joined #gluster
09:34 Pupeno joined #gluster
09:37 jiffin joined #gluster
09:40 RameshN_ joined #gluster
09:48 karnan joined #gluster
09:50 level7 joined #gluster
09:53 nishanth joined #gluster
09:54 EinstCrazy joined #gluster
09:54 shubhendu joined #gluster
09:57 ndarshan joined #gluster
09:59 skoduri_ joined #gluster
10:02 Debloper joined #gluster
10:04 prasanth joined #gluster
10:08 marbu joined #gluster
10:08 [Enrico] joined #gluster
10:08 [Enrico] joined #gluster
10:09 Pupeno joined #gluster
10:18 karnan joined #gluster
10:20 Gnomethrower joined #gluster
10:26 level7_ joined #gluster
10:43 spalai ndevos++
10:43 glusterbot spalai: ndevos's karma is now 26
10:52 karthik___ joined #gluster
10:52 RameshN_ joined #gluster
10:53 ndarshan joined #gluster
10:54 julim joined #gluster
10:56 prasanth joined #gluster
11:01 skoduri joined #gluster
11:03 chirino_m joined #gluster
11:05 Debloper joined #gluster
11:05 shubhendu joined #gluster
11:10 johnmilton joined #gluster
11:12 nishanth joined #gluster
11:24 robb_nl joined #gluster
11:35 kotreshhr joined #gluster
11:38 EinstCrazy joined #gluster
11:41 shubhendu joined #gluster
11:42 ira joined #gluster
11:44 ira joined #gluster
11:46 skoduri joined #gluster
11:46 chirino joined #gluster
11:51 mpietersen joined #gluster
11:52 shyam joined #gluster
11:58 gem joined #gluster
12:03 unclemarc joined #gluster
12:07 mhulsman joined #gluster
12:11 mhulsman1 joined #gluster
12:32 DV joined #gluster
12:34 Pupeno joined #gluster
12:36 russoisraeli joined #gluster
12:36 ashiq joined #gluster
12:44 Pupeno joined #gluster
13:09 shyam joined #gluster
13:17 kdhananjay joined #gluster
13:21 aravindavk joined #gluster
13:27 mhulsman joined #gluster
13:28 Saravanakmr joined #gluster
13:29 bowhunter joined #gluster
13:36 luizcpg joined #gluster
13:44 Saravanakmr joined #gluster
13:44 DV joined #gluster
13:48 ashiq joined #gluster
13:49 skylar joined #gluster
13:53 ninjaryan joined #gluster
13:56 level7 joined #gluster
13:57 Pupeno joined #gluster
14:03 nbalacha joined #gluster
14:07 spalai joined #gluster
14:08 spalai left #gluster
14:08 spalai joined #gluster
14:12 hchiramm joined #gluster
14:17 jiffin joined #gluster
14:21 volga629 joined #gluster
14:21 mpietersen joined #gluster
14:25 john51_ joined #gluster
14:28 jiffin joined #gluster
14:29 ninjaryan joined #gluster
14:31 rideh- joined #gluster
14:31 pfactum joined #gluster
14:34 ashiq joined #gluster
14:35 julim joined #gluster
14:36 wushudoin joined #gluster
14:37 russoisraeli joined #gluster
14:37 kotreshhr joined #gluster
14:37 nishanth joined #gluster
14:37 ppai joined #gluster
14:37 SpeeR joined #gluster
14:37 dgandhi joined #gluster
14:37 jackdpeterson joined #gluster
14:37 Logos01 joined #gluster
14:37 chirino joined #gluster
14:37 JoeJulian Logos01: Jumbo frames are a must-have for storage clusters regardless of the software defining them.
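For reference, a minimal sketch of enabling and verifying jumbo frames; the interface and peer names are placeholders, and every host and switch on the path must carry the same MTU:

    # Raise the MTU on the storage interface (placeholder name eth0):
    ip link set dev eth0 mtu 9000

    # Verify 9000-byte frames pass unfragmented end to end
    # (8972 = 9000 minus 28 bytes of IP + ICMP headers):
    ping -M do -s 8972 -c 3 storage-peer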
14:41 sc0 joined #gluster
14:45 shyam joined #gluster
14:52 ninjarya1 joined #gluster
14:53 fsimonce joined #gluster
14:57 jiffin joined #gluster
14:59 jackdpeterson Are there any problems with using cachefilesd and NFS mounts w/ GlusterFS? I saw some super old discussions on the topic; however, we found that we needed to switch over to NFS clients yesterday due to the insane amount of load being placed on the GlusterFS servers w/ the fuse client.
15:00 mpietersen joined #gluster
15:00 jackdpeterson I'm just looking to get any edge we can as far as negative directory lookups go ... as well as caching really frequently accessed [php / images / etc.] files.
15:00 * post-factum is wondering why the hell one could ever want to keep cachefiles on remote FS
15:02 jackdpeterson ^^ extremely high read rate, low file change rate / writes, minimizing network round trips on stat's
15:03 jackdpeterson e.g., is cachefilesd needed when using the fsc mount option
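A hedged sketch of what that combination looks like: the fsc mount option only engages FS-Cache if cachefilesd is actually running, and Gluster's built-in NFS server speaks NFSv3 (hostnames, volume and mountpoint are placeholders):

    # Start the local cache daemon first; fsc does nothing without it.
    service cachefilesd start      # or: systemctl start cachefilesd

    # NFSv3 mount of the Gluster NFS export, opted into FS-Cache:
    mount -t nfs -o vers=3,fsc,noatime gluster-server:/myvol /mnt/myvol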
15:10 jiffin joined #gluster
15:11 ctria joined #gluster
15:15 plarsen joined #gluster
15:22 ctria joined #gluster
15:25 russoisraeli joined #gluster
15:26 kdhananjay1 joined #gluster
15:28 spalai joined #gluster
15:29 julim joined #gluster
15:33 bluenemo joined #gluster
15:33 kotreshhr left #gluster
15:36 kpease joined #gluster
15:38 bennyturns joined #gluster
15:39 theron joined #gluster
15:39 theron_ joined #gluster
15:40 squizzi joined #gluster
15:40 theron_ joined #gluster
15:49 ctria joined #gluster
15:51 jri_ joined #gluster
15:52 shyam joined #gluster
15:55 jri joined #gluster
15:58 level7 joined #gluster
16:05 muneerse joined #gluster
16:18 ctria joined #gluster
16:20 shubhendu joined #gluster
16:21 poornimag joined #gluster
16:25 julim joined #gluster
16:29 refj joined #gluster
16:31 timotheus1_ joined #gluster
16:31 refj Will any of the settings found here: https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.1/html/Administration_Guide/Small_File_Performance_Enhancements.html
16:31 glusterbot Title: 13.5. Small File Performance Enhancements (at access.redhat.com)
16:31 refj give any noticeable performance improvement?
16:37 theron joined #gluster
16:47 robb_nl joined #gluster
16:52 bfoster joined #gluster
16:53 raginbaji joined #gluster
16:56 bturner joined #gluster
17:10 bowhunter joined #gluster
17:13 spalai joined #gluster
17:23 karnan joined #gluster
17:23 bennyturns joined #gluster
17:33 plarsen joined #gluster
17:39 spalai left #gluster
17:41 jackdpeterson refj, I didn't see any improvements on our end when we were trying out some of those small file performance changes. Your mileage may vary of course.
17:41 prasanth joined #gluster
17:43 mpietersen joined #gluster
17:49 F2Knight joined #gluster
17:53 F2Knight_ joined #gluster
18:07 CyrilPeponnet joined #gluster
18:11 jlp1 joined #gluster
18:16 karnan joined #gluster
18:32 bennyturns joined #gluster
18:42 theron joined #gluster
18:42 gm_ joined #gluster
18:44 F2Knight joined #gluster
18:48 ashiq joined #gluster
18:51 bennyturns joined #gluster
18:52 hagarth joined #gluster
19:02 skylar joined #gluster
19:27 kovsheni_ joined #gluster
19:28 level7 joined #gluster
19:28 nishanth joined #gluster
19:34 rafi joined #gluster
19:56 hagarth joined #gluster
19:58 ghenry joined #gluster
19:58 ghenry joined #gluster
20:06 valkyr1e joined #gluster
20:22 refj jackdpeterson: Thanks. So the conclusion is basically that glusterfs, as it is, is not suited to efficiently handling small files (let's say a couple of thousand). Perhaps this should be stated more clearly in the documentation.
20:23 refj jackdpeterson: That last part was not directed at you.
20:34 chirino joined #gluster
20:38 shyam left #gluster
20:39 JoeJulian refj: Or, perhaps, you should define what your need is and just choose the tools that satisfy your design requirements instead of trying to make it sound like it's someone else's fault.
20:42 JoeJulian btw, this is an open source project and the documentation is on github. If you feel you can improve it, please send a pull request.
20:45 dgandhi joined #gluster
20:46 dgandhi joined #gluster
20:47 refj JoeJulian: I'm sorry and you are right of course. My comment was uncalled for. My small files performance tests should have been more thorough, before choosing glusterfs.
20:47 JoeJulian There are things you can do to mitigate some of the performance hits associated with running a clustered filesystem.
20:47 dgandhi joined #gluster
20:48 JoeJulian Do your files change frequently?
20:49 dgandhi joined #gluster
20:49 JoeJulian If they don't, mounting with nfs can give you improved performance as the kernel will cache enough information to avoid some network round trips. This can cause stale metadata but if that's not a concern that might help.
20:49 JoeJulian And, of course, if this is for serving ,,(php) that article provides best practices.
20:49 glusterbot (#1) php calls the stat() system call for every include. This triggers a self-heal check which makes most php software slow as they include hundreds of small files. See http://joejulian.name/blog/optimizing-web-performance-with-glusterfs/ for details., or (#2) It could also be worth mounting fuse with glusterfs --attribute-timeout=HIGH --entry-timeout=HIGH --negative-timeout=HIGH --fopen-keep-cache
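A minimal sketch of glusterbot's second suggestion as an actual mount command; the 600-second timeouts are illustrative (pick values matching how stale you can tolerate), and server, volume and mountpoint are placeholders:

    glusterfs --volfile-server=gluster-server --volfile-id=myvol \
        --attribute-timeout=600 --entry-timeout=600 --negative-timeout=600 \
        --fopen-keep-cache /mnt/myvol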
20:50 dgandhi joined #gluster
20:51 refj No, not really. But my colleagues find the performance when writing many small files shocking. So, the problem as I see it is only when new files are written or when many of the files need to be manipulated in some way. Actually the access when using a full path is very fast.
20:51 refj Thanks for the article.
20:51 dgandhi joined #gluster
20:51 refj I'll test those mount options right away.
20:52 JoeJulian Can they alter their application design to avoid closing the files? It's really the lookup() operation that takes the most time.
20:52 refj I think they probably can!
20:53 JoeJulian :+1:
20:54 theron joined #gluster
20:55 nottc joined #gluster
21:01 jackdpeterson @refj we ended up using NFS mounting -- as of yesterday, a recent change due to high load. The servers that I manage are in fact hosting millions of small (php, images, etc) files and performance is fairly reasonable. PHP's performance can be substantially improved using opcache instead of APC.
21:03 JoeJulian really? doesn't that just precompile opcode files? I would have expected the lookup latency to be just as bad on the op files.
21:03 jackdpeterson opcache throws things in ram
21:03 refj jackdpeterson: The reason I want to use the fuse client is because of its built-in HA feature. I have considered nfs, but then I don't see the point of using glusterfs.
21:04 jackdpeterson what I'm saying is that relying on clustered storage or remote storage for hosting application code is a bad practice to begin with. I've lived with various forms of network storage for the last 3 years and ALL of the solutions have tradeoffs.
21:04 jackdpeterson The problem isn't targeted at GlusterFS or NFS ... your concept of 'highly available' -- sure, that's true ... but network hops are expensive.
21:05 jackdpeterson so my point is that you can host your code on GlusterFS, S3, NFS, or whatever -- but then have that code pulled / pushed to a box as it's updated / instantiated.
21:06 jackdpeterson that way you NEVER run the risk of network bottlenecks as load scales, you never run the risk of a maintenance induced outage, and your biggest risk is managed at deployment -- something that you can control from end-to-end as to when and how.
21:06 jackdpeterson So, really -- consider your application design and deployment. then use network storage where appropriate -- storing objects / files like images and content. then scale from there.
21:06 bowhunter joined #gluster
21:09 mpietersen joined #gluster
21:09 jackdpeterson @JoeJulian -- 'The Zend OPcache provides faster PHP execution through opcode caching and optimization. It improves PHP performance by storing precompiled script bytecode in the shared memory.'
21:09 jackdpeterson We ran analysis a while back and Optimizer+ made PHP tolerable for the most part
21:10 jackdpeterson doing A/B on APC vs OpCache ... APC didn't work well at all due to the stats ... OpCache (Optimizer+) became usable.
21:10 ctria joined #gluster
21:11 JoeJulian I stopped running php several years ago for most things. The one thing I do, zoneminder, I use apc. You can turn off stat checking - which I always have. I don't think opcache was viable when I wrote that article though.
21:11 jackdpeterson it got merged into mainline PHP a little while ago and became the de facto version. APC is effectively defunct/deprecated now
21:12 JoeJulian I should check the source and see if they're still throwing away failed write results.
21:12 jackdpeterson @refj -- if you are using PHP, keep in mind opcache.revalidate_freq (default "2") ... this should be as high as you can tolerate
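A hedged sketch of the opcache settings being discussed, dropped into an Ubuntu-era php5 config file; the path and the values are assumptions, not recommendations:

    cat >> /etc/php5/mods-available/opcache.ini <<'EOF'
    opcache.enable=1
    opcache.validate_timestamps=1
    ; recheck file mtimes at most every 300 seconds instead of every 2
    opcache.revalidate_freq=300
    EOF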
21:14 refj jackdpeterson: It is not php, the files contain nothing but configuration text, syncing them to local mount points is not a bad idea and I've thought about it. But since the setup only involves a handful of machines I might just go with lsyncd instead.
21:16 jackdpeterson sounds interesting. one other thing -- at least as far as configuration text goes ... if you can wrap that in gzip and handle that in your application logic ... you'll save a TON of bandwidth over the wire.
21:16 jackdpeterson I have something similar for another project ... just using S3
21:16 jackdpeterson assuming that tradeoff between compression / decompression makes sense vs network time
21:21 post-factum opcache did the trick with caching for us
21:21 refj the proprietary devices which rely on the configuration files are not smart enough to handle compression / decompression.
21:21 post-factum we use gluster to store php web crap, and without opcache it performs awful
21:24 ctria joined #gluster
21:26 refj The funny thing about it all is that the application and devices do not suffer from the performance impact; as I see it, the problem is the frustration some humans feel when interacting with a "high latency" filesystem. Like "This shouldn't take this long, when it's so fast when I do it on my local disk."
21:31 bennyturns joined #gluster
21:31 nottc joined #gluster
21:36 JoeJulian configuration files? Why not etcd?
21:37 morbius42 joined #gluster
21:42 morbius42 hello all. i have 2 files that have been showing "Possibly undergoing heal" for over 24 hours now and not sure how to resolve this
21:42 morbius42 logs don't show any errors
21:43 morbius42 wondering if anyone could point me in a direction?
21:46 refj JoeJulian: Etcd looks very interesting and I will certainly read some more about it.
21:49 JoeJulian They're also possibly just very active files that are not undergoing heal. I don't have enough time to go into detail about how gluster marks and unmarks files for update at the moment, but that's a possibility.
21:51 morbius42 both files are VM QCOW2 volumes that are OS drives. Low IO, but just in case I stopped the VMs and I am still having the issue
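A short, hedged way to see whether those qcow2 images really have pending heals; the volume name and brick path are placeholders:

    # What each brick still thinks needs healing:
    gluster volume heal myvol info

    # On a brick, non-zero trusted.afr.* counters on the file mean real
    # pending operations rather than just recent activity:
    getfattr -m . -d -e hex /data/brick1/vms/guest-os.qcow2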
22:03 ctria joined #gluster
22:08 DV joined #gluster
22:13 johnmilton joined #gluster
22:24 cliluw joined #gluster
22:24 ctria joined #gluster
22:28 davpostpunk joined #gluster
22:31 johnmilton joined #gluster
23:03 MikeLupe joined #gluster
23:19 davpostpunk Thanks so much JoeJulian, it's been a long time since I was last on IRC channels, eleven years ago more or less. thanks for the info and for the support
23:21 davpostpunk i have clients with an old gluster version, used only for sporadic dump copies from one site to another, but it's very possible you're right about my issue
23:21 davpostpunk thanks so much, and i'll keep in mind your tips about the IRC channels. thanks and great support
23:30 sloop joined #gluster
23:30 sloop joined #gluster
