
IRC log for #gluster, 2014-10-22


All times shown according to UTC.

Time Nick Message
00:22 bala joined #gluster
00:25 MacWinner joined #gluster
00:25 davemc joined #gluster
00:25 suliba joined #gluster
00:25 side_control joined #gluster
00:25 mat1010 joined #gluster
00:25 glusterbot joined #gluster
00:25 tty00 joined #gluster
00:25 rturk|afk joined #gluster
00:25 hybrid512 joined #gluster
00:25 ackjewt joined #gluster
00:25 skippy joined #gluster
00:25 frankS2 joined #gluster
00:25 Andreas-IPO joined #gluster
00:25 johnmwilliams__ joined #gluster
00:25 johnmark joined #gluster
00:25 Diddi joined #gluster
00:25 abyss^^ joined #gluster
00:25 juhaj joined #gluster
00:25 dockbram joined #gluster
00:25 urban joined #gluster
00:25 tomased joined #gluster
00:25 Rydekull joined #gluster
00:26 doekia joined #gluster
00:26 MugginsM joined #gluster
00:26 harish joined #gluster
00:26 sijis joined #gluster
00:26 UnwashedMeme joined #gluster
00:26 ccha joined #gluster
00:26 cmtime joined #gluster
00:26 Kins joined #gluster
00:26 n1x0n joined #gluster
00:26 sickness joined #gluster
00:26 eightyeight joined #gluster
00:26 morse joined #gluster
00:26 frankS2 joined #gluster
00:28 chirino joined #gluster
01:07 chirino joined #gluster
01:09 rjoseph joined #gluster
01:10 kshlm joined #gluster
01:18 meghanam joined #gluster
01:43 kdhananjay joined #gluster
01:46 kdhananjay1 joined #gluster
01:59 shubhendu joined #gluster
02:16 calisto joined #gluster
02:24 calisto yo, is this the only problem? what type of file is this?
02:25 calisto and the purpose of the file?
02:27 durzo joined #gluster
02:28 durzo can gluster-client 3.4.4 mount a gluster server of 3.5.2 ?
02:29 durzo wondering if i can upgrade my backend first, frontend later rather than bring the whole thing offline (i.e rolling upgrade)
02:29 MugginsM I did that 3.3->3.4 and it was fine
02:29 MugginsM don't know about 3.4->3.5
03:04 kshlm joined #gluster
03:20 David_H__ joined #gluster
03:28 David_H_Smith joined #gluster
03:38 David_H__ joined #gluster
03:46 harish joined #gluster
03:46 bala joined #gluster
03:47 fubada joined #gluster
03:47 fubada hi purpleidea
03:52 AaronGr joined #gluster
03:55 David_H_Smith joined #gluster
03:56 purpleidea fubada: o hai http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
03:57 fubada heh
03:57 purpleidea fubada: sup
03:57 itisravi joined #gluster
03:57 fubada so just wanted to let ya know that puppet-gluster is working, although im hitting stuff in the FAQ quite a bit
03:57 fubada invalid relationship and missing facts and such
03:57 fubada but i think I finally got it
03:58 fubada how would you create a 2x2 gluster? count => 2 and replica => 2?
03:59 fubada that created Number of Bricks: 4 x 2 = 8
03:59 fubada ;/
04:00 purpleidea fubada: wait what?
04:01 fubada im trying to use gluster::simple to create a single volume in a 2x2 across 4 gluster nodes
04:01 fubada im doing count => 2 and replica => 2, which resulted in a 4 x 2
04:01 purpleidea fubada: four hosts?
04:01 fubada yes
04:02 purpleidea paste your gluster::simple class and the output from gluster volume status please
04:02 purpleidea (no 'please' in the command) ;)
04:03 fubada what Id do by hand typically in my env is: 'gluster volume create reports replica 2 transport tcp gls001:/appdata/bricks/reports gls002:/appdata/bricks/reports' and then 'gluster volume add-brick reports gls003:/appdata/bricks/reports gls004:/appdata/bricks/reports', which would give me a 2x2
04:03 fubada one sec
04:04 fubada https://gist.github.com/aamerik/402061312ed47386ee6b
04:04 glusterbot Title: gist:402061312ed47386ee6b (at gist.github.com)
04:06 purpleidea fubada: count => 2 means two bricks per host. so 2x4hosts = 8
04:06 purpleidea fubada: looks like it is working as designed
04:06 fubada okay, i agree. do you know how I can create 2x2
04:06 purpleidea fubada: two hosts?
04:07 fubada thanks :P
04:07 purpleidea yw
04:07 purpleidea @next
04:07 glusterbot purpleidea: Error: You must be registered to use this command. If you are already registered, you must either identify (using the identify command) or add a hostmask matching your current hostmask (using the "hostmask add" command).
04:07 purpleidea ,,(next)
04:07 fubada great module btw I really appreciate it
04:07 purpleidea fubada: no problem :) thanks
04:08 glusterbot Another satisfied customer... NEXT!
04:08 purpleidea JoeJulian: glusterbot is a bit slow today :P
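
For reference, a minimal sketch of the 2x2 layout discussed above, done in one shot with the gluster CLI instead of the create + add-brick sequence fubada pasted; the hostnames and brick paths are the ones from the chat, but the one-shot form is an assumption about the intended layout rather than something shown in the log. (By purpleidea's explanation, the gluster::simple equivalent would be count => 1 with replica => 2 over the four hosts, but that too is an inference.)

    # One-shot equivalent of "create replica 2 ... then add-brick ..." for a
    # 2x2 distributed-replicated volume; hosts/paths from the chat, unverified.
    gluster volume create reports replica 2 transport tcp \
        gls001:/appdata/bricks/reports gls002:/appdata/bricks/reports \
        gls003:/appdata/bricks/reports gls004:/appdata/bricks/reports
    gluster volume start reports
    gluster volume info reports   # should show "Number of Bricks: 2 x 2 = 4"
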
04:10 smohan joined #gluster
04:18 saurabh joined #gluster
04:28 spandit joined #gluster
04:31 JoeJulian @reconnect
04:31 glusterbot joined #gluster
04:35 rjoseph joined #gluster
04:37 deepakcs joined #gluster
04:38 aravindavk joined #gluster
04:39 lalatenduM joined #gluster
04:42 rafi1 joined #gluster
04:43 Rafi_kc joined #gluster
04:45 nbalachandran joined #gluster
04:46 anoopcs joined #gluster
04:48 jiffin joined #gluster
04:49 ramteid joined #gluster
04:53 meghanam joined #gluster
04:53 meghanam_ joined #gluster
04:55 ppai joined #gluster
05:23 prasanth_ joined #gluster
05:26 David_H_Smith joined #gluster
05:27 David_H_Smith joined #gluster
05:39 atalur joined #gluster
05:46 lalatenduM joined #gluster
06:01 Darakian joined #gluster
06:01 Darakian hello, anyone home?
06:01 rgustafs joined #gluster
06:01 Darakian Well, if anyone gets this there's a typo in the admin guide
06:02 Darakian page 52 here: http://gluster.org/wp-content/uploads/2012/05/Gluster_File_System-3.3.0-Administration_Guide-en-US.pdf
06:02 Darakian second code block reads
06:02 Darakian gluster volume replace-brick test-volume server3:/exp3 server5:exp5 pause
06:02 Darakian Replace brick pause operation successful
06:02 Darakian should read: gluster volume replace-brick test-volume server3:/exp3 server5:/exp5 pause
06:03 Darakian not gonna open a bug on this because I don't have a bugzilla account and I meh
06:03 Darakian but figured I should drop it somewhere
06:04 kumar joined #gluster
06:07 atinmu joined #gluster
06:07 nshaikh joined #gluster
06:07 kdhananjay joined #gluster
06:10 aravindavk joined #gluster
06:16 rgustafs joined #gluster
06:16 bkolden joined #gluster
06:18 hagarth joined #gluster
06:18 cjanbanan joined #gluster
06:18 David_H_Smith joined #gluster
06:20 RaSTar joined #gluster
06:23 aravindavk joined #gluster
06:31 JoeJulian Darakian: Thanks.
06:31 JoeJulian @rtfm
06:32 glusterbot JoeJulian: Read the fairly-adequate manual at http://gluster.org/community/documentation//index.php/Main_Page
06:32 JoeJulian hmm
06:32 JoeJulian That's not what I was looking for...
06:32 JoeJulian @forget rtfm
06:32 glusterbot JoeJulian: The operation succeeded.
06:34 Fen2 joined #gluster
06:35 JoeJulian @learn rtfm as Read the fairly-adequate manual at https://github.com/gluster/glusterfs/tree/master/doc/admin-guide/en-US/markdown
06:35 glusterbot JoeJulian: The operation succeeded.
06:35 atalur joined #gluster
06:35 JoeJulian for me: file a bug
06:35 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
06:42 Philambdo joined #gluster
06:51 nellemann left #gluster
06:53 ctria joined #gluster
06:53 glusterbot New news from newglusterbugs: [Bug 1155421] Typo in replace-brick...pause example <https://bugzilla.redhat.com/show_bug.cgi?id=1155421>
06:58 soumya joined #gluster
07:05 hchiramm_ joined #gluster
07:05 Slydder joined #gluster
07:06 Slydder morning all
07:11 ppai joined #gluster
07:11 hchiramm_ joined #gluster
07:12 n1x0n left #gluster
07:15 cjanbanan joined #gluster
07:17 soumya joined #gluster
07:19 David_H_Smith joined #gluster
07:19 ekuric joined #gluster
07:38 ricky-ticky1 joined #gluster
08:03 Slashman joined #gluster
08:04 anands joined #gluster
08:05 cjanbanan joined #gluster
08:06 Debolaz joined #gluster
08:07 Debolaz I'm having problems with GlusterFS processes not being very stable... Over a period of like say a month, they tend to die off, and do not restart unless I reboot the node several times (!)
08:08 ppai joined #gluster
08:09 liquidat joined #gluster
08:09 Slydder joined #gluster
08:20 David_H_Smith joined #gluster
08:21 Slydder ndevos: semiosis: take a look at this and tell me what you think: https://dpaste.de/4Pkd
08:21 glusterbot Title: dpaste.de: Snippet #288109 (at dpaste.de)
08:23 ndevos Slydder: not sure what you're after, but you do a lot of LOOKUPs :)
08:24 Slydder just added the apc config
08:24 Slydder apc.stat is off
08:25 Slydder yeah. but how to kill the lookups. was thinking about cachefs but not sure if it will work with glusterfs.fuse
08:26 ndevos cachefs is not available for fuse yet, it is a work in progress
08:26 Slydder and if it works will it help with the lookups. not sure on that point
08:26 ndevos no, I doubt it helps with that too
08:27 ndevos apc is related to webservers with php, right?
08:27 Slydder yeah
08:28 ndevos I guess you have seen the ,,(php) blog post?
08:28 glusterbot (#1) php calls the stat() system call for every include. This triggers a self-heal check which makes most php software slow as they include hundreds of small files. See http://joejulian.name/blog/optimizing-web-performance-with-glusterfs/ for details., or (#2) It could also be worth mounting fuse with glusterfs --attribute-timeout=HIGH --entry-timeout=HIGH --negative-timeout=HIGH
08:28 glusterbot --fopen-keep-cache
08:30 Slydder maybe fopen-keep-cache could help there.
08:32 elico joined #gluster
08:35 ndevos well, and the entry-timeout
08:38 Philambdo joined #gluster
08:40 mbukatov joined #gluster
08:52 Debolaz Hmm... I need to configure some sort of monitor that alerts me when problems occur with GlusterFS.... Problem is, GlusterFS reports a lot of things, and some are completely irrelevant like "file healed on node blah", and some are a little more relevant, like "brick won't start". But it doesn't really give any obvious way to hook into the important messages... Are there any tutorials on this?
08:53 Philambdo joined #gluster
08:53 Debolaz Actually, having an easy way to get *important* messages from any of the subsystems would be nice.
08:56 Debolaz And the messages I do get sometimes feel a bit contradictory... gluster volume status on any node says the brick for node X is offline. The brick log for node X says it's up and successfully received connections from all other nodes.
09:00 vimal joined #gluster
09:01 kdhananjay joined #gluster
09:02 tryggvil joined #gluster
09:04 giannello joined #gluster
09:12 Slydder joined #gluster
09:16 Slydder ndevos: do you have any info on how to set fopen-keep-cache when mounting?
09:18 keycto joined #gluster
09:19 hagarth Debolaz: have you looked at nagios-gluster plugins?
09:19 Debolaz hagarth: No, I guess I'll have to. :)
09:20 Debolaz Hrmm... Seems like today's issue is that glusterfsd correctly starts on 49152, but is then started again and fails because it's already started, notifying the rest of the system that it couldn't start, making it look offline.
09:21 ndevos Slydder: I dont know that directly, check if /sbin/mount.glusterfs (its a script) has an option for that
09:23 David_H_Smith joined #gluster
09:24 tryggvil joined #gluster
09:26 Debolaz I get the feeling the cause of many of the problems I'm having with glusterfs is that glusterd isn't able to reliably start and monitor the various daemons... Can the various daemons be run and monitored from a third party monitoring tool?
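
A minimal sketch of the kind of check Debolaz is after, built on nothing more than `gluster volume status` output; the Y/N "Online" column position is an assumption about the 3.5.x output format and the volume name is a placeholder, so verify the parsing against your own output before wiring it into a monitor.

    #!/bin/sh
    # Alert when any brick of the volume reports offline in `gluster volume status`.
    # Assumes the Online (Y/N) column is the second-to-last field on "Brick" lines.
    VOL=myvol
    if gluster volume status "$VOL" | \
        awk '/^Brick/ { if ($(NF-1) == "N") bad=1 } END { exit bad }'
    then
        echo "OK: all bricks of $VOL online"
    else
        echo "CRITICAL: one or more bricks of $VOL offline"
        exit 2
    fi
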
09:26 haomaiwang joined #gluster
09:27 rgustafs joined #gluster
09:30 Slydder ndevos: strange. trying to mount glusterfs fuse with fuse-opt to pass fopen-keep-cache just does not wish to work.
09:31 haomaiwa_ joined #gluster
09:32 haomaiw__ joined #gluster
09:32 haomaiw__ joined #gluster
09:42 haomaiwang joined #gluster
09:54 glusterbot New news from newglusterbugs: [Bug 1152956] duplicate entries of files listed in the mount point after renames <https://bugzilla.redhat.com/show_bug.cgi?id=1152956> || [Bug 1152957] arequal-checksum mismatch between before and after successful heal on a replaced disk <https://bugzilla.redhat.com/show_bug.cgi?id=1152957>
10:00 nshaikh joined #gluster
10:15 ndevos Slydder: I dont know, I never really looked at that - but extending mount.glusterfs is pretty straight forward incase some option is not available yet
10:21 David_H_Smith joined #gluster
10:27 Slydder ndevos: the script seems to offer the option and doesn't throw an error when used. However, once mounted, the options are not shown when checking which options are enabled.
10:28 ndevos Slydder: you mean, the option is not listed in the output of 'ps'?
10:30 nbalachandran joined #gluster
10:36 asku joined #gluster
10:46 haomaiwang joined #gluster
10:47 hagarth joined #gluster
10:58 diegows joined #gluster
11:01 ira joined #gluster
11:02 ira joined #gluster
11:07 calisto joined #gluster
11:12 calisto ping yoo
11:12 LebedevRI joined #gluster
11:15 virusuy joined #gluster
11:16 calisto hi virusuy...You have any experience with GlusterFS?
11:16 virusuy calisto: yeap :-)
11:17 bala joined #gluster
11:18 ppai joined #gluster
11:21 David_H_Smith joined #gluster
11:28 rtalur_ joined #gluster
11:31 tryggvil joined #gluster
11:31 Slydder ndevos: mounting so: /usr/sbin/glusterfs --fopen-keep-cache --direct-io-mode=enable --attribute-timeout=HIGH --entry-timeout=HIGH --negative-timeout=HIGH --volfile-server=10.1.5.3 --volfile-id=/nvwh2 /mnt/nvwh2
11:32 Slydder everything but direct-io-mode and fopen-keep-cache pretty much makes the mount useless.
11:37 sahina joined #gluster
11:43 ndevos Slydder: what is the reason to use direct-io-mode? direct-io usually tries to bypass any caches...
11:43 jbrooks joined #gluster
11:44 rjoseph joined #gluster
11:45 Slydder ndevos: still working on optimizing it. not saying that direct io is going to stay.
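
For reference, the factoid's HIGH placeholders filled in with example values, modelled on the command Slydder pasted above; the numbers (in seconds) are illustrative assumptions rather than recommendations, and direct-io-mode is left out since it bypasses exactly the caching these options try to exploit.

    # Sketch of a fuse mount with long metadata caching; values are examples only.
    glusterfs --volfile-server=10.1.5.3 --volfile-id=/nvwh2 \
        --attribute-timeout=600 --entry-timeout=600 --negative-timeout=600 \
        --fopen-keep-cache /mnt/nvwh2
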
11:49 soumya__ joined #gluster
12:00 jdarcy joined #gluster
12:00 Fen1 joined #gluster
12:01 bennyturns joined #gluster
12:03 freemanbrandon joined #gluster
12:04 meghanam joined #gluster
12:04 meghanam_ joined #gluster
12:06 rjoseph joined #gluster
12:07 anands joined #gluster
12:10 haomaiwang joined #gluster
12:13 edward1 joined #gluster
12:13 humnew joined #gluster
12:21 _dist joined #gluster
12:21 David_H_Smith joined #gluster
12:21 hollaus joined #gluster
12:26 tryggvil joined #gluster
12:30 tryggvil joined #gluster
12:36 mojibake joined #gluster
12:37 jmh joined #gluster
12:39 rafi1 joined #gluster
12:40 Guest39467 joined #gluster
12:43 topshare joined #gluster
12:44 anands joined #gluster
12:44 hollaus left #gluster
12:51 chirino joined #gluster
12:53 smohan joined #gluster
12:55 diegows joined #gluster
13:04 Debolaz joined #gluster
13:04 calum_ joined #gluster
13:04 theron joined #gluster
13:05 virusuy joined #gluster
13:05 virusuy joined #gluster
13:10 michaellotz joined #gluster
13:11 michaellotz hello
13:11 glusterbot michaellotz: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
13:13 michaellotz i want to use xattr with selinux through the mountpoint. i use the mountoption "selinux", but i can't set the security xattr.
13:13 michaellotz setxattr("/mnt/foo/bar/test.txt", "security.selinux", "unconfined_u:object_r:user_home_dir_t:s0", 41, 0) = -1 EOPNOTSUPP (Operation not supported)
13:13 bala1 joined #gluster
13:14 michaellotz mountpoint is /mnt and the bricks are ext4 (rw,noatime,user_xattr,acl) mounted
13:16 michaellotz why is the security extension not supported?
13:18 tryggvil joined #gluster
13:19 michaellotz glusterfs 3.5.2 on rhel6.5
13:20 freemanbrandon joined #gluster
13:20 _dist michaellotz: we run 3.5.2 to export a file server over smb, however I never fought with selinux
13:21 David_H_Smith joined #gluster
13:23 michaellotz _dist: smb is not a good option for me. i want to use it in a unix environment.
13:24 _dist michaellotz: I understand, sorry I can't be more help, our file host is ubuntu so no selinux
13:25 topshare joined #gluster
13:25 michaellotz _dist: selinux is not the problem. but setting the security xattr on a file on a mounted gluster volume is the problem
13:26 ndevos michaellotz: so you want to say that bug 1127457 is not the issue you are running into?
13:26 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1127457 high, unspecified, ---, gluster-bugs, NEW , Setting security.* xattrs fails
13:27 michaellotz ndevos: thx, i'll have a look at the bug report...
13:28 ndevos michaellotz: when you mount the volume with the selinux option, 'ps' should show a 'glusterfs' process with --selinux
13:28 julim joined #gluster
13:29 michaellotz right. but it doesn't work for me
13:29 michaellotz "/usr/sbin/glusterfs --acl --selinux "
13:30 ndevos and these xattrs can get set on the filesystems that you use as bricks?
13:31 * ndevos would not know why that would fail, it should be pretty standard
13:32 michaellotz i try it
13:34 michaellotz on the brick it works fine: chcon unconfined_u:object_r:user_home_dir_t:s0 test.txt
13:34 michaellotz ls -Z test.txt
13:34 michaellotz -rw-r--r--. root root unconfined_u:object_r:user_home_dir_t:s0 test.tx
13:34 glusterbot michaellotz: -rw-r--r's karma is now -11
13:35 ndevos I really dont know why it does not work then...
13:35 calisto joined #gluster
13:36 ndevos michaellotz: maybe you can add more details about your configuration in that bug?
13:37 michaellotz ndevos: what kind of details can i give?
13:39 michaellotz 2 gluster nodes with a distributed-replicated store; 1 node has 2 bricks. brick 1 of node 1 is mirrored with brick 1 of node 2, and so on
13:39 ndevos michaellotz: output of the log, the full 'glusterfs' command from 'ps' and the line from /etc/fstab - maybe capture a tcpdump while trying to set the xattr so that we can see if the operation is done over the network or if it got denied by fuse
13:39 michaellotz the os is the same on all nodes, rhel 6.5
13:39 topshare joined #gluster
13:39 ndevos michaellotz: maybe there is a message in the brick logs too? if so, add that
13:41 msmith joined #gluster
13:41 tdasilva joined #gluster
13:42 msmith_ joined #gluster
13:44 michaellotz ps: /usr/sbin/glusterfs --acl --selinux --volfile-server=node1.gluster.local --volfile-id=/gv0 /mnt
13:45 michaellotz ndevos: fstab: node1.gluster.local:/gv0 /mnt glusterfs defaults,acl,selinux 0 0
13:46 ndevos michaellotz: yeah, that looks ok to me
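
A sketch of the capture ndevos asked for: reproduce the failing xattr set by hand while tcpdump is running, so the bug report shows whether the SETXATTR goes out on the wire or is refused locally by fuse. The port range assumes the default 3.4+ brick ports (49152 and up) plus the 24007 management port; adjust to the ports `gluster volume status` reports.

    # Capture traffic while reproducing the failing security xattr set.
    tcpdump -i any -s 0 -w /tmp/gluster-xattr.pcap \
        'port 24007 or portrange 49152-49200' &
    TCPDUMP_PID=$!
    setfattr -n security.selinux \
        -v "unconfined_u:object_r:user_home_dir_t:s0" /mnt/foo/bar/test.txt
    kill "$TCPDUMP_PID"
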
13:47 calum_ joined #gluster
13:48 bala joined #gluster
13:54 theron joined #gluster
13:57 sahina joined #gluster
14:00 lpabon joined #gluster
14:00 julim joined #gluster
14:01 lpabon joined #gluster
14:08 jobewan joined #gluster
14:12 Slashman joined #gluster
14:18 ninthBit joined #gluster
14:19 wushudoin joined #gluster
14:19 freemanbrandon joined #gluster
14:22 David_H_Smith joined #gluster
14:23 skippy Anyone using kernel NFSv4 with Gluster?  Is it a good idea to use Gluster-FUSE on a Gluster server and then export that via NFSv4?
14:24 ndevos skippy: no, you probably do not want to go that route, nfs-ganesha provides support for NFSv4 and gluster
14:26 topshare joined #gluster
14:28 hagarth joined #gluster
14:29 rwheeler joined #gluster
14:32 jbrooks joined #gluster
14:33 _dist joined #gluster
14:41 atrius joined #gluster
14:43 kkeithley More than _probably_.  If you export brick volumes with knfs, that's equivalent to writing on the bricks. Which is not just a bad idea, it's expressly not allowed.
14:44 ndevos well, you *could* export a fuse mountpoint
14:44 ndevos but, that has issues too... there is a README.nfs in the fuse-utils that describes it
14:47 _Bryan_ joined #gluster
14:57 _dist joined #gluster
14:58 calisto joined #gluster
15:06 rjoseph joined #gluster
15:07 skippy I understand how Gluster-FUSE works.  I'm fuzzy on Ganesha.  Do I need to do pNFS to enjoy the "read and write from all bricks as needed" that I get from Gluster-FUSE?
15:17 calisto1 joined #gluster
15:21 David_H_Smith joined #gluster
15:30 msmith__ joined #gluster
15:45 calisto1 joined #gluster
16:03 keycto joined #gluster
16:22 David_H_Smith joined #gluster
16:25 Pupeno joined #gluster
16:31 haomaiwa_ joined #gluster
16:39 lmickh joined #gluster
16:49 msmith_ joined #gluster
16:49 JoeJulian @mount volume
16:49 JoeJulian @mount server
16:49 glusterbot JoeJulian: The server specified is only used to retrieve the client volume definition. Once connected, the client connects to all the servers in the volume. See also @rrdns
16:49 sputnik13 joined #gluster
16:50 JoeJulian @learn mount server as One caveat is that the clients never learn of any other management peers. If the client cannot communicate with the mount server, that client will not learn of any volume changes.
16:50 glusterbot JoeJulian: The operation succeeded.
16:50 * JoeJulian grumbles something sounding like what the duck...
16:51 msmith__ joined #gluster
16:51 skippy @rrdns
16:51 glusterbot skippy: You can use rrdns to allow failover for mounting your volume. See Joe's tutorial: http://goo.gl/ktI6p
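
Besides rrdns, mount.glusterfs in this era also accepts a backup volfile server option so the client can fetch the volume definition from a second host when the first is unreachable; the exact option name has varied between releases, so treat the sketch below as an assumption to check against `man mount.glusterfs` on your version.

    # Sketch: fall back to a second host for the volfile fetch at mount time.
    mount -t glusterfs -o backupvolfile-server=server2.example.com \
        server1.example.com:/myvol /mnt/myvol
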
16:53 n-st joined #gluster
16:58 haomaiwa_ joined #gluster
17:00 msmith_ joined #gluster
17:04 davemc joined #gluster
17:06 ninthBit ubuntu 12.04 gluster 3.4.5 from semiosis repo:  i found on my distributed-mirror that the two sub-mirror volumes were greatly imbalanced in usage.  the first mirror had 70 Gb free and the second mirror has over 200 Gb free.  there are many files and rarely are any over 20 Mb. I understand the distribution can favor one volume over another but i would not expect this amount of difference.  I am trying to find out if the first mirror happens
17:06 ninthBit will executing a rebalance resolve this?
17:11 JoeJulian It may, but I'm not making any promises.
17:11 JoeJulian It will not, of course, make it any worse.
17:12 JoeJulian If your use case creates temp files and renames them, that might cause that.
17:12 JoeJulian If your temp files hash out the same almost every time...
17:13 haomai___ joined #gluster
17:14 ninthBit joejulian: yes, majority of the files are created under a temp name then renamed.  later the files are moved by another process
17:17 ctria joined #gluster
17:18 chirino joined #gluster
17:19 XpineX_ joined #gluster
17:21 David_H_Smith joined #gluster
17:26 JoeJulian ninthBit: brick placement is calculated on a hash of the filename. When the filename changes, the file isn't moved but instead a dht pointer is added to the brick where the new filename would hash to, pointing to the brick that the file is actually on. If you're going to create tempfiles and rename, ensure the tempfile names are at least completely random. That /should/ keep distribution fairly even. The move, though, should fix that, so
17:26 JoeJulian short of a detailed analysis I'm not sure what's happening.
17:27 JoeJulian @lucky dht misses are expensive
17:27 glusterbot JoeJulian: http://joejulian.name/blog/dht-misses-are-expensive/
17:27 JoeJulian ninthBit: Read the first half of that blog article for a deeper understanding of how gluster uses dht.
17:28 David_H_Smith joined #gluster
17:28 morsik better not to use tempfiles in gluster…
17:28 morsik we had problems with those. it's a pain in the ass for performance…
17:28 sputnik13 joined #gluster
17:29 David_H_Smith joined #gluster
17:29 ninthBit well, not "temp" files like OS files.  a temporary file name given to a file being uploaded via FTP and once complete it is renamed.
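
For reference, the rebalance JoeJulian mentions, as a sketch; "myvol" is a placeholder. A plain rebalance migrates data, while fix-layout only recalculates directory layouts without moving files.

    gluster volume rebalance myvol start
    gluster volume rebalance myvol status
    # Cheaper option when only the layout needs refreshing:
    gluster volume rebalance myvol fix-layout start
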
17:47 skippy ugh. Ganesha RPMs use libjemalloc.  That requires PyQt. :(
17:47 skippy Not being a developer, is libjemalloc awesome?
17:49 17SAAOLYX joined #gluster
17:52 zerick joined #gluster
18:03 nshaikh joined #gluster
18:08 XpineX__ joined #gluster
18:13 SpeeR joined #gluster
18:14 elico joined #gluster
18:21 kkeithley jemalloc has no dependency on PyQt. What are you seeing that makes you think it does?
18:22 gbrand_ joined #gluster
18:24 kkeithley And jemalloc isn't required per se. The ganesha project decided to use it. The fedora and EPEL packaging followed suit and used it. You can download the source and build it yourself without jemalloc
18:27 elico left #gluster
18:29 JoeJulian kkeithley, ndevos: http://ur1.ca/ihav5 Does the leading {0,1,2}- refer to a graph epoch?
18:29 glusterbot Title: #144291 Fedora Project Pastebin (at ur1.ca)
18:29 longshot902 joined #gluster
18:33 DV joined #gluster
18:33 JoeJulian file a bug
18:33 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
18:34 julim joined #gluster
18:42 JoeJulian dammit.. that's fixed in 3.5.0 but not backported. :( bug 948178
18:42 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=948178 unspecified, unspecified, ---, rabhat, CLOSED CURRENTRELEASE, do proper cleanup of the graph
19:07 kkeithley JoeJulian: don't know
19:14 kkeithley looking at _gf_log() though, it's this->graph->id.
19:18 kkeithley looks like it might get incremented every time glusterfsd reads its volfile
19:20 andreask joined #gluster
19:23 JoeJulian I guess I'm not doing enough to warrant Red Hat love anymore. :( No funding for LISA.
19:24 JoeJulian Which is dumb. LISA is where Red Hat should be spending way more than they do at developer conferences.
19:44 tryggvil joined #gluster
19:47 calisto1 joined #gluster
19:48 rolfb joined #gluster
19:51 calisto joined #gluster
19:58 kr0w joined #gluster
20:00 kr0w Hey everyone. I was wondering if I could get some help with improving performance on my glusterfs volume
20:00 quique left #gluster
20:01 kr0w I am getting 58 Mb/s write speeds and that seems slow compared to the hardware available
20:01 _dist kr0w: tell us about your volume. underlying FS, volume layout, # of bricks, the way the bricks are built etc
20:01 kr0w correction 58 MB/s
20:02 kr0w The underlying FS is ext3, I am using the volume to store proxmox containers. I have 3 bricks
20:02 kr0w Is there a better FS to be using?
20:02 _dist is the ovlume a 3 way replica?
20:02 _dist volume*
20:03 kr0w Yes
20:03 _dist over 1gbe?
20:03 kr0w We have 5 15k rpm drives striped as storage that glusterfs is using
20:03 kr0w Yes, 1gbe
20:04 _dist 58 isn't bad then, your writing brick has to send the data out twice
20:04 kr0w My boss is looking to move to a SAN, but I am thinking gluster should be able to handle our needs.
20:04 _dist 1024/8/2 = 64 megabytes/sec max
20:05 kr0w Yeah, it does max out the ethernet port. We are talking about getting fiber channels and a fiber switch.
20:05 _dist we use 10gbe, it does have a bit more latency but it was much cheaper
20:07 _dist kr0w: our write speeds on our 10gbe san inside of a VM are around 280MB/s on average, pushing mid 300s sometimes. However, that isn't normal throughput, just testing throughput
20:07 kr0w Hmm thats a good option. And it would be easier to switch out. Not sure if my nics are 10gbe though so I would have to get a new card anyway.
20:08 kr0w _dist: Yeah, I understand. my 58 was just a test with dd
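
A sketch of the sort of dd write test being described, with conv=fsync so the final flush is included and the page cache doesn't inflate the number; the mount point, file name and size are placeholders.

    # Rough sequential-write throughput test on the fuse mount.
    dd if=/dev/zero of=/mnt/glustervol/ddtest bs=1M count=1024 conv=fsync
    rm -f /mnt/glustervol/ddtest
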
20:08 _dist kr0w: we bought dual port pcie intels and a cheap netgear prosafe 10gbe "sort of" managed switch with 8 10gbe ports
20:08 _dist cheap == $900, the nics totalled more honestly (about $400 each for three machines)
20:09 kr0w _dist: I also noticed that the cpu on one of my nodes is maxed. (It is currently trying to catch up a mysql slave)
20:09 _dist kr0w: gluster is insane on healing, if it's healing that's normal
20:10 _dist kr0w: once all replicas are "healed" the cpu will drop back down. But, if your mysql just does that I don't know what to say, haven't seen that myself
20:10 kr0w Ah, can I speed up healing by changing from reset to diff?
20:11 _dist kr0w: maybe, it depends on the kind of activity; for large files sometimes a full heal is quicker, you'll have to experiment
20:11 kr0w Oh no, it isn't mysql. It is taking minimal cpu usage; the glusterfsd process is taking it all. We have two 6-core cpus with hyperthreading and it is maxing it out.
20:12 _dist kr0w: when healing gluster will max 18 of 24 cores on my servers
20:13 kr0w Does anyone here have experience with how to optimize the healing for a mysql environment.
20:14 kr0w That could be problematic since mysql uses large files and a minor change can cause healing often.
20:14 JoeJulian mysql... myisam sucks for I/O. Use innodb. Create 1 innodb file per brick and name them such that they're on different bricks (create files with some index and increment the index until the file is created on the target brick. That's the filename to use in my.cnf.)
20:14 JoeJulian This will allow DHT to handle the sharding and spread your IO.
20:14 kr0w We are using innodb
20:15 JoeJulian Of course that also means do not use file-per-table
20:15 kr0w We just found that glusterfsd is only using one cpu. Maybe we just need to set it to have more threads?
20:15 semiosis JoeJulian: didn't you write that up?
20:15 semiosis a blog post or something?
20:15 JoeJulian I sort-of did, but I never finished it...
20:15 semiosis ahh
20:15 semiosis you should, it's a great idea
20:15 JoeJulian ... the story of my life...
20:16 kr0w JoeJulian: haha. We all get too busy. But if I can make glusterfsd use more than 1 cpu for its processes that could solve a lot of my initial issues until we can get 10gbe or fiber in.
20:17 JoeJulian I think I saw a feature proposal about that. Not sure if it made it in to 3.6.
20:17 kr0w I put performance.io-thread-count to 24 but it appears to not change the glusterfsd process.
20:17 kr0w OH, so glusterfsd is single threaded at the moment?
20:17 JoeJulian There's reasons why it's a difficult nut to crack, apparently.
20:18 kr0w I am sure, its just like distributing jobs. It isn't simple to make sure that they are not duplicating work.
20:18 JoeJulian This thread may be educational: http://comments.gmane.org/gmane.comp.file-systems.gluster.devel/8246
20:19 glusterbot Title: Gluster filesystem developers () (at comments.gmane.org)
20:19 kr0w OK, well that helps a lot. I will need to find out more about the mysql config. Maybe that is why the slaving is taking so long to catch up.
20:20 JoeJulian btw... there should be no "catch up". The client replicates to each replica simultaneously.
20:20 JoeJulian performance.write-behind: off
20:20 JoeJulian That's the only thing I changed on my mysql volume.
20:21 kr0w sorry, I was meaning mysql slave to catch up to mysql master.
20:21 kr0w Ok, let me try that.
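
The single tuning JoeJulian describes, expressed as a volume-set sketch; "myvol" is a placeholder, and whether it helps will depend on the workload.

    gluster volume set myvol performance.write-behind off
    gluster volume info myvol   # the change appears under "Options Reconfigured"
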
20:22 plarsen joined #gluster
20:24 kr0w JoeJulian: Mind if I PM you about the mysql config? Looks like you are the one to talk to about it since you were going to write it, but I don't want to annoy everyone else.
20:29 JoeJulian Nobody will be annoyed. It's quite topical.
20:29 kr0w Ok, I am just wanting to understand a little more about setting up 1 file per brick.
20:29 failshell joined #gluster
20:30 kr0w Currently we have it set up to use one innodb file per table. But that will all have to be replicated to every brick.
20:31 JoeJulian Yeah, that usually sucks because it's usually one table that's busier than others which causes all your IO to pass through that one bottleneck.
20:31 JoeJulian Here's my 12 brick (4x3) innodb config: innodb_data_file_path = ibdata1:640M;ibdata2a:640M;ibdata3:640M;ibdata4b:640M;ibdata5:10M:autoextend
20:33 JoeJulian Those filenames hashed out to each reside on one dht subvolues
20:33 JoeJulian s/volues/volume/
20:33 glusterbot What JoeJulian meant to say was: Those filenames hashed out to each reside on one dht subvolume
20:33 JoeJulian @restart
20:37 firemanxbr joined #gluster
20:42 kr0w JoeJulian: Ok.
20:43 glusterbot joined #gluster
20:45 kr0w JoeJulian: That makes sense. How do you make it so that there's 1 innodb file per brick? Are you using a distributed volume instead of replicated then?
20:45 JoeJulian distributed-replicated.
20:46 JoeJulian 4 distribute subvolumes, replica 3
20:46 kr0w Ah, I think I am just doing replication, no distributed.
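
A sketch of the "increment the index until the file lands on the right brick" trick on a distributed-replicated volume: create candidate ibdata names on the fuse mount and ask gluster which brick each one hashed to via the pathinfo xattr, then keep one name per distribute subvolume for innodb_data_file_path. The candidate names and the datadir path are placeholders.

    # Find filenames that DHT places on different distribute subvolumes.
    cd /mnt/glustervol/mysql-datadir
    for i in 1 1a 2 2a 3 3a 4 4a; do
        touch "ibdata$i"
        getfattr -n trusted.glusterfs.pathinfo -e text "ibdata$i"
    done
    # Keep one name per subvolume, remove the rest, and list the keepers in
    # innodb_data_file_path as in JoeJulian's example above.
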
20:56 _zerick_ joined #gluster
20:59 deeville joined #gluster
21:00 deeville Hi folks, is it possible to NFS-mount a subdirectory of a gluster volume?
21:12 Gugge joined #gluster
21:13 glusterbot New news from newglusterbugs: [Bug 1155328] GlusterFS allows insecure SSL modes <https://bugzilla.redhat.com/show_bug.cgi?id=1155328> || [Bug 1155421] Typo in replace-brick...pause example <https://bugzilla.redhat.com/show_bug.cgi?id=1155421>
21:17 Pupeno_ joined #gluster
21:22 badone joined #gluster
21:30 ntillman joined #gluster
21:33 Pupeno joined #gluster
21:35 jbrooks joined #gluster
21:42 russoisraeli joined #gluster
21:43 russoisraeli hello folks. Quick question. How do I know if my replica recovered after failure/disconnect? volume status doesn't really tell if they're synched or not...
21:44 JoeJulian russoisraeli: which version?
21:46 russoisraeli JoeJulian - 3.5.2
21:48 JoeJulian russoisraeli: Ok, then "gluster volume heal $vol info" should tell you what's left to heal.
21:50 russoisraeli JoeJulian - thanks! is heal automatic, or I need to launch it after failure?
21:50 JoeJulian It's automatic
21:50 JoeJulian The only time you would need to force a heal is if a brick was changed without glusterfs knowing about it, like if you manually deleted stuff...
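
As a sketch, the commands that go with this exchange; "myvol" is a placeholder, and the full heal is only for the manual-change case JoeJulian describes.

    gluster volume heal myvol info               # what still needs healing
    gluster volume heal myvol info split-brain   # worth checking after a disconnect
    gluster volume heal myvol full               # force, only after out-of-band changes
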
21:51 ntillman If i want to clear out the quota configuration.  Is it safe to remove /var/lib/glusterd/vols/<volume>/quota.conf? (glusterfs 3.5.2)
21:52 JoeJulian I haven't looked at how that code works. I doubt it would pick up the change though.
22:00 ntillman Well the problem I'm having is when I set a limit-usage on a volume it does not appear in 'quota <vol> list' command. Does not seem to be applying.
22:01 ntillman The command also reports successful.
22:08 theron_ joined #gluster
22:14 elico joined #gluster
22:18 Pupeno_ joined #gluster
22:24 elico joined #gluster
22:43 ntillman JoeJulian: removing that file seemed to work. I disabled the quota, stopped the volume, killed off gluster*, removed the quota.conf, and started everything.
22:44 ntillman left #gluster
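
A reconstruction of the sequence ntillman describes, as a sketch; the quota.conf path is the one from the question, "myvol" is a placeholder, and the file has to be removed on every node running glusterd before restarting.

    gluster volume quota myvol disable
    gluster volume stop myvol
    service glusterd stop                # plus any leftover gluster* processes
    rm -f /var/lib/glusterd/vols/myvol/quota.conf
    service glusterd start
    gluster volume start myvol
    gluster volume quota myvol enable
    gluster volume quota myvol limit-usage / 100GB
    gluster volume quota myvol list
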
22:47 sputnik13 joined #gluster
22:48 sputnik13 joined #gluster
22:58 sputnik13 joined #gluster
23:05 falcon006 joined #gluster
23:06 Pupeno joined #gluster
23:26 Pupeno_ joined #gluster
23:27 freemanbrandon joined #gluster
23:28 falcon006 left #gluster
23:37 elico joined #gluster
