
IRC log for #gluster, 2015-12-09


All times shown according to UTC.

Time Nick Message
00:04 TrincaTwik damn
00:04 TrincaTwik ]# gluster --version
00:04 TrincaTwik glusterfs 3.7.6 built on Nov  9 2015 15:20:26
00:04 TrincaTwik dame issue
00:04 TrincaTwik same issue
00:04 TrincaTwik hum
00:04 delhage joined #gluster
00:05 zhangjn joined #gluster
00:06 zhangjn joined #gluster
00:07 JoeJulian Let's see what yours is doing differently. Edit /usr/sbin/mount.glusterfs and add "echo $@ > /tmp/foo" and try again. Then cat /tmp/foo and see what options are being passed.
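A minimal sketch of the debug step JoeJulian describes, assuming /usr/sbin/mount.glusterfs is a shell script and /tmp/foo is just a scratch path:

    # Added temporarily near the top of /usr/sbin/mount.glusterfs so the
    # options mount(8) hands to the helper get captured:
    echo "$@" > /tmp/foo

    # Reproduce the mount, then inspect what was passed:
    mount -av
    cat /tmp/foo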
00:09 TrincaTwik wait...
00:09 TrincaTwik damn
00:09 TrincaTwik to you..
00:09 TrincaTwik mount -av
00:09 TrincaTwik it's valid?
00:10 JoeJulian Yes it is, and the fact that mount.glusterfs doesn't interpret it is a bug. Please file a bug report.
00:10 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
00:11 JoeJulian Good job finding a new bug.
00:11 JoeJulian :)
00:11 TrincaTwik dammmmmm
00:12 TrincaTwik thanks
00:12 TrincaTwik my first bug
00:12 TrincaTwik ahahah
00:12 JoeJulian Congratulations! You're now part of the open source community. :D
00:14 TrincaTwik ahahaha
00:17 btpier joined #gluster
00:21 frostyfrog joined #gluster
00:21 frostyfrog joined #gluster
00:22 javi404 joined #gluster
00:24 TrincaTwik Joe
00:24 TrincaTwik already appear
00:24 TrincaTwik to be reported
00:24 corretico joined #gluster
00:25 TrincaTwik gonna close my case https://bugzilla.redhat.com/show_bug.cgi?id=1182145
00:25 delhage joined #gluster
00:25 glusterbot Bug 1182145: unspecified, unspecified, ---, bugs, NEW , mount.glusterfs doesn't support mount --verbose
00:25 TrincaTwik yes
00:25 TrincaTwik stupid me
00:26 JoeJulian Meh, don't beat yourself up over it. Duplicate bugs happen all the time.
00:26 klaas joined #gluster
00:28 klaas joined #gluster
00:29 TrincaTwik yeah
00:30 steveeJ joined #gluster
00:30 mswart joined #gluster
00:30 TrincaTwik it take me... 4h of life...
00:30 TrincaTwik bah
00:35 TrincaTwik can I mount with read only option?
00:36 JoeJulian No, that's a volume-wide option.
00:36 delhage joined #gluster
00:37 TrincaTwik Im trying something like... replicated volume, 2 bricks (1 per node), and when 1 go off the mount stays read only
00:38 JoeJulian Sounds like quorum. Are you saying that quorum is never being restored?
00:38 TrincaTwik no, thats is my main goal... everytime one of those two nodes go off... the share stay in read only mode...
00:39 TrincaTwik seems possible?
00:39 JoeJulian Well, you could use quorum but once the missing server comes back, quorum is restored and the volume is no longer read-only.
00:41 TrincaTwik yeah, and you already told me that it's not possible to put one mount in read only... so I can imagine that doing the reverse it's also not possible: when it's read only with that QUORUM stuff I can't put it in write mode
00:41 JoeJulian You could, but it would be a manual process.
00:43 TrincaTwik it's something easy? i'm reading right now QUORUM stuff and it appears to be something related with 3 nodes up
00:44 JoeJulian If quorum isn't met and the volume is read-only but now you want to allow writes, simply disable quorum.
00:44 JoeJulian If the volume is in-quorum and you want to disable writes, set the volume read-only.
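A rough sketch of those two manual toggles with the gluster CLI, assuming a volume named TEST (as used below); exact option names can vary between releases:

    # Quorum lost and the volume is read-only, but writes must be allowed again:
    # drop the quorum requirement (a manual step, as noted above).
    gluster volume set TEST cluster.quorum-type none

    # Volume is in-quorum but writes should be blocked: mark it read-only.
    gluster volume set TEST features.read-only on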
00:45 TrincaTwik no no, my objective it's the first one
00:48 XpineX joined #gluster
00:52 TrincaTwik bah
00:53 TrincaTwik it proceeds with an umount of the folder
00:53 TrincaTwik For what I'm reading : gluster volume set all cluster.server-quorum-ratio 51%
00:54 TrincaTwik gluster volume set TEST cluster.server-quorum-type server
00:54 TrincaTwik when node #2 goes off
00:54 TrincaTwik node #1 simply umount
00:54 TrincaTwik the folder
00:54 TrincaTwik I want something like read only
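The behaviour described (the mount going away rather than turning read-only) matches server-side quorum, which stops brick processes when glusterd loses quorum, leaving a 2-node client with nothing to talk to. A hedged alternative that keeps the mount up but rejects writes is client-side (AFR) quorum, for example:

    # Undo the server-side quorum setting and use client-side quorum instead;
    # when quorum is lost, writes then fail with a read-only style error while
    # the mount itself stays up. Volume name TEST as used above.
    gluster volume reset TEST cluster.server-quorum-type
    gluster volume set TEST cluster.quorum-type auto

Note that with only two replicas, quorum-type auto breaks ties in favour of the first brick, so losing that brick still makes the volume read-only while losing the second one does not.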
00:55 zhangjn joined #gluster
01:03 delhage joined #gluster
01:03 EinstCrazy joined #gluster
01:06 zhangjn joined #gluster
01:09 atrius joined #gluster
01:12 TrincaTwik yeah!
01:12 TrincaTwik thanks for the tips!
01:12 TrincaTwik configuration done!
01:13 TrincaTwik JoeJulian
01:13 TrincaTwik I love you!
01:13 TrincaTwik now it's time to sleep 2am here and tomorrow it's work day
01:13 TrincaTwik thanks!
01:13 TrincaTwik see you guys tomorrow
01:36 harish joined #gluster
01:36 zhangjn_ joined #gluster
01:40 Lee1092 joined #gluster
02:11 nangthang joined #gluster
03:01 cjellick joined #gluster
03:07 bharata-rao joined #gluster
03:29 akamensk_ joined #gluster
03:30 akamensk_ left #gluster
03:34 nehar joined #gluster
03:38 sakshi joined #gluster
03:39 vmallika joined #gluster
03:42 btpier joined #gluster
03:45 cholcombe joined #gluster
03:50 klaxa joined #gluster
04:02 kanagaraj joined #gluster
04:03 uchojaka joined #gluster
04:08 uchojaka joined #gluster
04:09 uchojaka left #gluster
04:16 dgandhi joined #gluster
04:17 dgandhi joined #gluster
04:18 RameshN joined #gluster
04:19 nishanth joined #gluster
04:19 ppai joined #gluster
04:22 dblack joined #gluster
04:26 jiffin joined #gluster
04:29 zhangjn joined #gluster
04:33 ramteid joined #gluster
04:41 atinm joined #gluster
04:44 shubhendu joined #gluster
04:44 kotreshhr joined #gluster
04:46 Vaelatern joined #gluster
04:51 RameshN joined #gluster
04:57 ashiq joined #gluster
04:58 aravindavk joined #gluster
04:59 Manikandan joined #gluster
05:07 poornimag joined #gluster
05:09 ramteid joined #gluster
05:13 kevein joined #gluster
05:14 ndarshan joined #gluster
05:15 zhangjn joined #gluster
05:26 Apeksha joined #gluster
05:28 zhangjn joined #gluster
05:33 Humble joined #gluster
05:37 nbalacha joined #gluster
05:39 ramky joined #gluster
05:41 skoduri joined #gluster
05:41 rafi joined #gluster
05:41 hgowtham joined #gluster
05:41 pppp joined #gluster
05:47 Apeksha_ joined #gluster
05:49 hchiramm joined #gluster
05:49 hchiramm_ joined #gluster
05:52 kdhananjay joined #gluster
06:00 Norky joined #gluster
06:01 hgowtham joined #gluster
06:02 itisravi joined #gluster
06:07 vmallika joined #gluster
06:14 dusmant joined #gluster
06:17 rjoseph joined #gluster
06:18 spalai joined #gluster
06:30 overclk joined #gluster
06:41 anil joined #gluster
06:43 karnan joined #gluster
07:06 gildub joined #gluster
07:06 atalur joined #gluster
07:13 kayn joined #gluster
07:13 aravindavk joined #gluster
07:20 kevein joined #gluster
07:21 jtux joined #gluster
07:23 aravindavk joined #gluster
07:45 fsimonce joined #gluster
08:00 zhangjn joined #gluster
08:07 wnlx joined #gluster
08:10 [Enrico] joined #gluster
08:11 ivan_rossi joined #gluster
08:12 davidself joined #gluster
08:13 davidself left #gluster
08:21 mhulsman joined #gluster
08:29 mobaer joined #gluster
08:29 klaxa joined #gluster
08:34 jwd joined #gluster
08:39 skoduri joined #gluster
08:40 ashiq joined #gluster
08:45 ctria joined #gluster
08:50 dusmant joined #gluster
08:50 glafouille joined #gluster
08:51 JesperA joined #gluster
08:57 joj[] joined #gluster
09:00 ocramuias joined #gluster
09:00 ocramuias Hello
09:00 glusterbot ocramuias: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
09:01 ocramuias joined #gluster
09:01 ppai joined #gluster
09:02 ocramuias When removing a brick, do I need to start a rebalance next? I need to remove 10 bricks; do I need to remove and rebalance per brick, or can I remove all the bricks and rebalance afterwards?
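For reference, a rough sketch of the decommissioning flow with hypothetical volume and brick names; remove-brick ... start migrates data off all the listed bricks itself, so they can be given in one command rather than handled one by one:

    # Start removal; data on the listed bricks is migrated to the remaining ones.
    gluster volume remove-brick myvol server1:/export/brick1 server2:/export/brick2 start

    # Watch the migration, then make the removal final once it reports completed.
    gluster volume remove-brick myvol server1:/export/brick1 server2:/export/brick2 status
    gluster volume remove-brick myvol server1:/export/brick1 server2:/export/brick2 commit

    # A rebalance afterwards is optional but evens the layout out across what is left.
    gluster volume rebalance myvol start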
09:02 Norky joined #gluster
09:06 ocramuias joined #gluster
09:06 Norky joined #gluster
09:06 arcolife joined #gluster
09:06 hos7ein joined #gluster
09:14 ocramuias joined #gluster
09:21 dusmant joined #gluster
09:21 ocramuias1 joined #gluster
09:22 Saravana_ joined #gluster
09:22 Saravanakmr joined #gluster
09:26 armyriad joined #gluster
09:26 Norky joined #gluster
09:31 nangthang joined #gluster
09:32 [Enrico] joined #gluster
09:53 [Enrico] joined #gluster
09:57 zhangjn joined #gluster
10:08 ctria joined #gluster
10:08 Slashman joined #gluster
10:10 hos7ein joined #gluster
10:13 ctria joined #gluster
10:17 JVieira joined #gluster
10:18 JVieira Hi, can anyone tell me: if i set up iscsi targets on both gluster nodes and set up initiators with multiple connections, will this provide proper HA?
10:20 hos7ein joined #gluster
10:26 JVieira guys? anynone?
10:30 hos7ein joined #gluster
10:45 hos7ein joined #gluster
10:46 glafouille joined #gluster
10:46 dusmant joined #gluster
10:50 JVieira anybody around?
10:50 [Enrico] joined #gluster
11:05 anoopcs JVieira, I guess you have gone through https://gluster.readthedocs.org/en/latest/Administrator%20Guide/GlusterFS%20iSCSI/ as an initial reference.
11:05 glusterbot Title: GlusterFS iSCSI - Gluster Docs (at gluster.readthedocs.org)
11:09 ramky joined #gluster
11:10 JVieira yes anoopcs
11:11 JVieira in fact i have it working now, my only question is: if i set up connections from my MS server to the glusterfs nodes, will the data be stored properly and not cause data corruption?
11:12 badone joined #gluster
11:12 JVieira as im not sure how glusterfs "replication" will work, since the iscsi target will be doing IO on the same ".bin" file...
11:13 Apeksha joined #gluster
11:16 overclk joined #gluster
11:24 cyberbootje joined #gluster
11:35 SunnyB joined #gluster
11:42 kkeithley1 joined #gluster
11:49 ppai joined #gluster
11:49 zhangjn joined #gluster
11:51 atinm REMINDER: Gluster community weekly meeting to start in ~10 minutes
11:51 EinstCrazy joined #gluster
11:55 mlncn joined #gluster
12:10 [Enrico] joined #gluster
12:17 ocramuias I need to remove 10 bricks; do I need to rebalance after each brick removed, or can I rebalance afterwards?
12:17 hos7ein joined #gluster
12:18 fsimonce joined #gluster
12:33 Saravana_ joined #gluster
12:39 arcolife joined #gluster
12:43 kanagaraj joined #gluster
12:45 jmarley joined #gluster
12:54 ira joined #gluster
12:54 SOLDIERz joined #gluster
12:59 kanagaraj joined #gluster
13:01 julim joined #gluster
13:03 ppai joined #gluster
13:06 Lee1092 joined #gluster
13:08 jmarley joined #gluster
13:10 nbalacha joined #gluster
13:18 spalai left #gluster
13:18 d0nn1e joined #gluster
13:22 legreffier joined #gluster
13:22 legreffier hello all :)
13:23 legreffier I can't seem to find a way to display quota information in a non-human-readable format.
13:24 legreffier and since it's for parsing those values in python, i thought there might be a py library to do it directly...
13:24 legreffier any input on these ? found nothing in manuals
13:31 jwaibel joined #gluster
13:32 btpier joined #gluster
13:42 B21956 joined #gluster
13:46 unclemarc joined #gluster
13:50 shaunm joined #gluster
13:50 illogik joined #gluster
13:57 plarsen joined #gluster
14:00 kovshenin joined #gluster
14:08 dusmant joined #gluster
14:10 haomaiwa_ joined #gluster
14:11 nangthang joined #gluster
14:17 aravindavk joined #gluster
14:32 bennyturns joined #gluster
14:36 dgandhi joined #gluster
14:45 lpabon joined #gluster
14:47 kotreshhr left #gluster
14:50 hamiller joined #gluster
14:59 shyam joined #gluster
14:59 klaxa|work joined #gluster
15:07 cjellick joined #gluster
15:08 cholcombe joined #gluster
15:30 rwheeler joined #gluster
15:31 nerdcore joined #gluster
15:32 nerdcore I'm using Gluster 3.7 and reviewing this documentation on small file performance enhancements but unsure what to do to actually implement any of these ideas. Thoughts? http://www.gluster.org/community/documentation/index.php/Features/Feature_Smallfile_Perf
15:37 anti[Enrico] joined #gluster
15:41 glafouille joined #gluster
15:52 ayma joined #gluster
15:53 shubhendu joined #gluster
15:54 nerdcore I'm attempting to use GlusterFS 3.7 to host a PHP web site with many small files tracked by git (also many small files). Performance is poor and I'm looking for direction on addressing this issue
15:56 shyam nerdcore: profile output of your workload would help in determining the nature of the workload, and hence the optimizations that are required
15:58 haomaiwa_ joined #gluster
15:59 haomaiwang joined #gluster
16:00 kkeithley_ PHP is notorious for stat()ing every PHP include. stat calls are lookups in Gluster.  A lookup is a relatively expensive operation.  Google around, there are some good tips for how to tune gluster for  hosting PHP-based web sites
16:00 kkeithley_ @php
16:00 nerdcore shyam: sounds good. How do I start doing that? :)
16:00 haomaiwa_ joined #gluster
16:00 glusterbot kkeithley_: (#1) php calls the stat() system call for every include. This triggers a self-heal check which makes most php software slow as they include hundreds of small files. See http://joejulian.name/blog/optimizing-web-performance-with-glusterfs/ for details., or (#2) It could also be worth mounting fuse with glusterfs --attribute-timeout=HIGH --entry-timeout=HIGH --negative-timeout=HIGH --fopen-keep-cache
16:00 nerdcore kkeithley_: will do thx
16:01 haomaiwang joined #gluster
16:01 shyam nerdcore: kkeithley_: If PHP is notorious for lookups, then using the lookup-optimize switch should help with -ve lookup performance (assuming that is the problem ;) )
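A hedged sketch of both suggestions, assuming a volume named web mounted at /var/www; the option names are the usual 3.7-era ones but should be checked against the installed release:

    # Negative-lookup optimisation on the volume (the lookup-optimize switch above).
    gluster volume set web cluster.lookup-optimize on

    # Fuse mount with longer kernel cache timeouts, per glusterbot's second tip
    # (timeout values, in seconds, are illustrative).
    mount -t glusterfs -o attribute-timeout=600,entry-timeout=600,negative-timeout=600,fopen-keep-cache server1:/web /var/www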
16:02 nerdcore shyam: I'm very new to GlusterFS. Where do enable such a switch?
16:02 shyam nerdcore: one sec... finding the right documentation for the same
16:03 shyam nerdcore: Use this doc to profile, https://gluster.readthedocs.org/en/latest/Administrator%20Guide/Monitoring%20Workload/
16:03 glusterbot Title: Monitoring Workload - Gluster Docs (at gluster.readthedocs.org)
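The profiling that guide describes boils down to a few commands (volume name illustrative):

    # Turn on io-stats collection, run the workload, then read the per-fop counters.
    gluster volume profile web start
    gluster volume profile web info
    gluster volume profile web stop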
16:09 ayma joined #gluster
16:13 nerdcore thanks! Juggling a few tasks ATM but I'm eager to look into this some time today and tomorrow
16:14 jiffin joined #gluster
16:20 skoduri joined #gluster
16:28 plarsen joined #gluster
16:29 calavera joined #gluster
16:35 overclk joined #gluster
16:37 Manikandan joined #gluster
16:38 Slashman joined #gluster
16:40 haomaiwa_ joined #gluster
16:41 ivan_rossi left #gluster
16:47 skylar joined #gluster
16:47 rjoseph joined #gluster
16:52 ju5t joined #gluster
16:52 ju5t hi, if we have an existing gluster set up with two bricks of 1TB each and we want to expand it, do you have to expand it with bricks of the same size like you generally do with a raid set?
16:53 ju5t or is gluster aware of the available disk space on each brick?
16:58 nerdcore ju5t: I was able to increase the space on my Gluster volume yesterday by increasing the size of the underlying filesystems on each brick. I'm using a replica setup and simply took each brick offline and increased its underlying FS size and brought it back online. Once all bricks were enlarged the gluster volume was automagically larger
16:59 ju5t nerdcore: we need to expand the volume with two additional bricks, but i assume it's not going to be a problem since we will create a new replica set of 2 and add that into the volume
17:00 msvbhat ju5t: Yes, you can add more bricks to expand the volume.
17:01 kovshenin joined #gluster
17:01 ju5t msvbhat: i know, i'm wondering though if the replica set needs to be of the same size as the original one
17:01 haomaiwa_ joined #gluster
17:03 vmallika joined #gluster
17:04 msvbhat ju5t: Need not be
17:05 msvbhat ju5t: The new replica set can be of bigger size
17:07 ju5t ok, that sounds good, thanks
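A sketch of that expansion, assuming the existing volume is a 2-brick replica named data and the new (possibly larger) pair lives on server3 and server4:

    # Bricks are added in multiples of the replica count; this adds a second
    # replica pair, turning the volume into a distributed-replicated one.
    gluster volume add-brick data server3:/export/brick1 server4:/export/brick1

    # Spread existing files across the new bricks.
    gluster volume rebalance data start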
17:14 owlbot joined #gluster
17:16 owlbot` joined #gluster
17:17 kotreshhr joined #gluster
17:18 dblack joined #gluster
17:19 ju5t joined #gluster
17:38 XpineX joined #gluster
17:41 mlncn joined #gluster
17:55 F2Knight joined #gluster
18:01 haomaiwa_ joined #gluster
18:06 skylar joined #gluster
18:09 mhulsman joined #gluster
18:12 arcolife joined #gluster
18:14 Rapture joined #gluster
18:15 deniszh joined #gluster
18:22 Manikandan joined #gluster
18:31 nerdcore Is GlusterFS known to perform better on a certain underlying FS type such as XFS? Should I prefer XFS over Ext4 for some specific reasons?
18:36 Jmainguy xfs is new and shiny, it must be better
18:36 Jmainguy its just gotta be
18:36 Jmainguy they say to use xfs in all the docs so I do, I never really checked into why
18:38 Amun_Ra xfs can get corrupt on unclean shutdown
18:38 Amun_Ra famous Input/Output error, been there done that
18:39 Amun_Ra rare guest but still
18:53 kotreshhr joined #gluster
18:53 marlinc joined #gluster
18:55 nerdcore Jmainguy: AFAIK the docs refer almost exclusively to XFS because it is the default FS in RHEL systems ;)
18:56 JoeJulian legreffier: Try --xml
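That is, the machine-readable form of the quota listing, which is easier to parse from a script than the human-readable table (volume name illustrative):

    # Emit quota limits and usage as XML instead of the human-readable table.
    gluster volume quota myvol list --xml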
18:59 JoeJulian Amun_Ra: How long ago was that? xfs got a lot of hate a decade ago, but the code base is cleaner (imho) and has left all its old baggage behind long ago.
18:59 Amun_Ra JoeJulian: at least 5+
19:00 Amun_Ra JoeJulian: it could be closer to 10 than to 5
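For what it's worth, the brick-formatting line most Gluster guides give for XFS is along these lines; the 512-byte inode size leaves room for Gluster's extended attributes (device and paths are illustrative):

    # Typical brick preparation from the install guides.
    mkfs.xfs -i size=512 /dev/sdb1
    mkdir -p /export/brick1
    mount /dev/sdb1 /export/brick1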
19:01 haomaiwa_ joined #gluster
19:01 hagarth joined #gluster
19:02 chirino joined #gluster
19:09 cjellick joined #gluster
19:14 gzcwnk joined #gluster
19:16 mobaer joined #gluster
19:18 mmckeen joined #gluster
19:38 rafi joined #gluster
19:47 RedW joined #gluster
20:01 haomaiwa_ joined #gluster
20:01 kotreshhr left #gluster
20:16 gildub joined #gluster
20:33 josh joined #gluster
20:33 josh left #gluster
20:36 mhulsman joined #gluster
21:01 64MAACTMX joined #gluster
21:11 turkleton joined #gluster
21:12 turkleton Hey folks. Is anyone particularly familiar with GlusterNFS? I have a two node replicated brick set up, and we're testing automated recovery. When one of the nodes goes away, the nodes mounting the still up GlusterNFS node's export seem to get interrupted. Any idea why?
21:18 JoeJulian turkleton: When a server "goes away", as in it doesn't get shut down and close its TCP connections, the clients don't know that it's gone and wait for ,,(ping-timeout) for it to return. The nfs service is also a client so my suspicion is that you're seeing that wait.
21:18 glusterbot turkleton: The reason for the long (42 second) ping-timeout is because re-establishing fd's and locks can be a very expensive operation. Allowing a longer time to reestablish connections is logical, unless you have servers that frequently die.
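That timeout is tunable per volume if 42 seconds is too long for a given environment, with the caveat glusterbot gives about the cost of re-establishing connections (volume name illustrative):

    # Shorten how long clients wait for a silently-dead server before giving up on it.
    gluster volume set myvol network.ping-timeout 10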
21:20 turkleton We're building out a two node replicated brick Gluster storage layer at AWS, and we're working to ensure fault-tolerance in the event of a failure with full automated recovery. Due to the nature of FUSE filesystems being slow, we're using the built-in NFS server.
21:20 turkleton I was hoping that when one storage node went away, the other storage node would continue to export GlusterNFS to the nodes in the corresponding AZ. Is this an incorrect assumption?
21:25 JoeJulian No, it's not.
21:26 turkleton So, my assumption is correct that GlusterNFS should stay up on the living storage node?
21:26 JoeJulian Yes
21:26 turkleton Hmm. Any idea why it went away?
21:27 turkleton I'll test again and watch the logs to see what specifically is happening
21:30 mhulsman joined #gluster
21:33 JoeJulian Right, check the nfs log. Also, "went away" is somewhat ambiguous, which is why I did all that typing up there to describe something that I guessed it might mean.
21:37 turkleton Gotcha. I'll check the NFS log. I see some stale file handles and also noticed that Gluster was dirty, running a heal first.
21:37 JoeJulian Which version are you running?
21:38 jmarley joined #gluster
21:43 turkleton 3.6.6
21:58 mlncn_ joined #gluster
22:01 haomaiwa_ joined #gluster
22:03 turkleton Wrapping up to go eat, and then heading to Trans-Siberian Orchestra. I'll probably check out some of the Gluster auto recovery stuff again when I get home. If not, I'll work on it first thing in the morning to see if NFS failing is systemic or one-off.
22:07 mlncn joined #gluster
23:01 haomaiwa_ joined #gluster
23:32 haomaiwa_ joined #gluster
23:35 cristov joined #gluster
23:54 sc0 joined #gluster
