
IRC log for #gluster, 2014-09-15


All times shown according to UTC.

Time Nick Message
00:02 natgeorg joined #gluster
00:02 Nuxr0 joined #gluster
00:04 bala1 joined #gluster
00:04 khanku_ joined #gluster
00:04 glusterbot` joined #gluster
00:04 tru_tru_ joined #gluster
00:06 sijis joined #gluster
00:06 sijis joined #gluster
00:07 delhage_ joined #gluster
00:08 vu joined #gluster
00:09 foster joined #gluster
00:13 glusterbot joined #gluster
00:15 ninkotech_ joined #gluster
00:15 ThatGraemeGuy joined #gluster
00:19 ghenry joined #gluster
00:37 [o__o] joined #gluster
00:41 recidive joined #gluster
00:55 sputnik13 joined #gluster
01:22 lyang0 joined #gluster
01:33 sputnik13 joined #gluster
01:39 harish__ joined #gluster
02:07 wgao joined #gluster
02:10 haomaiwa_ joined #gluster
02:41 kshlm joined #gluster
02:54 Jacob2 joined #gluster
03:01 ninkotech_ joined #gluster
03:20 bharata-rao joined #gluster
03:26 side_con1rol joined #gluster
03:32 haomaiwa_ joined #gluster
03:33 nbalachandran joined #gluster
03:35 nbalachandran joined #gluster
03:38 haomai___ joined #gluster
04:00 kdhananjay joined #gluster
04:00 jiku joined #gluster
04:02 meghanam joined #gluster
04:02 meghanam_ joined #gluster
04:02 shylesh__ joined #gluster
04:03 RameshN joined #gluster
04:05 atalur joined #gluster
04:18 kanagaraj joined #gluster
04:24 bala joined #gluster
04:26 gildub joined #gluster
04:28 rafi1 joined #gluster
04:28 Rafi_kc joined #gluster
04:29 rafi2 joined #gluster
04:30 RameshN joined #gluster
04:31 anoopcs joined #gluster
04:37 shubhendu joined #gluster
04:38 atinmu joined #gluster
04:38 aravindavk joined #gluster
04:40 nbalachandran joined #gluster
04:45 deepakcs joined #gluster
04:54 nishanth joined #gluster
04:55 spandit joined #gluster
04:56 ramteid joined #gluster
04:58 sputnik13 joined #gluster
04:58 _NiC joined #gluster
04:59 ppai joined #gluster
05:02 side_control joined #gluster
05:14 hagarth joined #gluster
05:28 rjoseph joined #gluster
05:33 glusterbot New news from newglusterbugs: [Bug 1141639] [SNAPSHOT]: output correction in setting snap-max-hard/soft-limit for system/volume <https://bugzilla.redhat.com/show_bug.cgi?id=1141639>
05:35 RameshN_ joined #gluster
05:54 mariusp joined #gluster
05:54 R0ok_ joined #gluster
05:54 jiffin joined #gluster
05:55 nbalachandran joined #gluster
05:59 R0ok_ joined #gluster
06:05 TvL2386 joined #gluster
06:06 soumya_ joined #gluster
06:06 glusterbot New news from resolvedglusterbugs: [Bug 1115806] [SNAPSHOT]: Attaching a new node to the cluster while snapshot delete was in progress, deleted snapshots successfully but gluster snapshot list shows some of the snaps are still present <https://bugzilla.redhat.com/show_bug.cgi?id=1115806>
06:28 jtux joined #gluster
06:35 R0ok_ joined #gluster
06:43 ekuric joined #gluster
06:49 harish_ joined #gluster
06:56 lalatenduM joined #gluster
07:03 raghu joined #gluster
07:03 mariusp joined #gluster
07:04 mariusp joined #gluster
07:06 ekuric joined #gluster
07:07 atalur joined #gluster
07:09 dmyers joined #gluster
07:13 MickaTri joined #gluster
07:14 MickaTri left #gluster
07:14 haomaiwa_ joined #gluster
07:17 atinmu joined #gluster
07:24 aravindavk joined #gluster
07:24 ranjan joined #gluster
07:25 RameshN joined #gluster
07:25 RameshN_ joined #gluster
07:26 hagarth joined #gluster
07:29 haomai___ joined #gluster
07:29 glusterbot New news from newglusterbugs: [Bug 1141659] OSX LaunchDaemon plist file should be org.gluster... instead of com.gluster... <https://bugzilla.redhat.com/show_bug.cgi?id=1141659> || [Bug 1139103] DHT + Snapshot :- If snapshot is taken when Directory is created only on hashed sub-vol; On restoring that snapshot Directory is not listed on mount point and lookup on parent is not healing <https://bugzilla.redhat.com/show_bug.cgi?i
07:44 frankS2 joined #gluster
07:44 frankS2 Hi, does glusterfs have a lock daemon? if so, when did it get it?
07:47 anoopcs joined #gluster
07:55 frankS2 I'm considering glusterfs for a rails application. Does anyone here have experience with that?
07:55 kasturi joined #gluster
07:55 harish_ joined #gluster
07:59 rjoseph joined #gluster
07:59 MickaTri joined #gluster
07:59 mhoungbo joined #gluster
07:59 saurabh joined #gluster
07:59 RaSTar joined #gluster
07:59 Philambdo joined #gluster
07:59 nshaikh joined #gluster
07:59 ndarshan joined #gluster
07:59 bala joined #gluster
07:59 20WABI7EY joined #gluster
07:59 20WABI7CW joined #gluster
07:59 ninkotech_ joined #gluster
07:59 Pupeno joined #gluster
07:59 glusterbot New news from newglusterbugs: [Bug 1141665] OSX LaunchDaemon plist file should be org.gluster... instead of com.gluster... <https://bugzilla.redhat.com/show_bug.cgi?id=1141665>
07:59 ninkotech_ joined #gluster
08:00 liquidat joined #gluster
08:05 kshlm joined #gluster
08:05 kdhananjay joined #gluster
08:05 meghanam joined #gluster
08:05 meghanam_ joined #gluster
08:05 shylesh__ joined #gluster
08:05 kanagaraj joined #gluster
08:05 shubhendu joined #gluster
08:05 nishanth joined #gluster
08:05 spandit joined #gluster
08:05 ppai joined #gluster
08:05 jiffin joined #gluster
08:05 soumya_ joined #gluster
08:06 lalatenduM joined #gluster
08:06 raghu joined #gluster
08:06 ekuric joined #gluster
08:06 hagarth joined #gluster
08:06 anoopcs joined #gluster
08:10 atinmu joined #gluster
08:11 aravindavk joined #gluster
08:13 tru_tru joined #gluster
08:21 ninkotech_ joined #gluster
08:26 nbalachandran joined #gluster
08:35 aravindavk joined #gluster
08:36 hagarth joined #gluster
08:43 diegows joined #gluster
08:46 anoopcs joined #gluster
08:49 RameshN__ joined #gluster
08:53 RameshN joined #gluster
08:54 aravindavk joined #gluster
08:59 glusterbot New news from newglusterbugs: [Bug 1141682] The extras/MacOSX directory is no longer needed, and should be removed <https://bugzilla.redhat.com/show_bug.cgi?id=1141682> || [Bug 1141683] The extras/MacOSX directory is no longer needed, and should be removed <https://bugzilla.redhat.com/show_bug.cgi?id=1141683>
09:11 rgustafs joined #gluster
09:11 vimal joined #gluster
09:23 RameshN__ joined #gluster
09:24 RameshN joined #gluster
09:27 Thilam joined #gluster
09:27 Thilam hi
09:27 glusterbot Thilam: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
09:41 Thilam I have a problem with my gluster cluster since friday
09:41 Thilam when I print the status I have this :
09:42 Thilam Brick projet1:/glusterfs/projets-brick1/projets         49155   Y       3394
09:42 Thilam Brick projet2:/glusterfs/projets-brick2/projets         49154   Y       2791
09:42 Thilam Brick projet3:/glusterfs/projets-brick3/projets         N/A     Y       2776
09:42 Thilam NFS Server on localhost                                 2049    Y       32395
09:42 Thilam NFS Server on projet3                                   2049    Y       25435
09:42 Thilam NFS Server on projet2                                   2049    Y       12855
09:42 Thilam the port of the third brick isn't set
09:42 Thilam and I really don't understand why
09:43 Thilam I've restarted the service on the brick, and the server itself but it did not change anything
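[editor's note: a minimal troubleshooting sketch for a brick that reports no port in "gluster volume status", assuming the volume is named "projets" as in Thilam's paste; the brick log filename shown is an assumption, since it is derived from the brick path and varies by setup.]

    # Re-check which brick process is missing its port (run on any node)
    gluster volume status projets

    # On projet3, look in the brick log for why the process failed to register a port
    # (assumed path; gluster names brick logs after the brick directory)
    less /var/log/glusterfs/bricks/glusterfs-projets-brick3-projets.log

    # Restart only the bricks that are down, without disturbing the healthy ones
    gluster volume start projets force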
10:08 RaSTar joined #gluster
10:18 gildub joined #gluster
10:25 karnan joined #gluster
10:28 calum_ joined #gluster
10:30 Slashman joined #gluster
10:30 glusterbot New news from newglusterbugs: [Bug 1125312] Disperse xlator issues in a 32 bits environment <https://bugzilla.redhat.com/show_bug.cgi?id=1125312> || [Bug 1139170] DHT :- rm -rf is not removing stale link file and because of that unable to create file having same name as stale link file <https://bugzilla.redhat.com/show_bug.cgi?id=1139170>
10:36 ackjewt joined #gluster
10:40 rafi1 joined #gluster
10:48 RameshN joined #gluster
10:48 gluslog joined #gluster
10:52 RameshN__ joined #gluster
10:56 mariusp joined #gluster
10:56 recidive joined #gluster
10:56 meghanam joined #gluster
10:57 meghanam_ joined #gluster
10:57 tdasilva joined #gluster
11:09 rjoseph joined #gluster
11:10 spandit joined #gluster
11:10 nishanth joined #gluster
11:12 DV joined #gluster
11:13 kkeithley joined #gluster
11:18 LebedevRI joined #gluster
11:19 mojibake joined #gluster
11:31 nbalachandran joined #gluster
11:38 nbalachandran joined #gluster
11:40 mariusp joined #gluster
11:43 nshaikh joined #gluster
11:52 RaSTar joined #gluster
12:02 soumya_ joined #gluster
12:05 sputnik13 joined #gluster
12:12 sputnik13 joined #gluster
12:18 B21956 joined #gluster
12:21 Slashman joined #gluster
12:23 B21956 joined #gluster
12:26 spandit joined #gluster
12:29 edward joined #gluster
12:31 rjoseph joined #gluster
12:34 hagarth joined #gluster
12:36 plarsen joined #gluster
12:45 LHinson joined #gluster
12:48 bennyturns joined #gluster
12:52 julim joined #gluster
12:53 jmarley joined #gluster
12:56 kanagaraj joined #gluster
12:56 diegows joined #gluster
13:00 glusterbot New news from newglusterbugs: [Bug 1131502] Fuse mounting of a tcp,rdma volume with rdma as transport type always mounts as tcp without any fail <https://bugzilla.redhat.com/show_bug.cgi?id=1131502>
13:03 elico joined #gluster
13:12 shubhendu joined #gluster
13:13 deeville joined #gluster
13:19 mkzero joined #gluster
13:21 bala joined #gluster
13:25 chirino joined #gluster
13:35 RaSTar joined #gluster
13:35 clutchk joined #gluster
13:38 tdasilva joined #gluster
13:42 deeville Could you give me advice on which volume type would give me the fastest writes possible over a 2-node glusterfs setup? Distributed?
13:42 deeville Thanks
13:45 toordog_wrk deeville distributed yes
13:45 bala joined #gluster
13:45 toordog_wrk but your speed will depend on other factors like network speed and so on
13:46 rafi1 joined #gluster
13:48 deeville toordog_wrk, thanks for the reply. ya I just want the network to be saturated, 10GbE in this case.
13:56 toordog_wrk your disk will be your slow object in the chain
13:56 toordog_wrk deeville if you are using a volume to host many small files, like for a web server, you should consider mounting the volume on the client via NFS
13:56 toordog_wrk http://gluster.org/community/documentation/index.php/GlusterFS_Technical_FAQ, look at the last question in the FAQ
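[editor's note: a minimal sketch of the two mount styles being compared, with an assumed volume name "myvol" on host "server1"; gluster's built-in NFS server speaks NFSv3, so vers=3 matters on clients that default to NFSv4.]

    # Native FUSE client: talks to every brick directly and handles failover itself
    mount -t glusterfs server1:/myvol /mnt/myvol

    # NFS mount against gluster's built-in NFS server: kernel-side attribute caching
    # tends to help workloads with many small files, at the cost of a single access point
    mount -t nfs -o vers=3,mountproto=tcp server1:/myvol /mnt/myvol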
14:00 hchiramm joined #gluster
14:05 dblack joined #gluster
14:06 ndarshan joined #gluster
14:09 mariusp joined #gluster
14:10 wushudoin| joined #gluster
14:12 ramteid joined #gluster
14:16 jobewan joined #gluster
14:25 bennyturns joined #gluster
14:25 failshell joined #gluster
14:25 ThatGraemeGuy_ joined #gluster
14:25 bennyturns joined #gluster
14:34 jbrooks joined #gluster
14:35 jbrooks joined #gluster
14:36 _NiC joined #gluster
14:42 oxae joined #gluster
14:43 R0ok_ joined #gluster
14:46 necrogami joined #gluster
14:59 ekuric joined #gluster
15:01 glusterbot New news from newglusterbugs: [Bug 1136810] Inode leaks upon repeated touch/rm of files in a replicated volume <https://bugzilla.redhat.com/show_bug.cgi?id=1136810>
15:05 _Bryan_ joined #gluster
15:25 lalatenduM joined #gluster
15:26 jdarcy joined #gluster
15:26 chirino joined #gluster
15:28 jdarcy joined #gluster
15:28 soumya_ joined #gluster
15:29 kmai007 joined #gluster
15:30 kmai007 guys what does it mean if I run a "gluster volume heal <vol> info" and my output is like so.  http://fpaste.org/133685/07950271/
15:30 glusterbot Title: #133685 Fedora Project Pastebin (at fpaste.org)
15:30 kmai007 while info split-brain doesn't say anything is "wrong"
15:31 kmai007 http://fpaste.org/133687/79504414/
15:31 glusterbot Title: #133687 Fedora Project Pastebin (at fpaste.org)
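[editor's note: the two commands kmai007 is comparing, with an assumed volume name "myvol"; entries listed by "heal info" but absent from "info split-brain" are usually files still pending self-heal rather than actual split-brain.]

    # Files and gfids that have heals pending
    gluster volume heal myvol info

    # Only the subset the cluster considers split-brain
    gluster volume heal myvol info split-brain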
15:33 hchiramm joined #gluster
15:39 JayJ does anyone have details behind the performance io-cache setting?
15:40 JayJ like what exactly I would gain from a performance standpoint and would it be a good idea to set this on a server acting as both hypervisor and Gluster node
15:43 sputnik13 joined #gluster
15:44 MickaTri who are you glusterbot ?
15:44 bene2 joined #gluster
15:46 MickaTri glusterbot : are you a bot ?
15:46 ekuric1 joined #gluster
15:57 * glusterbot is not a bot
15:57 * glusterbot is THE BOT
15:58 skippy I, for one, welcome our new robot overlord.
15:59 mojibake JayJ: If you have lots of volumes in your setup, the 32MB default could eat up your RAM quickly.
16:01 glusterbot New news from newglusterbugs: [Bug 1125134] Not able to start glusterd <https://bugzilla.redhat.com/show_bug.cgi?id=1125134> || [Bug 1138897] NetBSD port <https://bugzilla.redhat.com/show_bug.cgi?id=1138897>
16:01 JayJ i only have 1 volume, 4 bricks per, 30 nodes
16:03 JayJ each of these hypervisors are 2x westmere @ 96GB ram
16:04 mojibake JayJ: Then it looks like you can increase it, but you should test whether it makes any difference.
16:05 JayJ what should I increase it to - factors of 32?  I have one guy who initially implemented a basic deployment about a year or 2 ago for something else saying that 1GB would be a good idea
16:09 JayJ openstack typically hits the cpu threshold i have before it runs out of memory
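[editor's note: a minimal sketch of how the io-cache size discussed above is actually changed, with an assumed volume name "myvol"; 1GB is only the figure floated in the conversation, not a recommendation, and the io-cache translator runs on the client side, so the RAM cost is paid per mount.]

    # Raise the io-cache size for the volume (default is 32MB)
    gluster volume set myvol performance.cache-size 1GB

    # Verify it shows up under "Options Reconfigured"
    gluster volume info myvol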
16:11 PeterA joined #gluster
16:14 semiosis hagarth: JoeJulian: JustinClift: icymi, freenode recommends changing your password due to a recent breach
16:15 semiosis please let avati know as well since he's the owner of this channel
16:23 jbrooks joined #gluster
16:24 hagarth semiosis: done, thanks! will also let avati know about this.
16:37 justyns joined #gluster
16:38 justyns joined #gluster
16:38 justyns joined #gluster
16:44 sijis semiosis: on last week's talk.. you thought maybe adding 'noatime' and 'nodiratime' may help with slow tar'ing of a directory, right?
16:44 semiosis sounds like me
16:45 sijis ok. was trying to look at backlog and couldn't find the exact message
16:47 nullck joined #gluster
16:49 sijis semiosis: does there happen to be any bug filed on slow tar of many small files anywhere?
16:49 semiosis idk
16:51 sijis do you know much about 'direct-io-mode'?
16:56 failshel_ joined #gluster
16:58 semiosis @direct-io
16:58 glusterbot semiosis: Bug 2173 prevents disabling direct-io-mode from a mount command (it's disabled by default). You can enable direct-io by mounting using the glusterfs command, ie. glusterfs --direct-io-mode=enable <etc...>
16:58 semiosis that's all I know
16:58 semiosis and probably very outdated
16:59 sijis @noatime
16:59 failshel_ joined #gluster
16:59 semiosis nope
16:59 semiosis @o-direct
16:59 semiosis @o_direct
16:59 glusterbot semiosis: The problem is, fuse doesn't support direct io mode prior to the 3.4 kernel. For applications which insist on using direct-io, see this workaround: https://github.com/avati/liboindirect
16:59 semiosis a ha!
17:00 semiosis that's old too
17:00 semiosis idk what the current state of direct io is
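[editor's note: a minimal sketch expanding the glusterbot factoid above, with assumed names (volume "myvol", server "server1", mount point /mnt/myvol); as semiosis says, this may be outdated for newer releases.]

    # Mount by invoking the glusterfs binary directly so direct-io-mode can be set
    glusterfs --direct-io-mode=enable --volfile-server=server1 --volfile-id=myvol /mnt/myvol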
17:03 LHinson joined #gluster
17:12 dtrainor joined #gluster
17:16 Peanut joined #gluster
17:17 failshell joined #gluster
17:24 deeville toordog_wrk, thanks again! Good to know NFS is better for many small file reads
17:31 kmai007 has any gluster specialist got a chance to see my heal question
17:34 nage joined #gluster
17:42 chirino_m joined #gluster
17:45 diegows joined #gluster
17:53 JayJ so i changed the io-cache value to 1GB and am still running VMs as expected - where would the caching occur? server side or client side?  I set it on the volume
17:53 JayJ I'm going to try and push out a large # of VMs to see if I can load down the 20g link between all the hosts and stuff
17:58 sijis if the client mounts a volume using NFS instead of the glusterfs client, does it still fail over if that host dies or the volume goes offline?
17:59 sijis like it does with the native client, or will i have to do manually update the mount to point to the other gluster server
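[editor's note: a minimal sketch of the native-client behaviour sijis is referring to, with assumed names "server1"/"server2" and volume "myvol"; the server named in a native mount is only used to fetch the volume file, and the backup option below (spelled backupvolfile-server in older releases) covers the case where it is down at mount time. An NFS mount, by contrast, stays pinned to the one server in the mount line unless a floating IP (e.g. CTDB or keepalived) is placed in front of the NFS servers.]

    # Native client: fall back to server2 for the volfile if server1 is unreachable at mount time
    mount -t glusterfs -o backup-volfile-servers=server2 server1:/myvol /mnt/myvol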
18:09 y4m4 joined #gluster
18:11 nshaikh joined #gluster
18:17 dtrainor joined #gluster
18:30 ThatGraemeGuy joined #gluster
18:42 side_con1rol joined #gluster
18:46 ekuric joined #gluster
18:49 chirino joined #gluster
19:01 side_control joined #gluster
19:23 oxae joined #gluster
19:46 plarsen joined #gluster
19:50 oxae_ joined #gluster
19:50 Pupeno_ joined #gluster
20:02 glusterbot New news from newglusterbugs: [Bug 1141940] Mount -t glusterfs never completes and all file-system commands hang <https://bugzilla.redhat.com/show_bug.cgi?id=1141940>
20:04 sputnik13 joined #gluster
20:04 dmyers semiosis: do you know if it makes sense to add more than one brick on a node to a gluster cluster? would it help performance or make it worse? i can't figure that out
20:04 semiosis it can make sense, depending on your use case
20:04 semiosis if your files are small relative to brick size then many bricks is a good thing imho
20:04 dmyers oh wow
20:05 dmyers that's actually my use case
20:05 semiosis faster to sync up a brick
20:05 semiosis if you have to replace one
20:05 dmyers i just thought it adds more failure points on a single node but i think i will try that
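[editor's note: a minimal sketch of the layout dmyers is considering - two nodes, two bricks each - with hypothetical host and path names; in a "replica 2" create, bricks are paired in the order listed, so each replica pair should span both nodes.]

    gluster volume create myvol replica 2 \
        node1:/data/brick1/myvol node2:/data/brick1/myvol \
        node1:/data/brick2/myvol node2:/data/brick2/myvol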
20:07 sputnik13 joined #gluster
20:08 tom[] joined #gluster
20:11 sputnik13 joined #gluster
20:20 Pupeno joined #gluster
20:28 PeterA joined #gluster
20:30 LHinson joined #gluster
20:31 Pupeno joined #gluster
20:37 zerick joined #gluster
20:42 Pupeno joined #gluster
20:49 Pupeno_ joined #gluster
20:49 oxae joined #gluster
20:55 deeville joined #gluster
20:56 sputnik13 joined #gluster
20:58 deeville joined #gluster
21:04 Jamoflaw joined #gluster
21:05 Pupeno joined #gluster
21:26 failshel_ joined #gluster
21:35 Pupeno joined #gluster
21:42 elico joined #gluster
21:43 DV joined #gluster
21:46 Pupeno joined #gluster
21:52 PeterA joined #gluster
21:54 georgeh joined #gluster
21:57 Pupeno joined #gluster
22:06 Pupeno_ joined #gluster
22:09 Pupeno joined #gluster
22:23 uebera|| joined #gluster
22:24 capri joined #gluster
22:37 Pupeno joined #gluster
22:41 Pupeno joined #gluster
23:04 Pupeno_ joined #gluster
23:06 Pupeno joined #gluster
23:08 Freman joined #gluster
23:08 Freman Greetings
23:09 Freman Does gluster re-distribute a file that's overwritten with exactly the same content? or does it just redistribute the updated timestamps?
23:14 tiglog joined #gluster
23:20 JoeJulian I assume you're asking if you write over a file with the same contents, are those contents re-written to disk. Yes they are.
23:21 Freman are they re-distributed though?
23:21 Freman across all nodes (assuming replication setup)
23:21 JoeJulian That's not how distribute works.
23:22 JoeJulian @lucky dht
23:22 glusterbot JoeJulian: http://en.wikipedia.org/wiki/Dihydrotestosterone
23:22 JoeJulian gah
23:22 JoeJulian @lucky dht distribute hash
23:22 glusterbot JoeJulian: http://en.wikipedia.org/wiki/Distributed_hash_table
23:22 JoeJulian ^
23:23 * Freman confesses to not knowing how gluster works beyond setting up 3 'bricks' and running a handful of commands to get them to share content :D
23:23 JoeJulian You're looking at it backwards.
23:24 Freman probably - I've been asked to use it to push content out to a bunch of machines, the script that creates this content just overwrites the files over and over
23:24 JoeJulian Bricks are storage for GlusterFS (only). GlusterFS is your datastore for your services.
23:25 Freman I'm suspicious that this isn't the best way
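[editor's note: a minimal sketch of the distinction JoeJulian is drawing, with hypothetical names; content should only be written and read through a client mount of the volume, never by touching the brick directories on the servers, and replication to the other bricks happens at write time.]

    # Wrong: copying files straight into a brick directory behind gluster's back
    cp report.html /data/brick1/myvol/

    # Right: mount the volume somewhere and write through it
    mount -t glusterfs server1:/myvol /mnt/myvol
    cp report.html /mnt/myvol/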
23:43 gluslog_ joined #gluster
23:47 DV_ joined #gluster
23:48 mkzero_ joined #gluster
23:48 tru_tru_ joined #gluster
23:49 nullck_ joined #gluster
23:50 ghenry_ joined #gluster
23:50 Peanut___ joined #gluster
23:52 tom][ joined #gluster
23:53 uebera|| joined #gluster
23:53 clutchk joined #gluster
23:54 gmcwhistler joined #gluster
23:55 diegows joined #gluster
23:55 zerick joined #gluster
