
IRC log for #gluster, 2014-12-12


All times shown according to UTC.

Time Nick Message
00:08 harish joined #gluster
00:10 _pol joined #gluster
00:11 tdasilva joined #gluster
00:13 plarsen joined #gluster
00:13 harish joined #gluster
00:20 TrDS left #gluster
00:20 harish joined #gluster
00:21 _Bryan_ joined #gluster
00:42 jaank joined #gluster
00:57 jaank joined #gluster
01:16 bala joined #gluster
01:17 lalatenduM joined #gluster
01:28 calisto joined #gluster
01:29 MacWinner joined #gluster
01:34 harish joined #gluster
01:40 deniszh joined #gluster
01:42 deniszh1 joined #gluster
02:04 haomaiwang joined #gluster
02:09 calisto joined #gluster
02:26 jaank joined #gluster
02:38 soumya joined #gluster
02:54 hagarth joined #gluster
02:55 bala joined #gluster
03:19 bala joined #gluster
03:41 kshlm joined #gluster
03:41 bharata-rao joined #gluster
03:45 atinmu joined #gluster
03:53 meghanam_ joined #gluster
03:53 meghanam joined #gluster
04:03 pdrakeweb joined #gluster
04:12 saurabh joined #gluster
04:13 nbalacha joined #gluster
04:16 ppai joined #gluster
04:18 nishanth joined #gluster
04:22 RameshN joined #gluster
04:25 khelll joined #gluster
04:33 ndarshan joined #gluster
04:34 sachin joined #gluster
04:35 DV joined #gluster
04:37 itisravi joined #gluster
04:39 hagarth joined #gluster
04:42 kanagaraj joined #gluster
04:47 rafi joined #gluster
04:49 rjoseph joined #gluster
04:53 anoopcs joined #gluster
04:54 shubhendu joined #gluster
05:04 bala joined #gluster
05:07 atinmu joined #gluster
05:11 kumar joined #gluster
05:12 anoopcs joined #gluster
05:14 meghanam joined #gluster
05:14 prasanth_ joined #gluster
05:15 nbalacha joined #gluster
05:16 glusterbot News from newglusterbugs: [Bug 1173414] glusterd: remote locking failure when multiple synctask transactions are run <https://bugzilla.redhat.com/show_bug.cgi?id=1173414>
05:16 lalatenduM joined #gluster
05:19 zerick joined #gluster
05:22 anoopcs joined #gluster
05:27 jiffin joined #gluster
05:31 bala joined #gluster
05:32 kdhananjay joined #gluster
05:34 TvL2386 joined #gluster
05:45 atalur joined #gluster
05:45 nshaikh joined #gluster
05:45 poornimag joined #gluster
05:51 ramteid joined #gluster
05:53 atinmu joined #gluster
05:55 sahina joined #gluster
06:03 atinmu JustinClift, u thr?
06:07 soumya joined #gluster
06:11 edong23 joined #gluster
06:12 raghu` joined #gluster
06:15 badone joined #gluster
06:24 jvandewege joined #gluster
06:30 bala joined #gluster
06:31 badone joined #gluster
06:37 itisravi_ joined #gluster
06:44 zerick joined #gluster
06:45 atinmu joined #gluster
06:48 anil joined #gluster
06:53 atalur joined #gluster
07:00 lalatenduM joined #gluster
07:13 nbalacha joined #gluster
07:17 jtux joined #gluster
07:17 soumya joined #gluster
07:18 XpineX joined #gluster
07:22 ctria joined #gluster
07:46 rgustafs joined #gluster
07:47 glusterbot News from newglusterbugs: [Bug 1173437] [RFE] changes needed in snapshot info command's xml output. <https://bugzilla.redhat.com/show_bug.cgi?id=1173437>
07:53 [Enrico] joined #gluster
08:00 karnan joined #gluster
08:06 mator_ joined #gluster
08:06 marcoceppi_ joined #gluster
08:07 edong23_ joined #gluster
08:07 hybrid5121 joined #gluster
08:07 anil_ joined #gluster
08:08 tvb joined #gluster
08:08 tvb hi guys
08:08 J^Man joined #gluster
08:08 tvb when I run a `du` on two different bricks the sizes don't match up; they are 20GB apart
08:09 tvb I have gluster running in Replicate type
08:09 tvb how can I 'sync' those two bricks again?
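(A du mismatch between replica bricks often just reflects pending self-heals. A minimal sketch of the usual commands, assuming a replica volume named "myvol":)
    gluster volume heal myvol info   # list entries with heals still pending on each brick
    gluster volume heal myvol full   # queue a full crawl if the pending-heal index looks incomplete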
08:14 cfeller_ joined #gluster
08:17 glusterbot News from newglusterbugs: [Bug 922801] Gluster not resolving hosts with IPv6 only lookups <https://bugzilla.redhat.com/show_bug.cgi?id=922801>
08:19 nshaikh joined #gluster
08:20 kumar joined #gluster
08:21 nshaikh joined #gluster
08:22 ghenry joined #gluster
08:22 ghenry joined #gluster
08:23 karnan joined #gluster
08:25 fsimonce joined #gluster
08:27 ricky-ticky joined #gluster
08:29 Philambdo joined #gluster
08:32 liquidat joined #gluster
08:33 atalur joined #gluster
08:37 karnan joined #gluster
08:38 kaushal_ joined #gluster
08:38 mbukatov joined #gluster
08:45 shubhendu joined #gluster
08:47 prasanth_ joined #gluster
08:53 atinmu joined #gluster
08:57 rafi1 joined #gluster
09:00 SOLDIERz joined #gluster
09:01 overclk joined #gluster
09:03 badone joined #gluster
09:24 SOLDIERz joined #gluster
09:28 mator does remove-brick migrate data? the manual page for 3.5.3 says the data will be unavailable, but http://gluster.org/pipermail/gluster-users/2012-October/011502.html says otherwise
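(For reference: with the start/status/commit sequence, remove-brick migrates data off the brick before removal; only the force variant drops the brick without migration. A sketch with assumed volume and brick names:)
    gluster volume remove-brick myvol server1:/srv/brick1/myvol start    # begin migrating data off the brick
    gluster volume remove-brick myvol server1:/srv/brick1/myvol status   # watch migration progress
    gluster volume remove-brick myvol server1:/srv/brick1/myvol commit   # finalize once migration completes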
09:34 Slashman joined #gluster
09:35 vimal joined #gluster
09:38 kovshenin joined #gluster
09:41 Dw_Sn joined #gluster
09:41 atinmu joined #gluster
09:43 shubhendu joined #gluster
09:47 kshlm joined #gluster
09:48 karnan joined #gluster
09:52 elico joined #gluster
09:54 jvandewege_ joined #gluster
10:00 Pupeno joined #gluster
10:01 rafi joined #gluster
10:04 hagarth joined #gluster
10:07 marbu joined #gluster
10:10 MatteusBlanc fg
10:10 MatteusBlanc dfg
10:11 MatteusBlanc saf
10:12 MatteusBlanc d
10:16 karnan joined #gluster
10:21 anoopcs joined #gluster
10:21 smohan joined #gluster
10:27 smohan_ joined #gluster
10:30 anoopcs joined #gluster
10:44 badone joined #gluster
10:46 ndarshan joined #gluster
10:47 glusterbot News from newglusterbugs: [Bug 1173513] mount.glusterfs fails to check return of mount command. <https://bugzilla.redhat.com/show_bug.cgi?id=1173513>
10:47 glusterbot News from newglusterbugs: [Bug 1173515] mount.glusterfs fails to check return of mount command. <https://bugzilla.redhat.com/show_bug.cgi?id=1173515>
10:49 karnan joined #gluster
10:54 LebedevRI joined #gluster
10:56 atalur joined #gluster
11:01 Dw_Sn joined #gluster
11:09 fsimonce joined #gluster
11:10 diegows joined #gluster
11:12 ricky-ti1 joined #gluster
11:14 badone joined #gluster
11:14 karnan joined #gluster
11:30 ninkotech_ joined #gluster
11:30 ninkotech joined #gluster
11:34 mator joined #gluster
11:42 sage_ joined #gluster
11:46 feeshon joined #gluster
11:47 deniszh joined #gluster
11:49 badone joined #gluster
11:51 SOLDIERz joined #gluster
11:55 anoopcs joined #gluster
11:58 soumya joined #gluster
12:02 kkeithley joined #gluster
12:05 deniszh1 joined #gluster
12:09 calisto joined #gluster
12:12 deniszh1 left #gluster
12:16 rolfb joined #gluster
12:22 deniszh joined #gluster
12:26 hagarth joined #gluster
12:44 bala joined #gluster
12:48 glusterbot News from newglusterbugs: [Bug 1140818] symlink changes to directory, that reappears on removal <https://bugzilla.redhat.com/show_bug.cgi?id=1140818>
12:48 glusterbot News from newglusterbugs: [Bug 847821] After disabling NFS the message "0-transport: disconnecting now" keeps appearing in the logs <https://bugzilla.redhat.com/show_bug.cgi?id=847821>
12:56 chirino joined #gluster
12:56 pcaruana joined #gluster
13:05 Slashman joined #gluster
13:05 mator I've commented on this NFS bug
13:10 smohan joined #gluster
13:19 edward1 joined #gluster
13:24 kkeithley joined #gluster
13:32 dastar joined #gluster
13:33 B21956 joined #gluster
13:37 pdrakeweb joined #gluster
13:41 deniszh joined #gluster
13:45 rgustafs joined #gluster
13:48 nbalacha joined #gluster
13:53 _br_ joined #gluster
13:55 Slashman joined #gluster
13:56 rgustafs joined #gluster
13:59 Slashman joined #gluster
14:02 deniszh1 joined #gluster
14:02 bene joined #gluster
14:04 tdasilva joined #gluster
14:07 virusuy joined #gluster
14:07 virusuy joined #gluster
14:09 _br_ Hi there, I'm currently reading up on different clustered file systems. Is there any good resource for this? E.g. when to use gluster, when to use lustrefs, etc.?
14:13 deepakcs joined #gluster
14:14 _br_ (also cross posted on #lustre)
14:16 pkoro joined #gluster
14:25 plarsen joined #gluster
14:39 bennyturns joined #gluster
14:41 bennyturns joined #gluster
14:45 mator br: depends on what your application requires
14:50 * m0zes would suggest using lustre when you can afford to lose at least 1 fte for the care and feeding of it. (and if you don't need things like replication.) ;)
14:52 _br_ Hm I see
14:52 _br_ Currently I'm just doing heavy RTFM and trying to wrap my head around these kinds of systems
14:53 _br_ It seems lustrefs works on striping, so what happens if a node goes offline? I can't access the data anymore, i.e. it's not replicated?
14:53 _br_ Is that a better use case then for gluster?
14:53 _br_ mator: m0zes: thanks btw. for the comments, really appreciate it :)
14:59 m0zes I like gluster because it is relatively simple to manage, performance is decent, and I know that even if the software completely falls to pieces and the storage nodes die, I can still get the data off the individual bricks.
15:00 _br_ m0zes: makes a lot of sense, agree
15:00 _br_ how does gluster compare to ceph? It's another variable on my radar now, and I'm getting even more confused
15:03 m0zes ceph is an interesting one. its design is slightly similar to that of lustre. in my opinion it is poised to be a "lustre-on-steroids" piece of software.
15:03 m0zes data is split into chunks across your underlying storage, can be replicated, is location aware and has mountable filesystems for practically every os.
15:05 m0zes currently only supports 1 metadata server at a time, so that gives you a single point of failure, but I know that is being actively worked on.
15:07 m0zes at least in the HPC world, you would probably end up using a combination of these types of filesystems. you could use something highly redundant, such as glusterfs or ceph, for your homedirectories. you would use something like lustre for *huge* high-speed temp space shared cluster-wide.
15:07 wushudoin joined #gluster
15:09 skippy this continues to be a problem for me: http://supercolony.gluster.org/pipermail/gluster-users/2014-December/019727.html
15:09 _br_ huh... very very helpful m0zes, thank you for elaborating on that
15:10 skippy as well as this: http://supercolony.gluster.org/pipermail/gluster-users/2014-November/019621.html
15:10 skippy if anyone has any pointers on locking or sockets, I'd appreciate it.
15:11 m0zes it all depends on what you need out of a filesystem. there are quite a few parallel filesystems in this space now; my only suggestion is to do what you are doing: research. look for pitfalls, put together some small testcases of your workload, set up some test instances of the filesystems and play.
15:11 _br_ m0zes: that makes a lot of sense... rather testing and being practical instead of trying to see it from a purely theoretical viewpoint...
15:11 _br_ m0zes: going further down the rtfm rabbit hole on this... :(
15:12 m0zes yep. if you can, simulate node failures and see how the software behaves.
15:12 m0zes they all have different goals, so a full featured analysis that would be useful for everyone is next to impossible.
15:16 sac_ joined #gluster
15:16 wushudoin joined #gluster
15:17 _br_ yeah, after reading a bit in this field I got that feeling as well
15:17 _br_ would be a fun challenge though... at least to cover this in a series of blog articles
15:18 dgandhi joined #gluster
15:18 kovshenin joined #gluster
15:20 deniszh joined #gluster
15:23 Pupeno joined #gluster
15:26 Dw_Sn joined #gluster
15:30 hchiramm_ joined #gluster
15:49 jobewan joined #gluster
15:56 tvb guys I need help
15:56 tvb How can I get my two bricks back in sync?
15:57 tvb they are different sizes
16:05 _pol joined #gluster
16:06 _pol joined #gluster
16:06 lmickh joined #gluster
16:07 tvb ok, should I expect to see output when running "gluster volume heal <volume name> full"?
16:08 tvb `gluster volume heal <volume name> info` is taking a long time to run ..
16:12 tvb hmm, I'm seeing a lot of "remote operation failed: No such file or directory" messages in glustershd.log
16:12 tvb problem?
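(When glustershd.log fills with failures like these, checking for split-brain entries is the usual next step; a sketch with the volume name assumed and the log path being the typical default:)
    gluster volume heal myvol info split-brain                               # entries that need manual resolution
    grep -E "split-brain|gfid differs" /var/log/glusterfs/glustershd.log     # narrow down the affected paths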
16:15 n-st joined #gluster
16:19 Pupeno joined #gluster
16:21 maveric_amitc_ joined #gluster
16:31 bennyturns joined #gluster
16:33 hagarth joined #gluster
16:34 feeshon joined #gluster
16:34 Maitre left #gluster
16:36 squizzi joined #gluster
16:41 khelll joined #gluster
16:48 haomaiwa_ joined #gluster
16:50 hagarth joined #gluster
16:52 vimal joined #gluster
17:03 _Bryan_ joined #gluster
17:07 theron joined #gluster
17:21 rotbeard joined #gluster
17:31 hagarth joined #gluster
17:32 fandi joined #gluster
17:40 TrDS joined #gluster
17:45 shubhendu joined #gluster
17:49 glusterbot News from newglusterbugs: [Bug 1161893] volume no longer available after update to 3.6.1 <https://bugzilla.redhat.com/show_bug.cgi?id=1161893>
17:49 soumya joined #gluster
17:51 rjoseph joined #gluster
17:53 Dw_Sn joined #gluster
18:06 PeterA joined #gluster
18:07 theron joined #gluster
18:32 calisto joined #gluster
18:39 Pupeno_ joined #gluster
18:43 edward1 joined #gluster
18:45 lalatenduM joined #gluster
19:08 maveric_amitc_ joined #gluster
19:12 maveric_amitc_ joined #gluster
19:19 glusterbot News from newglusterbugs: [Bug 1173725] Glusterd fails when script set_geo_rep_pem_keys.sh is executed on peer <https://bugzilla.redhat.com/show_bug.cgi?id=1173725>
19:40 plarsen joined #gluster
19:42 tdasilva joined #gluster
19:49 glusterbot News from newglusterbugs: [Bug 1173732] Glusterd fails when script set_geo_rep_pem_keys.sh is executed on peer <https://bugzilla.redhat.com/show_bug.cgi?id=1173732>
19:49 glusterbot News from resolvedglusterbugs: [Bug 1173725] Glusterd fails when script set_geo_rep_pem_keys.sh is executed on peer <https://bugzilla.redhat.com/show_bug.cgi?id=1173725>
20:03 calisto joined #gluster
20:08 coredump joined #gluster
20:28 tdasilva joined #gluster
20:28 squizzi joined #gluster
20:31 MacWinner joined #gluster
20:46 free_amitc_ joined #gluster
21:02 zerick joined #gluster
21:03 zerick joined #gluster
21:06 badone joined #gluster
21:08 elico joined #gluster
21:31 pdrakeweb joined #gluster
21:37 kovshenin joined #gluster
21:46 maveric_amitc_ joined #gluster
22:01 blkperl joined #gluster
22:01 blkperl hi, I'm trying to delete a file and I'm getting an input/output error
22:02 blkperl gluster 3.4
22:06 m0ellemeister joined #gluster
22:20 pdrakeweb joined #gluster
22:36 JoeJulian blkperl: Check the client log and/or "gluster volume heal $volname info split-brain"
22:36 plarsen joined #gluster
22:46 free_amitc_ joined #gluster
22:50 badone joined #gluster
22:57 deniszh1 joined #gluster
22:59 theron joined #gluster
23:11 jaank joined #gluster
23:13 kovshenin joined #gluster
23:14 blkperl JoeJulian: thanks, checking on those
23:16 maveric_amitc_ joined #gluster
23:16 M28 joined #gluster
23:16 M28 what directory should I be putting my gluster bricks?
23:17 M28 it complains if I try to put it in /srv/
23:18 Pupeno joined #gluster
23:20 blkperl JoeJulian: the log says heal failed, the command says heal success
23:22 blkperl JoeJulian: W [afr-common.c:1505:afr_conflicting_iattrs] 0-ded-demo01-replicate-0: /home/demo/.gitconfig: gfid differs on subvolume 1
23:23 JoeJulian blkperl: You're not using the bricks directly, are you?
23:23 blkperl no
23:23 JoeJulian weird, I haven't seen that error in a long time.
23:24 JoeJulian You'll have to treat it like split-brain. Pick the good one and remove the other from the brick.
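(A sketch of the usual manual fix, with the brick path assumed and the volume name taken from the pasted log line: on the server holding the copy you decide is bad, remove the file and its gfid hard link, then let self-heal copy the good version back.)
    # run on the server with the bad copy; /srv/brick1/ded-demo01 is an assumed brick path
    getfattr -n trusted.gfid -e hex /srv/brick1/ded-demo01/home/demo/.gitconfig   # note the gfid
    rm /srv/brick1/ded-demo01/home/demo/.gitconfig
    # also remove the matching hard link under <brick>/.glusterfs/<aa>/<bb>/<gfid>
    gluster volume heal ded-demo01 full    # or simply stat the file through a client mount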
23:24 JoeJulian M28: more info please
23:24 M28 uh
23:25 M28 basically I wanted to put it in /srv/brick1 and /srv/brick2
23:25 M28 it told me it can't put it on the root partition
23:26 JoeJulian Ah, ok. It /can/ but that's not normally what you would want to do. To override that use the keyword "force" at the end of the cli command.
23:29 M28 yeah I ended up doing that
23:29 M28 but what's the recommended place to put the bricks, then?
23:29 partner hmm, would glusterfs be upset if I (ab)used its min-free-disk functionality by writing a large file to the mountpoint root to prevent further writes to that brick?
23:31 partner large being large enough to make it go under the threshold, i.e. instruct it to write "elsewhere"
23:31 JoeJulian Bricks would normally be their own filesystem. Typically you would be taking your large block storage device(s), putting a filesystem on it, then mounting it somewhere (eg /srv/brick1) and using a subdirectory on that brick for the brick root (eg /srv/brick1/myvol)
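(A sketch of that layout, with device, server, and volume names assumed; "force" is only needed when a brick sits on the root filesystem, as in the /srv case above:)
    mkfs.xfs -i size=512 /dev/sdb1    # dedicated filesystem for the brick
    mkdir -p /srv/brick1
    mount /dev/sdb1 /srv/brick1       # plus a matching fstab entry
    mkdir /srv/brick1/myvol           # subdirectory used as the actual brick root
    gluster volume create myvol replica 2 server1:/srv/brick1/myvol server2:/srv/brick1/myvol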
23:32 partner goal being, I have a dozen boxes; the oldest ones are full and under the threshold while the newest, with free space, get most of the writes
23:32 JoeJulian partner: That's one way to do it, yes.
23:33 JoeJulian As long as you don't want your old files to grow.
23:33 partner thought so too.. I could ship the full boxes away with the oldest, least accessed data
23:33 blkperl JoeJulian++ thanks its all better now
23:33 partner never modified, write once, only deletions are possible
23:33 glusterbot blkperl: JoeJulian's karma is now 17
23:35 partner of course I would need to play it the other way around as well: lower the limit to allow writes to the old nodes and at the same time make the newer boxes go even lower..
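(The same steering can also be done per volume by adjusting cluster.min-free-disk instead of parking a filler file on the brick; a sketch, volume name assumed:)
    gluster volume set myvol cluster.min-free-disk 20%    # percentage, or an absolute size such as 100GB
    gluster volume info myvol                             # the change shows under "Options Reconfigured"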
23:45 M28 JoeJulian, got it, thanks
23:50 tessier_ joined #gluster
