
IRC log for #gluster, 2014-11-22


All times shown according to UTC.

Time Nick Message
00:02 krullie joined #gluster
00:03 coredump joined #gluster
00:08 gomikemike joined #gluster
00:11 JoeJulian That's what it is, yes.
00:11 JoeJulian posix
00:22 krullie joined #gluster
00:23 plarsen joined #gluster
00:35 telmich joined #gluster
00:35 telmich good day
00:35 telmich I am trying to test gluster, but a lot of links are dead
00:35 telmich (from: http://www.gluster.org/documentation/Getting_started_common_criteria/)
00:36 telmich I was wondering, is the project dead?
00:45 JoeJulian telmich: hah, far from it.
00:45 JoeJulian Well that's lame. Missing trailing / shouldn't cause a 404...
00:55 telmich JoeJulian: alright... I will give it a try
00:55 JoeJulian I know the web site is being actively worked on. It may be a temporary problem.
00:55 telmich I am currently looking at ceph/gluster, because that is what opennebula supports... we do have a sheepdog cluster running, too, but opennebula does not (yet) support that
00:58 JoeJulian Huh, I've never heard of that one.
00:59 telmich which one?
00:59 JoeJulian sheepdog
00:59 telmich ahh, the idea is pretty cool and it is specifically designed for qemu
01:00 telmich so far gluster looks like much less pain in the... than ceph
01:01 telmich what is the default behaviour of gluster when there is a replica configuration of 2 and one node fails?
01:01 JoeJulian Yes, and everyone I talked to at LISA who had compared the two said that GlusterFS was much faster.
01:02 telmich was lisa this year already?
01:02 JoeJulian It will keep running, though if it's an unexpected failure, there is a (default 42-second) timeout while it waits for the brick to return.
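
[editor's note: the 42-second timeout mentioned above is gluster's network.ping-timeout volume option. A minimal sketch of changing it, with the volume name "testvol" purely hypothetical; the long default is deliberate, and very low values can cause spurious disconnects:]

    # lower the timeout so clients resume sooner after an unexpected brick failure
    gluster volume set testvol network.ping-timeout 20
    # once changed from the default, the option shows up in 'gluster volume info testvol'
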
01:02 JoeJulian It was last week.
01:02 telmich (I decided to skip anyway, but hey...)
01:02 JoeJulian It was here in Seattle so I had to at least put in an appearance.
01:03 telmich well, it's a bit further from switzerland
01:03 telmich speed is actually an interesting point
01:03 telmich design wise, if I have multiple disks in a server, is it recommended to create bricks on each or to use a raid below?
01:04 JoeJulian depends on use case. I've generally done a brick-per-disk and used gluster for fault tolerance.
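
[editor's note: a sketch of the brick-per-disk layout described above, assuming each disk is formatted and mounted separately; hostnames, mount points and the volume name are all hypothetical. With replica 2, bricks are paired in the order given, so listing the same disk on two servers side by side keeps the replicas on different machines and lets gluster, not RAID, provide the fault tolerance:]

    gluster volume create vmstore replica 2 \
        server1:/bricks/disk1/brick server2:/bricks/disk1/brick \
        server1:/bricks/disk2/brick server2:/bricks/disk2/brick
    gluster volume start vmstore
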
01:05 telmich is there a setting/concept of cache/keeping stuff local?
01:05 telmich the background for the question is: I plan to use either ceph or gluster for storing vm images
01:06 telmich and given that we initially go with 3 vm hosts and plan one copy, the disk image could in theory be on the "wrong" host
01:06 JoeJulian not really. Especially when it comes to cloud engineering, storage is usually kept separate from compute.
01:07 JoeJulian Sorry, I've got to run. It's 5:00 and I want to get a haircut before we go out to dinner.
01:08 telmich JoeJulian: thank you so far - enjoy your dinner
01:22 DV joined #gluster
01:23 _Bryan_ joined #gluster
01:27 msmith_ joined #gluster
01:42 hamcube joined #gluster
01:55 diegows joined #gluster
01:58 diegows joined #gluster
02:17 topshare joined #gluster
02:32 Telsin joined #gluster
02:32 al joined #gluster
02:32 fyxim_ joined #gluster
02:32 elyograg joined #gluster
02:32 toti joined #gluster
02:32 ultrabizweb joined #gluster
02:32 cyberbootje joined #gluster
02:32 tomased joined #gluster
02:32 tdasilva joined #gluster
02:32 schrodinger_ joined #gluster
02:32 johnnytran joined #gluster
02:32 masterzen joined #gluster
02:32 wushudoin joined #gluster
02:32 coredump joined #gluster
02:32 krullie joined #gluster
02:32 _Bryan_ joined #gluster
02:32 feeshon joined #gluster
02:32 mator joined #gluster
02:32 dblack joined #gluster
02:32 [o__o] joined #gluster
02:32 ildefonso joined #gluster
02:32 ghenry joined #gluster
02:32 jaroug joined #gluster
02:32 hybrid512 joined #gluster
02:32 wgao joined #gluster
02:32 bjornar joined #gluster
02:32 RobertLaptop joined #gluster
02:32 eclectic joined #gluster
02:32 Intensity joined #gluster
02:41 _Bryan_ joined #gluster
02:41 feeshon joined #gluster
02:41 mator joined #gluster
02:41 dblack joined #gluster
02:41 [o__o] joined #gluster
02:46 ildefonso joined #gluster
02:46 ghenry joined #gluster
02:46 jaroug joined #gluster
02:46 hybrid512 joined #gluster
02:46 wgao joined #gluster
02:46 bjornar joined #gluster
02:46 RobertLaptop joined #gluster
02:46 eclectic joined #gluster
02:46 Intensity joined #gluster
02:46 samsaffron___ joined #gluster
02:46 eshy joined #gluster
03:15 msmith_ joined #gluster
03:56 bala joined #gluster
04:07 calisto joined #gluster
04:08 hagarth joined #gluster
04:11 kalli joined #gluster
04:19 saltsa joined #gluster
04:50 vimal joined #gluster
04:59 RameshN joined #gluster
05:04 msmith_ joined #gluster
05:05 harish joined #gluster
05:27 lyang0 joined #gluster
05:45 anoopcs joined #gluster
06:01 neofob joined #gluster
06:44 bala joined #gluster
06:53 msmith_ joined #gluster
07:00 anoopcs joined #gluster
07:13 maveric_amitc_ joined #gluster
07:48 LebedevRI joined #gluster
08:33 ekuric joined #gluster
08:42 msmith_ joined #gluster
09:03 soumya_ joined #gluster
09:32 DV joined #gluster
10:31 msmith_ joined #gluster
10:43 telmich good morning
10:44 telmich when writing data to my test gluster cluster, I very soon get the message dd: writing `/mnt/gluster/test2': Transport endpoint is not connected
10:44 telmich (reproducible)
10:44 telmich anyone have an idea what may be configured wrong?
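
[editor's note: "Transport endpoint is not connected" on a fuse mount usually means the client lost its connection to one or more bricks. A hedged first round of checks, with the volume name "testvol" and the mount log path hypothetical:]

    gluster peer status               # all peers should show 'Peer in Cluster (Connected)'
    gluster volume status testvol     # every brick process should be Online 'Y'
    tail -n 50 /var/log/glusterfs/mnt-gluster.log   # the client log names the brick it lost
    # firewalls are a common culprit: glusterd listens on 24007, bricks on 49152+ (3.4 and later)
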
11:10 krullie joined #gluster
11:10 coredump joined #gluster
11:10 wushudoin joined #gluster
11:10 masterzen joined #gluster
11:10 johnnytran joined #gluster
11:10 schrodinger joined #gluster
11:10 tdasilva joined #gluster
11:10 tomased joined #gluster
11:10 cyberbootje joined #gluster
11:10 ultrabizweb joined #gluster
11:10 toti joined #gluster
11:10 elyograg joined #gluster
11:10 fyxim_ joined #gluster
11:10 al joined #gluster
11:10 Telsin joined #gluster
11:13 ghenry joined #gluster
11:13 hybrid512 joined #gluster
11:13 wgao joined #gluster
11:13 bjornar joined #gluster
11:13 RobertLaptop joined #gluster
11:13 eclectic joined #gluster
11:13 Intensity joined #gluster
11:34 kovshenin joined #gluster
11:35 kovshenin hey folks, I'm using 3.6 and trying to mount a gluster volume from a .vol file, but it looks like it's not reading the file at all, just gives me "Server name/volume name unspecified cannot proceed further..". any tips on where to look?
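
[editor's note: that message comes from the mount.glusterfs helper, which expects a server:/volume spec rather than a path to a .vol file. A sketch of both forms, with the server name, volume name and paths hypothetical:]

    # usual form: fetch the client volfile from a server
    mount -t glusterfs server1:/myvol /mnt/gluster
    # to use a locally written client .vol file, call the glusterfs client directly
    glusterfs --volfile=/etc/glusterfs/myvol-client.vol /mnt/gluster
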
11:51 firemanxbr joined #gluster
11:59 diegows joined #gluster
12:19 msmith_ joined #gluster
12:38 rafi1 joined #gluster
12:45 masterzen joined #gluster
13:06 anoopcs joined #gluster
13:14 hamcube joined #gluster
13:30 calisto joined #gluster
13:44 kovshenin joined #gluster
14:02 buybran joined #gluster
14:03 vimal joined #gluster
14:03 buybran hi all, I have a question: is gluster a feasible solution for building a low-cost storage cluster on Raspberry Pi-like systems using large hard drives?
14:07 mrEriksson Well, gluster can be a bit cpu intensive when load increases
14:08 msmith_ joined #gluster
14:23 marvinc joined #gluster
14:28 SOLDIERz joined #gluster
14:38 vimal joined #gluster
14:38 soumya_ joined #gluster
15:24 delhage joined #gluster
15:24 delhage joined #gluster
15:30 drankis joined #gluster
15:30 drankis Hello all! Maybe someone knows: is the CentOS 7 gluster client package working correctly? I am not able to mount a volume... :/
15:30 vimal joined #gluster
15:31 drankis Error in /var/log/glusterfs/mnt.log: [xlator.c:425:xlator_init] 0-fuse: Initialization of volume 'fuse' failed, review your volfile again
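
[editor's note: that fuse initialization error is often an environment problem rather than a broken package. A couple of hedged things to check on the client before blaming the CentOS 7 build:]

    modprobe fuse && ls -l /dev/fuse   # the fuse kernel module and device node must be present
    rpm -q glusterfs-fuse              # the mount helper needs glusterfs-fuse, not just the libs
    glusterfs --version                # a client version far from the servers' can also fail at init
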
15:38 DV joined #gluster
15:42 buybran mrEriksson: what is "increased load" in your estimation?
15:44 buybran to be clear, I am looking to use it at home, so there probably won't be much load
15:49 mrEriksson buybran: Well, I don't actually know at what point it would be a problem. But I recently moved a setup from local storage to a two-brick replicated storage, and that resulted in a load increase
15:50 buybran okay, I guess I'd need to test that out then
15:50 mrEriksson Should be noted though that this setup was filling the 1Gbps connection with reads of 4-8MB files, so it might be a bit extreme
15:51 mrEriksson Gluster did however consume most cpu time on the cores it was using
15:51 buybran have you any experience when it comes to recovery? I read that Ceph, for example, has trouble recovering when the storage node is low on memory
15:51 mrEriksson No, not really, but haven't really had any major disasters yet either :-)
15:56 buybran hehe, good for you :)
15:56 mrEriksson Yeah :)
15:57 msmith joined #gluster
16:01 hagarth joined #gluster
16:04 elico joined #gluster
16:23 kovshenin hi. is there an option to limit simultaneous number of connections from a fuse client?
16:28 krullie joined #gluster
16:33 glusterbot News from newglusterbugs: [Bug 1167012] self-heal-algorithm with option "full" doesn't heal sparse files correctly <https://bugzilla.redhat.com/show_bug.cgi?id=1167012>
16:33 marvinc left #gluster
16:51 julim joined #gluster
17:00 julim joined #gluster
17:06 nshaikh joined #gluster
17:06 julim joined #gluster
17:22 JoeJulian kovshenin: There's no option for that. What is the problem a feature like that would be looking to solve?
17:23 kovshenin I was actually hoping to raise that limit, but it looks like I was wrong; starting 32 simultaneous dd's on 32 threads, I see them all write simultaneously
17:23 JoeJulian Ah, cool.
17:25 JoeJulian I could actually think of a use case for such a setting, though. A potential DoS if a rogue client mounted out-of-control... I'll have to see if I can break that.
17:29 kovshenin :)
17:30 kovshenin what would be some good tips to get better write performance? I can read at about 50mb/s but writes are only at 3 mb/s, using fuse client...
17:33 JoeJulian Don't use the fuse client, use libgfapi if possible.
17:34 JoeJulian Don't to replica 16
17:34 JoeJulian s/to/do/
17:34 glusterbot What JoeJulian meant to say was: An error has occurred and has been logged. Check the logs for more informations.
17:34 JoeJulian Oh really...
17:36 JoeJulian Other than network saturation or disk performance, fuse performance problems are all about context switching, so faster ram, faster cpus, faster bus...
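
[editor's note: the libgfapi route suggested above means having the application talk to gluster directly instead of going through the fuse mount. For VM images, qemu (built with gluster support, 1.3 and later) can open images over libgfapi via gluster:// URLs; a sketch, with the server, volume and image names hypothetical:]

    # create and attach an image over libgfapi, bypassing fuse entirely
    qemu-img create -f qcow2 gluster://server1/vmstore/vm1.qcow2 20G
    qemu-system-x86_64 -drive file=gluster://server1/vmstore/vm1.qcow2,if=virtio
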
17:46 msmith joined #gluster
17:47 kovshenin k, will look into it, thanks a lot!
17:56 drankis joined #gluster
17:56 baojg joined #gluster
18:04 baojg joined #gluster
18:14 calisto joined #gluster
18:19 nshaikh joined #gluster
18:24 msmith_ joined #gluster
18:25 msmith_ joined #gluster
18:27 msmith_ joined #gluster
18:31 shubhendu joined #gluster
18:34 baojg joined #gluster
18:50 bennyturns joined #gluster
18:57 RicardoSSP joined #gluster
18:58 ildefonso joined #gluster
19:00 maveric_amitc_ joined #gluster
19:18 _Bryan_ joined #gluster
19:50 msmith_ joined #gluster
20:23 baojg joined #gluster
20:25 nshaikh left #gluster
20:34 rshott joined #gluster
20:52 daMaestro joined #gluster
21:07 ghenry joined #gluster
21:19 calisto joined #gluster
21:28 failshell joined #gluster
21:36 sage_ joined #gluster
21:53 baojg joined #gluster
22:00 PatNarciso joined #gluster
22:01 PatNarciso wow. it's been nine months since I've been here. How's it going, gang? Hi Glusterbot.
22:07 PatNarciso What's the ideal method for monitoring real-time file writes with gluster?  debug/trace translator?
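
[editor's note: besides the debug/trace translator, gluster ships profile and top commands that give a live per-brick view of I/O without changing the volume graph. A sketch, with the volume name "myvol" hypothetical:]

    gluster volume profile myvol start
    gluster volume profile myvol info            # cumulative and interval latency per fop, per brick
    gluster volume top myvol write list-cnt 10   # files with the most write calls
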
22:26 haomaiwa_ joined #gluster
22:26 necrogami joined #gluster
22:29 msmith_ joined #gluster
22:52 diegows joined #gluster
23:24 baojg joined #gluster
23:43 msmith_ joined #gluster
