IRC log for #gluster, 2016-10-22


All times shown according to UTC.

Time Nick Message
00:06 cholcombe joined #gluster
00:51 hagarth joined #gluster
01:16 jkroon joined #gluster
01:35 kramdoss_ joined #gluster
01:46 aj__ joined #gluster
01:48 ilbot3 joined #gluster
01:48 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:51 Lee1092 joined #gluster
02:37 DV_ joined #gluster
02:38 yawkat joined #gluster
02:53 blu__ joined #gluster
03:04 arpu joined #gluster
03:23 luizcpg_ joined #gluster
03:28 mybe joined #gluster
03:30 gem joined #gluster
03:49 Wizek_ joined #gluster
04:03 riyas joined #gluster
04:57 kpease joined #gluster
05:43 jiffin joined #gluster
05:45 atinm joined #gluster
06:47 mss joined #gluster
06:58 riyas joined #gluster
08:02 elastix joined #gluster
08:14 raghug joined #gluster
08:16 cholcombe joined #gluster
08:29 elastix anyone online?
08:47 ndevos joined #gluster
09:02 Philambdo joined #gluster
09:07 decay joined #gluster
09:18 jiffin joined #gluster
09:18 hchiramm__ joined #gluster
09:23 marc_888 joined #gluster
09:26 marc_888 Hello guys
09:26 marc_888 Do you know if GlusterFS 3.8 is stable now? It's a little confusing on the website :(
09:29 kblin left #gluster
09:45 pfactum marc_888: yes
10:16 marc_888 joined #gluster
10:20 armin joined #gluster
10:28 marc_888 joined #gluster
10:36 post-factum joined #gluster
10:42 mss joined #gluster
10:43 chris_ joined #gluster
10:43 chris_ can somebody help me?
10:44 chris_ I'm trying to set up glusterfs
10:44 chris_ I need more info
10:49 urlator joined #gluster
10:56 urlator joined #gluster
10:58 urlator joined #gluster
11:01 urlator joined #gluster
11:18 ndevos joined #gluster
11:20 marc_888 Is there anyone here who has deployed GlusterFS using the Ansible gluster_volume module?
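No answer follows in the log. As a rough sketch only, a gluster_volume task might look like the following; the hostnames, brick path, and volume name gv0 are illustrative assumptions, not taken from this conversation, and parameter values should be checked against the module documentation for your Ansible version.

    # Hypothetical playbook task using the gluster_volume module.
    # Hostnames, brick path and volume name are examples only.
    - name: Create and start a replica 2 gluster volume
      gluster_volume:
        state: present
        name: gv0
        bricks: /bricks/brick1/gv0
        replicas: 2
        cluster:
          - gluster1.example.com
          - gluster2.example.com
      run_once: true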
11:41 kramdoss_ joined #gluster
12:10 johnmilton joined #gluster
12:33 luizcpg joined #gluster
12:41 johnmilton joined #gluster
12:54 Pupeno joined #gluster
12:57 harish joined #gluster
13:11 msvbhat joined #gluster
13:15 Philambdo joined #gluster
13:19 Pupeno joined #gluster
13:21 uebera|| joined #gluster
13:34 panina joined #gluster
13:34 panina Hello
13:34 glusterbot panina: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
13:36 panina I'm pondering GlusterFS vs DRBD. Are there any steps that might speed up client read/write times when accessing a GlusterFS volume?
13:36 atinm joined #gluster
13:37 panina I'm hoping to achieve performance close to NFS for the clients. Load on the storage servers is less important. The clients will mainly be a couple of Proxmox servers, running around 20 VMs.
13:39 panina I see a lot of options in setting up GlusterFS systems, but I wonder if there are any special settings that might speed things up. Also, I'm wondering if striping or replicating will have any effect on client read/write speeds.
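As a hedged sketch of the kind of tuning being asked about: client-side throughput is usually adjusted through volume options rather than at mount time. The volume name gv0 and all values below are illustrative assumptions, not recommendations; suitable values depend on workload and available RAM.

    # Illustrative only; check each option against your gluster version's defaults.
    gluster volume set gv0 performance.cache-size 1GB
    gluster volume set gv0 performance.client-io-threads on
    gluster volume set gv0 performance.write-behind-window-size 4MB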
14:04 kramdoss_ joined #gluster
14:06 harish joined #gluster
14:07 panina joined #gluster
14:08 harish joined #gluster
14:14 Philambdo joined #gluster
14:19 hchiramm__ joined #gluster
14:23 harish joined #gluster
14:27 Roland- joined #gluster
14:28 Roland- hi folks, just created a gluster volume with 4 nodes, replica 2. But only 1 disk per node for now; performance is quite meh ... 40 MB/s ish
14:28 Roland- question: if I add bricks/drives to that volume, will the performance increase?
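For context, expanding a distributed-replicated volume is done in multiples of the replica count, roughly as sketched below; the hostnames, brick paths, and volume name gv0 are illustrative assumptions.

    # Add one more replica pair to an existing replica 2 volume, then spread existing data onto it.
    gluster volume add-brick gv0 server5:/bricks/b1/gv0 server6:/bricks/b1/gv0
    gluster volume rebalance gv0 start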
14:35 jkroon joined #gluster
14:36 hajoucha joined #gluster
14:39 hajoucha hi, I have two fedora boxes with infiniband and glusterfs 3.8.5 (default repository) - I can mount volume over tcp but cannot do so with rdma transport. However, I have verified pure rdma connectivity and that works OK (e.g. rping) - so the problem seems to be in gluster itself. Does anyone have similar experience? I have run out of ideas ...
14:42 hajoucha there is always a "RDMA_CM_EVENT_REJECTED, error 8" in the logs....
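For reference, rdma mounts only work if the volume itself was created with rdma in its transport list. A minimal sketch, where the hostnames, brick paths, and volume name are illustrative assumptions; this does not by itself explain the RDMA_CM_EVENT_REJECTED error above.

    # Create the volume with both transports enabled, then mount over rdma.
    gluster volume create gv0 transport tcp,rdma ib1:/bricks/b1/gv0 ib2:/bricks/b1/gv0
    gluster volume start gv0
    mount -t glusterfs -o transport=rdma ib1:/gv0 /mnt/gv0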
14:43 kramdoss_ joined #gluster
14:43 aravindavk_ joined #gluster
14:44 atrius_ joined #gluster
14:55 Lee1092 joined #gluster
15:07 Philambdo joined #gluster
15:14 elastix joined #gluster
15:25 renout_away joined #gluster
15:28 moss joined #gluster
15:48 Gnomethrower joined #gluster
15:49 Roland- I am unable to exceed 30 MB/s with 4 nodes
15:49 Roland- even in replicated distributed
15:53 elastix why?
15:54 jkroon joined #gluster
15:56 Roland- no idea but I was expecting more tbh
15:56 Roland- these are indeed sas 15k drives
15:56 Roland- each can do 150 MB/s
15:58 misc could it be the network then?
15:58 Roland- 10G, x520
15:58 Roland- empty
15:59 Roland- network perf is very good, mtu 9k
15:59 Roland- is such a result expected?
16:00 Roland- or what should I expect on a 4 node distributed replicated
16:01 misc it kinda depends on the setup
16:01 misc (ie, replicated, distributed, etc)
16:01 misc but I am also surprised by your result
16:02 elastix ssd?
16:02 elastix 15k disks; which OS are you using?
16:19 ndevos joined #gluster
16:19 ndevos joined #gluster
16:20 elastix joined #gluster
16:22 elastix joined #gluster
16:25 riyas joined #gluster
16:32 Roland- centos 7.2
16:32 Roland- 15k disks
16:32 Roland- 1 for each node
16:32 Roland- interconnected through 10g
16:40 elastix 10g?
16:45 Jacob843 joined #gluster
16:59 Pupeno joined #gluster
17:04 elastix joined #gluster
17:08 Roland- yes, these are 4 blades in an m1000e chassis
17:08 Roland- with intel x520 nics
17:08 Roland- and m8024-k switches
17:08 elastix ok
17:08 elastix guys by default gluster is replicated?
17:09 elastix I mean if I have a cluster of 2 servers, server1 and server2 .. if the gluster directory is /glusterdir
17:09 Roland- so if someone knows, what would be a guess of the expected performance for 4 nodes, replica 2?
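One hedged way to narrow this down is to measure the disks and the network directly on the servers, taking gluster out of the picture; the brick path and hostname below are illustrative assumptions.

    # Raw write speed of a single brick filesystem, bypassing the page cache.
    dd if=/dev/zero of=/bricks/b1/ddtest bs=1M count=1024 oflag=direct
    # Raw network throughput between two nodes (with iperf3 -s running on server2).
    iperf3 -c server2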
17:09 elastix is this glusterdir replicated on both server1 and server2?
17:15 elastix ?
17:16 panina elastix, the docs say that glusterfs is distributed. It is replicated only if you specify it.
17:17 panina joined #gluster
17:18 ndevos_ joined #gluster
17:20 Philambdo joined #gluster
17:25 ndevos joined #gluster
17:25 ndevos joined #gluster
17:31 panina joined #gluster
17:36 ndevos joined #gluster
17:50 ndevos_ joined #gluster
17:50 ndevos_ joined #gluster
17:53 mss joined #gluster
17:53 plarsen joined #gluster
17:56 elastix if I don't specify anything, where are the files stored by default? Some chunks on server1 and some others on server2?
17:59 panina It depends on your setup. If you don't specify anything, GlusterFS will be started in distributed mode. In this mode, the files will be randomly distributed.
18:00 panina Some files on server1, and some on server2. If you have created the volume in Striped mode, it'll be as you describe. Chunks of files will be evenly distributed between servers.
18:00 panina ...according to the documentation, that is.
18:02 post-factum elastix: don't use striped volumes
18:02 post-factum those are deprecated
18:02 post-factum use sharding instead
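A minimal sketch of enabling sharding on an existing volume; the volume name gv0 and the shard size are illustrative assumptions, and sharding only applies to files written after it is turned on.

    # Enable sharding; existing files are not re-sharded.
    gluster volume set gv0 features.shard on
    gluster volume set gv0 features.shard-block-size 64MB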
18:02 Roland- oh
18:03 panina post-factum, I'm a noob here. Are the docs at gluster.readthedocs.io out-of-date?
18:04 post-factum dunno. shouldn't be
18:04 post-factum just ask
18:04 post-factum info here is the most up-to-date one
18:05 panina post-factum, the docs do not mention sharding, they describe striped volumes.
18:06 panina post-factum, and I'm afraid you're the only source of info in here in 5 hours, so it's a bit tricky to learn from the info here.
18:06 elastix but in the case of sharding or striped mode... if I go on server1 into the gluster directory, will I see all the files?
18:06 jiffin joined #gluster
18:06 panina I'm trying to find out how to optimize glusterfs as a storage backend for 3+ proxmox servers.
18:07 post-factum panina: what do you want on weekend?
18:07 post-factum you'd better drop mail to -users ML
18:07 panina post-factum up to date documentation.
18:07 post-factum panina: lel
18:07 post-factum panina: good luck
18:07 Roland- it's best practice to have a single disk per "brick", right? Am I better off doing a raid0 on 4 disks than adding 4 bricks per node?
18:08 post-factum Roland-: it depends
18:08 post-factum elastix: via fuse mountpoint?
18:08 Roland- well I have 4 servers, each having 4 drives, need only two replicas, looking for performance, especially write (this is a backup cluster)
18:09 post-factum Roland-: planning shrinking/expanding?
18:09 Roland- no expanding for the moment, maybe in a year or two. definitely no shrinking
18:10 post-factum Roland-: 1 disk — 1 brick
18:11 panina Why? And when would other setups be better?
18:12 marc_888 joined #gluster
18:12 post-factum panina: because expanding is planned and because raid0 reliability is low. i'd better lose and rebuild 1 disk and not the whole raid
18:13 post-factum Roland-: replica 3 arbiter 1 would be better, though
18:13 post-factum Roland-: i'd say this layout should be default and recommended
18:14 Roland- I see, but that loses a bit of the storage space?
18:18 post-factum Roland-: yup
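A minimal sketch of the replica 3 arbiter 1 layout recommended above; the hostnames, brick paths, and volume name gv0 are illustrative assumptions. The third brick in each replica set is the arbiter and stores only metadata, which is why the space cost is small.

    # Two data bricks plus one arbiter brick per replica set.
    gluster volume create gv0 replica 3 arbiter 1 \
        server1:/bricks/b1/gv0 server2:/bricks/b1/gv0 server3:/bricks/arb/gv0
    gluster volume start gv0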
18:18 post-factum Roland-: ceph could be better for your setup
18:19 post-factum Roland-: just add all disks as osds, fire up replica 2 with proper crushmap, and it will do the trick
18:19 post-factum unfortunately, gluster is less flexible
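A rough sketch of what the suggested ceph setup might look like once the disks are running as OSDs; the pool name, PG count, and image name/size are illustrative assumptions, and the custom crushmap tweaking mentioned above is omitted.

    # Create a pool for VM images and set its replica count to 2.
    ceph osd pool create vmstorage 128 128
    ceph osd pool set vmstorage size 2
    # Create a 32 GiB RBD image in that pool (Proxmox's RBD storage plugin normally does this itself).
    rbd create vmstorage/vm-100-disk-1 --size 32768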
18:19 panina post-factum, do you have a recommendation for me? I've got two file servers, two bricks each. They'll be serving 3 Proxmox machines (in HA setup).
18:19 elastix yes, via fuse
18:19 post-factum elastix: all files
18:19 post-factum panina: yes. go and grab some beer
18:20 panina I'll be afk for 15 mins in a bit
18:20 post-factum panina: okay, then find 3rd server for ceph mon and go grab some beer
18:20 post-factum panina: what will you store there? vm images?
18:21 panina yep
18:21 panina Possibly homedirs too, but mainly VMs
18:21 post-factum panina: gluster is fs storage, not vm, unfortunately
18:21 post-factum panina: you may do that, but i'd go with ceph rbd for vm storage
18:21 post-factum panina: and homedirs on glusterfs for sure
18:21 Roland- hmz
18:22 post-factum panina: 1 tool for 1 task
18:22 Roland- could my issues have been from not using fuse?
18:22 Roland- slow?
18:22 Roland- I mean, I know fuse is supposed to be faster
18:22 post-factum Roland-: wut?
18:22 panina hm. I'll look into ceph. Mainly came to GlusterFS because Proxmox seems to like it.
18:22 panina Or, the proxmox people. They have native support in the dist.
18:22 post-factum panina: gluster should be fine. but for vm you have to be very careful
18:23 post-factum panina: default volume options are not suitable for vm storage
18:23 panina Btw the homedirs would probably be wrapped within a VM, but I'm not set on that point.
18:23 panina Careful how?
18:23 panina Yeah, I gathered that.
18:24 post-factum panina: check /etc/gluster or /etc/glusterfs for vmstorage-specific options
18:24 post-factum don't remember exactly
18:24 panina ok
18:24 panina But you say comments in the package's files are the best source for info?
18:26 panina And ceph might be better for VM's in general? More built for the purpose?
18:28 post-factum wat? comments?
18:28 post-factum best source — read the fscking commit messages @ github
18:28 panina Aiit
18:29 post-factum panina: ceph is a universal thing. not sure if cephfs is stable enough
18:29 post-factum but ceph rbd is
18:29 post-factum for sure
18:29 post-factum i'm the proof of it
18:29 panina You're replicated using ceph rbd?
18:30 post-factum yup, replica 2 for vm storage pool
18:30 post-factum but glusterfs for file storage
18:30 panina Thanks for all the input, friend
18:31 panina I'm off for now.
18:32 post-factum no friendship please
18:35 elastix but if I see all the files, how is it possible that some files are on server1 and some on server2??? and last question: how can I know which files are on server1 and which are on server2?
18:48 elastix ?
18:48 jkroon joined #gluster
18:54 ndevos joined #gluster
18:55 prth joined #gluster
19:03 prth joined #gluster
19:12 panina joined #gluster
19:16 post-factum elastix: magic
19:16 post-factum just read the instruction
19:24 skoduri joined #gluster
19:25 elastix but I can't understand it from the doc...
19:25 elastix :)
19:28 elastix :(
19:29 post-factum this is how distributed fs works
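To answer the earlier question about locating files: one way to see which brick (and therefore which server) holds a given file is the pathinfo extended attribute, queried on the FUSE mount point. The mount path and file name below are illustrative assumptions.

    # Prints the backend brick path(s) for the file, including the server name.
    getfattr -n trusted.glusterfs.pathinfo /mnt/gv0/some/file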
19:29 rastar joined #gluster
19:34 ZachLanich joined #gluster
19:35 prth joined #gluster
19:36 jkroon joined #gluster
19:53 Philambdo joined #gluster
20:04 jkroon joined #gluster
20:31 panina joined #gluster
20:39 jkroon joined #gluster
21:16 ZachLanich joined #gluster
21:34 ndevos joined #gluster
21:34 ndevos joined #gluster
22:06 uebera|| joined #gluster
22:10 ndevos joined #gluster
22:10 ndevos joined #gluster
22:11 Gambit15 joined #gluster
22:24 muneerse2 joined #gluster
22:31 PatNarciso_ Fellas, during normal watermark maintenance, I've got a tiered volume that [E]rrors 'Demotion failed', after a file [E]rrors 'Migrate file failed', after the same file [W]arns 'File has locks. Skipping file migration'... but also [E]rrors 'No space left on device'.
22:32 PatNarciso_ In my case it's OK that these files didn't migrate, if there are established locks. IMO: the file is open, so I'm happy it's in the hot tier. During watermark maintenance, I see this as an [I]nfo message, not a W||E. This, I suggest, is an annoyance, as spawned [E]rrors should be reviewed by an admin, and this behavior seems to be normal. On the other hand, if this was spawned during a 'tier detach', then yes: I agree this should spawn an
22:32 PatNarciso_ error, failing a successful 'tier detach' process.
22:32 PatNarciso_ but... the [E]rror 'No space left on device' is totally wrong:  all bricks (hot and cold) have plenty of space.
22:32 PatNarciso_ If this is a known issue, please lmk so I can like/subscribe.  If this is new, and you'd benefit from a submission: lmk and I'll copy/paste + add further details if suggested.
22:44 aj__ joined #gluster
23:43 ndevos joined #gluster
