
IRC log for #gluster, 2016-08-11


All times shown according to UTC.

Time Nick Message
00:04 decay joined #gluster
00:05 shyam joined #gluster
00:14 Javezim @anoopcs Yeah I'm not losing connection
00:14 Javezim as the scripts are continuously processing
00:14 Javezim Just at a super slow rate
00:38 Alghost joined #gluster
00:49 masuberu joined #gluster
00:51 masuberu joined #gluster
00:55 masuberu joined #gluster
00:59 masuberu joined #gluster
01:04 gem_ joined #gluster
01:16 derjohn_mobi joined #gluster
01:18 masuberu I have been reading this document and I think it is amazing https://s3.amazonaws.com/aws001/guided_trek/Performance_in_a_Gluster_Systemv6F.pdf
01:18 masuberu the only thing is that it is from 2011, is there any updated version?
01:19 masuberu they claim that gluster performs better on small files because it does not have a metadata model
01:22 masuberu however I went to a red hat talk about storage 2 days ago and they said that ceph performs better than gluster on smaller files, but on the other hand ceph has a metadata server... so my question is why gluster used to say that metadata was a problem for small files but now ceph performs better?
01:31 Lee1092 joined #gluster
01:39 Javezim @JoeJulian I mentioned an issue yesterday where files being overwritten are duplicating, and you said you had seen this on a prior version. Did you know of a way to fix/prevent it from happening on that version that may apply here?
01:42 shdeng joined #gluster
01:47 ilbot3 joined #gluster
01:47 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:54 nishanth joined #gluster
01:56 al joined #gluster
01:58 harish_ joined #gluster
02:03 om joined #gluster
02:32 JoeJulian Javezim: no, before it was a split-brain related situation where the gfid didn't match between bricks. That should no longer happen.
02:32 JoeJulian Regardless, check the ,,(extended attributes) and check the client and brick logs to begin diagnosing the problem
02:32 glusterbot (#1) To read the extended attributes on the server: getfattr -m .  -d -e hex {filename}, or (#2) For more information on how GlusterFS uses extended attributes, see this article: http://pl.atyp.us/hekafs.org/index.php/2011/04/glusterfs-extended-attributes/
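A minimal sketch of that check, with placeholder brick paths and file name; run it against the same file on each replica brick and compare the trusted.gfid and trusted.afr.* values:

    # on server A (hypothetical brick path)
    getfattr -m . -d -e hex /data/brick1/gv0/path/to/file
    # on server B
    getfattr -m . -d -e hex /data/brick2/gv0/path/to/file
    # the trusted.gfid values must match; non-zero trusted.afr.* counters
    # indicate pending changes against the other brick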
02:33 john51 joined #gluster
02:34 JoeJulian masuberu: a metadata server, being a critical point of failure, should perform better but be less reliable. Standard CAP theory.
02:53 Javezim @JoeJulian What happens if there is no second Brick trusted.afr when you are going through the following - https://gluster.readthedocs.io/en/latest/Troubleshooting/split-brain/
02:54 Javezim Ie. When it says - In the example volume given above, all files in brick-a will have 2 entries, one for itself and the other for the file present in its replica pair
02:54 Javezim When I run the command for the file on the brick, there is only one entry
02:54 Javezim but the file does exist on both bricks
03:01 Gambit15 joined #gluster
03:08 masuberu JoeJulian: thank you that makes sense
03:11 masuberu JoeJulian: what are the bottlenecks for intensive IOPs operations (like reading and copying small files)? Is it CPU and memory?
03:26 julim joined #gluster
03:34 magrawal joined #gluster
03:35 shubhendu_ joined #gluster
03:38 RameshN joined #gluster
03:40 om Hi all, I intermittently get glusterfs-client mount point defunct.  It shows this as the mount point permissions and the glusterfs-client server needs a reboot to recover:
03:40 om mail.spectroweb.com
03:40 om whoops!
03:41 om d?????????  ? ?        ?           ?            ?
03:41 om ^ the perms look like that on the mount point.... very odd
03:41 om it typically happens when there is high IO on the glusterfs file system
03:41 om the mounted one
03:43 om using glusterfs on ubuntu 14 v. 3.7.8
03:43 om any ideas?
03:44 om I have only found one forum thread discussing this so far, and it suggests upgrading gluster to 3.3 ...  not helpful
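When a FUSE mount ends up in that state, a lazy unmount plus remount is often enough to recover without a reboot, assuming the servers are still reachable; the mount point, log name and volume below are placeholders:

    # check the client log around the time the mount went bad
    tail -n 100 /var/log/glusterfs/mnt-gluster.log
    # detach the dead mount and remount it
    umount -l /mnt/gluster
    mount -t glusterfs server1:/gv0 /mnt/gluster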
03:46 dnunez joined #gluster
03:46 itisravi joined #gluster
03:48 sandersr joined #gluster
03:48 nishanth joined #gluster
03:50 the-me joined #gluster
03:51 om when gfs client server is under heavy IO load on the gfs fs, it also throws input/output errors on the mount point when using ls -la
03:52 minimicro joined #gluster
03:52 thatgraemeguy joined #gluster
03:52 thatgraemeguy joined #gluster
03:52 minimicro is  rdma,tcp a valid transport identifier?
03:52 minimicro I see that tcp,rdma is
03:52 Norky joined #gluster
03:53 minimicro from reading, it seems the tcp,rdma would make TCP the default transport
03:53 minimicro and RDMA would be the backup
03:53 minimicro but I would think the opposite would be desired...
03:53 minimicro and I've had some flaky behavior when I did rdma,tcp...
03:53 coreping joined #gluster
03:53 scuttle` joined #gluster
03:54 cloph_away joined #gluster
03:56 minimicro rdma...
03:59 harish_ joined #gluster
04:01 om I use tcp. but you are right, rdma would be best for performance
04:01 om haven't tested that out yet
04:02 om http://gluster-documentations.readthedocs.io/en/master/Administrator%20Guide/RDMA%20Transport/
04:02 glusterbot Title: RDMA Transport - Gluster-Documentations (at gluster-documentations.readthedocs.io)
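For reference, both transports are declared at volume-creation time and the client picks one at mount time; a sketch with hypothetical volume and brick names, following the admin guide linked above:

    # create a volume that supports both transports
    gluster volume create gv0 replica 2 transport tcp,rdma \
        server1:/data/brick1/gv0 server2:/data/brick1/gv0
    # mount over RDMA explicitly (TCP is used if the option is omitted)
    mount -t glusterfs -o transport=rdma server1:/gv0 /mnt/gv0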
04:06 atinm joined #gluster
04:11 om I read that the gfs client has HA with the gfs servers and will connect to another brick node when the currently connected one is down or not performing well...  what are the heuristics that determine this exactly?
04:12 om this is related to my issue above perhaps... because the heavy IO happens and perhaps the gfs client tries to connect to yet another brick node but that brick might not be available??  Any ideas?
04:19 nehar joined #gluster
04:21 sanoj joined #gluster
04:31 nbalacha joined #gluster
04:32 poornimag joined #gluster
04:33 itisravi joined #gluster
04:34 jiffin joined #gluster
04:38 aravindavk joined #gluster
04:42 Jacob843 joined #gluster
04:52 Javezim @anoopcs Okay, I've started noticing we cannot access the mount points whilst getting those errors in the glusterfs-<folder>.log
04:52 Javezim The logs are also spammed with - [2016-08-10 20:37:18.666667] E [MSGID: 108006] [afr-common.c:4159:afr_notify] 0-gv0mel-replicate-1: All subvolumes are down. Going offline until atleast one of them comes back up.
04:52 Javezim whilst trying
04:54 hackman joined #gluster
04:56 aravindavk joined #gluster
05:00 atalur joined #gluster
05:01 crashmag joined #gluster
05:02 satya4ever joined #gluster
05:15 ppai joined #gluster
05:15 skoduri joined #gluster
05:18 karthik_ joined #gluster
05:19 nehar joined #gluster
05:24 hgowtham joined #gluster
05:28 ndarshan joined #gluster
05:29 Manikandan joined #gluster
05:34 rafi joined #gluster
05:38 Bhaskarakiran joined #gluster
05:39 Bhaskarakiran joined #gluster
05:39 aravindavk joined #gluster
05:41 kdhananjay joined #gluster
05:41 jvandewege_ joined #gluster
05:43 harish_ joined #gluster
05:43 Lee1092_ joined #gluster
05:44 ramky joined #gluster
05:47 devyani7_ joined #gluster
05:47 ankitraj joined #gluster
05:48 ueberall joined #gluster
05:48 ueberall joined #gluster
05:49 ashiq joined #gluster
05:49 atinm joined #gluster
05:58 mchangir joined #gluster
05:59 ramky joined #gluster
06:03 aravindavk joined #gluster
06:05 jtux joined #gluster
06:07 itisravi_ joined #gluster
06:08 anil joined #gluster
06:08 pur joined #gluster
06:12 Muthu_ joined #gluster
06:17 karnan joined #gluster
06:17 kshlm joined #gluster
06:19 msvbhat joined #gluster
06:23 aspandey joined #gluster
06:27 prasanth joined #gluster
06:29 arcolife joined #gluster
06:30 atalur joined #gluster
06:31 Javezim Anyone on 3.8.1 seeing a lot of these when utilizing the VFS Samba share? 0-gv0mel-replicate-1: All subvolumes are down. Going offline until atleast one of them comes back up. - This is located in the share path's log under glusterfs
06:35 kotreshhr joined #gluster
06:38 anoopcs Javezim, Can you please check whether the smbd process is restarting from time to time when the share is idle for a while?
06:38 anoopcs You can use smbstatus to see all the connections.
06:38 itisravi joined #gluster
06:38 Muthu_ joined #gluster
06:39 Javezim Well, I mean, taking a look at the 'Connected at' timeframes
06:39 Javezim Thu Aug 11 07:48:45 2016
06:39 Javezim Thu Aug 11 16:26:20 2016
06:39 Javezim Thu Aug 11 07:48:21 2016
06:40 Javezim These are all from this morning, if it restarted wouldn't it have killed these?
06:40 rafi joined #gluster
06:44 Saravanakmr joined #gluster
06:51 [diablo] joined #gluster
06:52 Javezim What's strange is that when I reboot the Windows machine it accesses it fast for a while, but slowly degrades
06:52 Javezim @anoopcs Is there some sort of maximum-connections limit in Samba/VFS that has been implemented?
06:54 anoopcs Javezim, We do have max connections parameter for Samba but by default it will be 0.
06:54 anoopcs which means unlimited connections
06:55 om joined #gluster
06:59 anoopcs Javezim, Would you mind clearing out all client connections and try again?
07:00 anoopcs You may need to make sure the same by running smbstatus
07:00 anoopcs (There may be a very small delay in clearing the SMB client connection from Samba side)
07:01 harish_ joined #gluster
07:02 prasanth joined #gluster
07:02 Javezim @anoopcs Would this just be killing all smbd processes?
07:04 anoopcs Javezim, If you have access to client machines..use 'net' command from Windows power shell
07:04 devyani7 joined #gluster
07:04 devyani7 joined #gluster
07:13 skoduri joined #gluster
07:14 Javezim Yeah, see, it goes extremely fast when making new sessions
07:14 Javezim But over time it's degrading
07:18 Javezim @anoopcs "We do have max connections parameter for Samba but by default it will be 0." Where is this set so I can make sure it's 0
07:19 anoopcs Javezim, Unless you explicitly set it to a non-zero value in smb.conf you are safe to assume that it's 0.
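Two quick ways to verify that from the server side, assuming a hypothetical share section called [gluster]:

    # show the effective value for the share; 0 means unlimited
    testparm -s --section-name=gluster --parameter-name="max connections"
    # list current sessions, and locked files specifically
    smbstatus
    smbstatus -L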
07:20 Javezim @anoopcs Okay cool
07:22 rastar joined #gluster
07:24 fsimonce joined #gluster
07:25 Javezim @anoopcs Checking smbstatus there are so many locked files hey
07:25 Javezim @anoopcs like on older versions there were usually a few when we were using our checker scripts but now there are hundreds
07:27 Javezim @anoopcs May the new VFS Not like any of these? - http://paste.ubuntu.com/23002600/
07:28 ppai joined #gluster
07:29 karthik_ joined #gluster
07:31 anoopcs Javezim, What do you mean by "new VFS"? AFAIK, the vfs module for glusterfs in Samba hasn't undergone any changes recently.. Maybe something on the glusterfs side, either libgfapi or core? I would suspect the core part of glusterfs if there seems to be a performance drop.
07:31 anoopcs But I am unaware of any such changes..
07:32 sanoj_ joined #gluster
07:32 Javezim @anoopcs Any idea why it's keeping a lot more locks?
07:36 Javezim @anoopcs We've set it to not use VFS but fuse and it's so much quicker now
07:46 post-factum Javezim: also, samba vfs uses much more memory than fuse mount because each smbd maintains its own connection to gluster cluster :(
07:47 Javezim I was taking a look at htop whilst samba vfs was being used and to be honest we weren't anywhere near maxing out the Host
07:47 Javezim Nor the Client
07:51 ppai joined #gluster
07:57 prasanth joined #gluster
08:01 jri joined #gluster
08:03 jri_ joined #gluster
08:13 ramky joined #gluster
08:21 ahino joined #gluster
08:21 ramky joined #gluster
08:27 muneerse joined #gluster
08:30 hackman joined #gluster
08:34 Philambdo joined #gluster
08:34 Muthu_ joined #gluster
08:36 Slashman joined #gluster
08:40 shaunm joined #gluster
08:52 [diablo] joined #gluster
08:55 derjohn_mobi joined #gluster
09:02 MikeLupe joined #gluster
09:19 ppai joined #gluster
09:24 shubhendu__ joined #gluster
09:37 auzty joined #gluster
09:44 prasanth joined #gluster
09:45 atalur joined #gluster
09:47 harish_ joined #gluster
09:51 karthik_ joined #gluster
09:57 devyani7 joined #gluster
09:57 devyani7 joined #gluster
10:07 atalur joined #gluster
10:12 kovshenin joined #gluster
10:12 msvbhat joined #gluster
10:17 prasanth joined #gluster
10:30 Siavash joined #gluster
10:45 jkroon joined #gluster
10:46 anil joined #gluster
10:52 Philambdo joined #gluster
10:55 wadeholler joined #gluster
10:58 jkroon gluster volume heal gv_home info split-brain - shows one gfid entry.  this points to a folder based on find, and stat on both bricks shows the data to be identical.
10:58 jkroon it wasn't but I did a chmod + touch via a mountpoint and both bricks now reflect the up to date data.
10:58 jkroon how can I clear this?
11:00 arcolife joined #gluster
11:00 jkroon gluster volume heal volname info - is it normal for this to contain a few hundred entries?  (only 1 entry if I add split-brain onto that)
11:00 ppai joined #gluster
11:01 jkroon note that on that particular volume self-heal has been switched off due to crazy performance hits.
11:02 jkroon volume set: failed: Another transaction is in progress. Please try again after sometime - how can one know *what* transaction is in progress?
11:16 jkroon https://web.archive.org/web/20130314122636/http://community.gluster.org/a/howto-targeted-self-heal-repairing-less-than-the-whole-volume/ - using strace on find it looks like find already executes a stat on everything - is it really required to xargs stat the output from find?  Why not simply find ${mountpoint} &>/dev/null ?
11:16 glusterbot Title: Article: HOWTO: Targeted self-heal (repairing less than the whole volume) (at web.archive.org)
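The archived howto boils down to forcing a lookup on every file through a client mount; a sketch with a hypothetical mount point that keeps the explicit stat rather than assuming find alone issues one:

    # walk the whole mount (or just a suspect subtree) and stat each entry
    find /mnt/gv_home -print0 | xargs -0 stat > /dev/null
    # targeted variant for a single directory that heal info complains about
    stat /mnt/gv_home/path/to/dir > /dev/null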
11:25 hackman joined #gluster
11:29 burn joined #gluster
11:35 lezo joined #gluster
11:37 lezo joined #gluster
11:43 mchangir joined #gluster
11:45 ghenry joined #gluster
11:45 ghenry joined #gluster
11:51 jtux joined #gluster
11:58 ppai joined #gluster
12:04 ghenry joined #gluster
12:10 kdhananjay joined #gluster
12:20 unclemarc joined #gluster
12:21 lozarythmic joined #gluster
12:23 hackman joined #gluster
12:24 lozarythmic Hi All. Upon trying to upgrade from 3.6.3 to the 3.8 branch I get the following when I try to start the gluster daemon:
12:25 lozarythmic /usr/sbin/glusterd: symbol lookup error: /usr/sbin/glusterd: undefined symbol: use_spinlocks
12:26 lozarythmic this happens on our production server. I tried installing 3.8.1/2 in an empty vm and all was fine
12:26 lozarythmic OS is Ubuntu 14.04
12:29 prasanth joined #gluster
12:30 ndevos lozarythmic: are you using the ,,(ppa) for the Ubuntu packages?
12:30 glusterbot lozarythmic: The official glusterfs packages for Ubuntu are available here: 3.5: http://goo.gl/6HBwKh 3.6: http://goo.gl/XyYImN 3.7: https://goo.gl/aAJEN5 -- See more PPAs for QEMU with GlusterFS support, and GlusterFS QA releases at https://launchpad.net/~gluster -- contact semiosis with feedback
12:30 lozarythmic I am indeed
12:30 ndevos lozarythmic: the error sounds like you have an old libglusterfs.so.* laying around
12:31 ndevos lozarythmic: try "ldd /usr/sbin/glusterd" and see if there are some uncommon paths
12:31 lozarythmic the 3.6.3 version we had was compiled from source, but i made sure to uninstall that first
12:31 lozarythmic could that be the cause?
12:31 ndevos maybe it is not uninstalled completely
12:33 lozarythmic ah, that's looking much better. Thanks!!
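For anyone hitting the same symbol error, a sketch of the check; the /usr/local paths are just where a "make install" from source typically lands:

    # see which libraries glusterd actually resolves against
    ldd /usr/sbin/glusterd | grep -i gluster
    # leftovers from a source build usually live here
    ls -l /usr/local/lib/libglusterfs.so* 2>/dev/null
    # after removing stale copies, refresh the linker cache
    ldconfig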
12:33 bb joined #gluster
12:33 ndevos oh, good!
12:34 lozarythmic well, we're not completely back up and running yet, but at least the daemon starts :)
12:40 msvbhat joined #gluster
12:49 Philambdo joined #gluster
12:54 pdrakeweb joined #gluster
13:00 ppai joined #gluster
13:04 Manikandan joined #gluster
13:06 rwheeler joined #gluster
13:12 raghug joined #gluster
13:17 Siavash joined #gluster
13:17 decay joined #gluster
13:21 anil joined #gluster
13:21 nbalacha joined #gluster
13:21 ramky joined #gluster
13:22 julim joined #gluster
13:28 luis_silva joined #gluster
13:34 skylar joined #gluster
13:36 jiffin1 joined #gluster
13:42 arcolife joined #gluster
13:42 dnunez joined #gluster
13:48 Manikandan joined #gluster
14:00 Siavash joined #gluster
14:01 dnunez joined #gluster
14:03 shyam joined #gluster
14:05 luis_silva Hey all
14:05 luis_silva seeing some issues with locking UUIDs on peers after upgrading from 3.5.3 to 3.7.13
14:06 ju5t joined #gluster
14:06 luis_silva I see an article that says we need to bump our operating-version
14:06 shyam joined #gluster
14:06 luis_silva does this sound right to you folks?
14:07 luis_silva cat /var/lib/glusterd/glusterd.info
14:07 luis_silva UUID=1e912356-96c6-4742-9def-2ba35b77453e
14:07 luis_silva operating-version=2
14:07 shyam left #gluster
14:08 luis_silva The filesystem seems functional but we can't run any gluster commands.
14:08 shyam joined #gluster
14:09 post-factum 30712 is the latest opversion for 3.7 branch
14:09 luis_silva ah ok so we should set that.
14:10 hagarth joined #gluster
14:13 luis_silva is there a place where we could have looked that up?
14:15 luis_silva are you sure it's not 30713?
14:15 luis_silva [root@sum1-gstore05 ~]# rpm -qa |grep gluster
14:15 luis_silva glusterfs-api-3.7.13-1.el6.x86_64
14:15 luis_silva glusterfs-libs-3.7.13-1.el6.x86_64
14:15 luis_silva glusterfs-server-3.7.13-1.el6.x86_64
14:15 luis_silva glusterfs-3.7.13-1.el6.x86_64
14:15 luis_silva glusterfs-cli-3.7.13-1.el6.x86_64
14:15 luis_silva glusterfs-rdma-3.7.13-1.el6.x86_64
14:15 luis_silva glusterfs-client-xlators-3.7.13-1.el6.x86_64
14:15 luis_silva glusterfs-fuse-3.7.13-1.el6.x86_64
14:18 luis_silva so gluster volume set all cluster.op-version
14:18 luis_silva gluster volume set all cluster.op-version 30712
14:23 post-factum opversion != gluster version
14:23 post-factum so yes, 30712
14:23 luis_silva ok thanks
14:23 luis_silva Any way I can do this while the volume is running?
14:23 pdrakeweb joined #gluster
14:24 luis_silva volume set: failed: Another transaction is in progress. Please try again after sometime.
14:24 luis_silva That's the error I get when I try to set it.
14:27 bowhunter joined #gluster
14:30 post-factum either you have to wait for timeout to expire or try to restart glusterd on all nodes
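For the record, a sketch of checking and bumping the op-version; 30712 matches the 3.7 branch as noted above, and the glusterd.info path is the standard one:

    # each node records the cluster op-version it is running at
    grep operating-version /var/lib/glusterd/glusterd.info
    # once every peer runs 3.7.x, raise it cluster-wide
    gluster volume set all cluster.op-version 30712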
14:39 shubhendu_ joined #gluster
14:41 jiffin joined #gluster
14:42 ashiq joined #gluster
14:44 kpease joined #gluster
14:45 kovshenin joined #gluster
14:51 hagarth joined #gluster
14:52 msvbhat joined #gluster
15:01 atalur_ joined #gluster
15:04 wushudoin joined #gluster
15:06 nbalacha joined #gluster
15:07 jtux joined #gluster
15:13 jdossey joined #gluster
15:14 bowhunter joined #gluster
15:16 pdrakeweb joined #gluster
15:17 luis_silva hey sorry to be a pain.
15:18 luis_silva one of our gluster nodes won't start glusterd
15:18 luis_silva [2016-08-11 15:16:10.556808] I [MSGID: 106544] [glusterd.c:159:glusterd_uuid_init] 0-management: retrieved UUID: 9061c81b-4b41-4b26-824a-108ac7cd8a75
15:18 luis_silva [2016-08-11 15:16:10.596489] E [MSGID: 106187] [glusterd-store.c:4338:glusterd_resolve_all_bricks] 0-glusterd: resolve brick failed in restore
15:18 luis_silva [2016-08-11 15:16:10.596537] E [MSGID: 101019] [xlator.c:433:xlator_init] 0-management: Initialization of volume 'management' failed, review your volfile again
15:18 luis_silva [2016-08-11 15:16:10.596565] E [graph.c:322:glusterfs_graph_init] 0-management: initializing translator failed
15:18 luis_silva [2016-08-11 15:16:10.596579] E [graph.c:661:glusterfs_graph_activate] 0-graph: init failed
15:18 luis_silva [2016-08-11 15:16:10.596971] W [glusterfsd.c:1251:cleanup_and_exit] (-->/usr/sbin/glusterd(glusterfs_volumes_init+0xda) [0x405dfa] -->/usr/sbin/glusterd(glusterfs_process_volfp+0x13a) [0x405cca] -->/usr/sbin/glusterd(cleanup_and_exit+0x6a) [0x405aea] ) 0-: received signum (1), shutting down
15:18 glusterbot luis_silva: ('s karma is now -146
15:19 B21956 joined #gluster
15:19 luis_silva not sure what to do with it.
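That "resolve brick failed in restore" error generally means glusterd cannot match a stored brick's host to a known peer at startup; a hedged sketch of where to look, with a placeholder hostname:

    # brick definitions glusterd tries to restore at startup
    grep -h '^hostname' /var/lib/glusterd/vols/*/bricks/*
    # peers this node knows about
    ls /var/lib/glusterd/peers/
    # confirm each brick host still resolves from this node
    getent hosts brick-host.example.com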
15:20 anoopcs @paste
15:20 glusterbot anoopcs: For a simple way to paste output, install netcat (if it's not already) and pipe your output like: | nc termbin.com 9999
15:20 anoopcs luis_silva, ^^
15:21 luis_silva oh, thx. I did not know you could do that.
15:31 ndevos termbin.com++ is amazingly helpful :)
15:31 glusterbot ndevos: termbin.com's karma is now 1
15:39 rafi1 joined #gluster
15:41 pdrakeweb joined #gluster
15:50 hagarth joined #gluster
15:53 lozarythmic I'm almost there! 6 of 8 nodes are up
15:55 lozarythmic however on 2 nodes I'm getting this: /usr/lib/x86_64-linux-gnu/glusterfs/3.8.3/xlator/mgmt/glusterd.so: undefined symbol: rpc_clnt_disconnect
15:55 lozarythmic in the vol log, and then it fails to construct the graph
15:56 lozarythmic sounds similar to my issue above, but I've gone through the libs with a fine-toothed comb and they all appear to be of the correct version
15:56 Lee1092 joined #gluster
15:57 ndevos lozarythmic: old libgfrpc.so.* files, most likely
15:57 ndevos lozarythmic: there is also libgfxdr.so.*, you want to verify that one too
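A sketch of hunting down stale copies of those two libraries; the search paths are just the usual package and source-install locations, not something specific to this setup:

    # every copy on disk, and the one the loader will actually pick
    find /usr/lib /usr/local/lib -name 'libgfrpc.so*' -o -name 'libgfxdr.so*' 2>/dev/null
    ldconfig -p | grep -E 'libgfrpc|libgfxdr'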
15:58 Siavash_ joined #gluster
15:58 lozarythmic thanks i'll take a look!
15:59 JoeJulian yay for packaging
16:00 lozarythmic I think it's because my first install was from src
16:01 JoeJulian Yeah, that would do it.
16:02 lozarythmic oddly though, i did a make uninstall on this box just like the others...
16:02 lozarythmic meh
16:06 lozarythmic yep that worked a treat, thanks ndevos
16:06 lozarythmic (on both remaining boxes)
16:40 ndevos JoeJulian: fortunately lozarythmic is moving to the packages from the ppa :)
16:42 Gambit15 Guys, can anyone tell me why the standard setup for gluster nodes adds an LVM layer, instead of putting the bricks directly on the drives/RAID volumes?
16:44 JoeJulian Where are you getting this "standard"? I use LVM when I'm going to have multiple volumes on the same storage so I can add space as it's needed.
16:47 Gambit15 Just because all of the docs, including RH's official procedures, include steps for configuring logical volumes under the bricks. If I wanted to expand the gluster volume, I'd just add more bricks or throw bigger drives into the RAID volumes
16:47 JoeJulian Then don't use lvm. :P
16:47 skoduri joined #gluster
16:48 JoeJulian Red Hat documentation is only about how they build it so their 1st level support engineers know what it's going to look like and can, thus, follow a script.
16:48 shyam joined #gluster
16:48 Gambit15 And even if I wanted to have separate Gluster volumes, I'd expect it to be far more efficient to create the bricks on dedicated volumes than to share volumes between bricks
16:48 JoeJulian (making assumptions about there being a 1st level and a script)
16:49 JoeJulian Depends on how you define efficiency.
16:49 Gambit15 I/O
16:49 JoeJulian $/TB is another way of looking at efficiency.
16:50 JoeJulian Good, fast, cheap - pick two. :)
16:52 Gambit15 If I wanted one gluster volume of 20TB and another of 10TB, I'd think it more efficient to have 20*2TB bricks with rep 2 for the 20TB volume & 10*2TB bricks with rep 2 for the 10TB volume. That way there's no wastage of space & the volumes don't compete for I/O...
16:53 Gambit15 If I've already got a bunch of existing servers, then I can have good, fast AND cheap...
16:58 Gambit15 The way I see it, giving volumes dedicated bricks doesn't have to impact space efficiency, and it removes contention on the array. The only situation this wouldn't be the case would be when you've got RAID volumes with huge stripe widths - but that'd be very silly, base stripe widths shouldn't really ever exceed 10-12. If you need larger volumes, put a stripe over the base RAID volume (eg. RAID 10, 50, etc)
16:59 Gambit15 In this case, instead of striping the RAID volumes, it'd be better to add each volume as a brick
16:59 Gambit15 No?
17:00 anil joined #gluster
17:02 JoeJulian Personally, I never recommend RAID unless you *need* it for throughput.
17:02 sputnik13 joined #gluster
17:02 JoeJulian And even then, I'd try for ssd journaling and/or caching first if I could.
17:03 JoeJulian When all else fails, it's nice to know that you have a whole file sitting on a drive somewhere.
17:04 Gambit15 I considered just making each individual drive a brick, but using RAID simplifies dealing with drive failures a bit & takes some of the replication load off the network. Without RAID, I'd need to use a replication count higher than 2
17:05 JoeJulian Aren't you using more than N/3 with replica=2+RAID?
17:06 JoeJulian s/N\/3/N*3/
17:06 glusterbot What JoeJulian meant to say was: Aren't you using more than N*3 with replica=2+RAID?
17:06 Gambit15 Yeah, but that additional replication stays local & doesn't add more load on the network
17:07 JoeJulian Do you use an arbiter to avoid split-brain?
17:08 Gambit15 Everything is on the same subnet. If the subnet breaks, everything does
17:08 JoeJulian And as far as load goes, I assume that you must have a write-heavy use-case?
17:09 shubhendu__ joined #gluster
17:10 JoeJulian Once you hit production, keep an eye on your network load. If you're less than half capacity, you could still consider migrating to replica 3. Idle network is wasted network. ;)
17:10 Gambit15 General use VMs, mostly just webhosting stuff, so random I/O & average file sizes of 4K to 1M
17:10 JoeJulian Your own web hosting, or blind hosting for other customers?
17:11 Gambit15 For the most part, I'll have control over their configurations - good, as I'll then be able to make efficient use of caching, swapping, etc
17:11 JoeJulian Most web hosting is read-heavy and very little writing. There's seldom any reason to worry about replica 3 in those cases.
17:12 JoeJulian +1000000000 for caching. :D
17:13 JoeJulian And I'm sure you already know, don't write your session data to disk.
17:14 Gambit15 Yup. The majority of writes will be PHP caching, DB logging, & swapping. The caching & swapping can be tuned easily. DB use will be app specific, so I'll have to come back to that down the road
17:14 plarsen joined #gluster
17:15 luis_silva any directions on how to fix State: Peer Rejected (Connected) when you run gluster peer status?
17:16 Gambit15 VMs will have enough memory that I can reduce swapping to a minimum & caching will be done in RAM. I'll be putting a dedicated squid box between the servers & the firewall to further improve that
17:18 JoeJulian @learn peer rejected as https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Resolving%20Peer%20Rejected/
17:18 glusterbot JoeJulian: The operation succeeded.
17:18 JoeJulian @peer rejected
17:18 glusterbot JoeJulian: https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Resolving%20Peer%20Rejected/
17:19 JoeJulian step 6 and the last line irritate me to no end...
17:19 JoeJulian file a bug
17:19 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
17:20 ankitraj joined #gluster
17:23 JoeJulian https://bugzilla.redhat.com/show_bug.cgi?id=1366341
17:23 glusterbot Bug 1366341: medium, unspecified, ---, bugs, NEW , Provide cli tools to diagnose and repair "peer rejected"
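The linked page boils down to resetting the rejected node's local glusterd state and re-probing; roughly the following, run only on the rejected node (read the page itself first, the hostname is a placeholder):

    systemctl stop glusterd                 # or: service glusterd stop
    cd /var/lib/glusterd
    find . -mindepth 1 ! -name glusterd.info -delete
    systemctl start glusterd
    gluster peer probe good-peer.example.com
    systemctl restart glusterd
    gluster peer status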
17:24 alvinstarr joined #gluster
17:33 cliluw joined #gluster
17:36 johnmilton joined #gluster
17:40 shyam joined #gluster
17:55 Gnomethrower joined #gluster
17:57 rafi joined #gluster
18:04 julim joined #gluster
18:10 pdrakeweb joined #gluster
18:11 shubhendu__ joined #gluster
18:18 ahino joined #gluster
18:36 Manikandan joined #gluster
18:59 om joined #gluster
19:00 pdrakeweb joined #gluster
19:03 pdrakewe_ joined #gluster
19:08 gem joined #gluster
19:13 luis_silva Thanks Joe for the help, cluster is all happy again!
19:13 JoeJulian :)
19:21 ndevos JoeJulian: there isn't really a script for Red Hat support, and it is not exactly divided into 1st level, 2nd level etc either :)
19:22 ndevos JoeJulian, Gambit15: the Red Hat docs describe the setup that Red Hat advises, has tested well, and can support with very strict SLAs
19:23 msvbhat joined #gluster
19:23 squizzi joined #gluster
19:23 ndevos the difference with the support in this channel (and in general in the community), is that "support" is used for "technically you can make it work"
19:23 ndevos that is a rather different view of "support" that Red Hat customers are paying for ;-)
19:23 JoeJulian With absolutely no guarantees, not even your money back.
19:24 JoeJulian (community support that is)
19:24 ndevos heh, yes, to state is explicitly
19:24 ndevos s/is/it/
19:24 glusterbot What ndevos meant to say was: heh, yes, to state it explicitly
19:24 JoeJulian Though I've sometimes refunded the $0 people paid me for advice.
19:24 JoeJulian No, that's not true either. They still owe me a beer.
19:25 ndevos I thought about getting something like a starbucks account, where people can donate for a coffee or so
19:26 ndevos not sure if that exists, but it would give an easy option for people to send something that is not money
19:26 JoeJulian You've seen how I do it.
19:27 JoeJulian I've gotten way more donations to cancer then I ever got in donations.
19:27 JoeJulian s/in donations/in personal donations/
19:27 glusterbot What JoeJulian meant to say was: I've gotten way more donations to cancer then I ever got in personal donations.
19:27 amye I'm pretty sure I can't do a 'not even your money back' as a community tagline — but we can certainly consider donations, other karma
19:27 amye Badges are the new hotness
19:27 JoeJulian Badges! Badges! We don't need no steenking badges!
19:27 shaunm joined #gluster
19:27 amye I thought not :)
19:28 ndevos badges dont stink, that's badgers
19:28 Gambit15 ndevos, would be nice if the RH docs were accurate though
19:28 JoeJulian I totally do this for the "Thank you"s. Nothing else motivates me.
19:28 ndevos Gambit15: they should be, at least for the version that Red Hat ships - otherwise you could open a support case or file a bug
19:28 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
19:29 JoeJulian Raul's Wild Kingdom
19:29 Gambit15 They don't cover the *creation* of the raid volumes for obvious reasons, but explain everything that needs to be configured to align with them
19:29 ndevos uh, no, NOT there, there is a "Red Hat Gluster Storage" product too
19:30 Gambit15 The problem is that whilst you can configure the underlying stripe size in LVM, you can't configure the stripe width, so it treats the stripe width as 1 & the stripe unit & stripe size as the same thing
19:30 * Gambit15 shakes fist
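One common way to handle that is to align the PV to the full RAID stripe and hand both stripe unit and width to XFS rather than to LVM; the numbers below are only an example (256 KiB stripe unit, 10 data disks), not a recommendation:

    pvcreate --dataalignment 2560k /dev/sdb
    vgcreate vg_bricks /dev/sdb
    lvcreate -L 2T -n brick1 vg_bricks
    mkfs.xfs -i size=512 -d su=256k,sw=10 /dev/vg_bricks/brick1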
19:30 ndevos JoeJulian: oh, yes, the "thank you" is sufficient for me too, but a few have been asking about spending something
19:31 JoeJulian Gambit15: RH documentation is for their *downstream* product. Community documentation can be updated with a PR to https://github.com/gluster/glusterdocs
19:31 glusterbot Title: GitHub - gluster/glusterdocs: This repo had it's git history re-written on 19 May 2016. Please create a fresh fork or clone if you have an older local clone. (at github.com)
19:31 JoeJulian ndevos: See my "Donate to open cancer research" button at https://joejulian.name
19:31 glusterbot Title: JoeJulian.name (at joejulian.name)
19:32 ndevos JoeJulian: yes, I'll consider something like that too
19:32 Gambit15 IIRC, the RH docs & the gluster docs on readthedocs.io are the same
19:32 JoeJulian I love the fact that he open-licenses his research.
19:32 ndevos JoeJulian: but, I also would find it nice to send you a coffee every now and then :)
19:32 JoeJulian I'll have a coffee with you. Come on over.
19:33 ndevos uhm, yes, it'll be a rather expensive cup for me
19:33 JoeJulian Someone started offering me a job in your neck of the woods, but then they took a long vacation and never got back to me.
19:33 Gambit15 Plus, the docs detail the tuning options more & I'll be playing with oVirt which is also a RH project now
19:33 post-factum i hope we'll meet in Brno and have something stronger than coffee
19:34 Gambit15 Nice cuppa' tea?
19:34 post-factum sure, tea
19:34 Gambit15 Heroin?
19:34 post-factum not that strong
19:34 JoeJulian That went downhill fast.
19:34 Gambit15 hmmm
19:34 Gambit15 heh
19:34 ndevos post-factum: I like to go to Brno again, I expect I'll be there for DevConf.cz, if not earlier
19:34 Gambit15 Tense work in this industry!
19:35 JoeJulian I'd like to have gone last time. All my conference time and budget are tied up this year. Maybe something next.
19:35 Gambit15 Where are you guys based?
19:35 amye DevConf is now end of January, directly before FOSDEM
19:36 post-factum ndevos: looking forward to relocate soon. shitty bureaucracy
19:36 JoeJulian <- Seattle
19:36 * post-factum is in Kyiv, Ukraine now
19:36 * ndevos Zaandam, but moving to Amsterdam in a few months
19:37 amye < — Pacific timezone
19:37 Gambit15 Aha. Londoner originally, although found myself in the Brazilian jungle recently
19:37 post-factum suddenly?
19:38 amye on purpose?
19:38 ndevos I hope it was expected, and not just happened by taking a wrong turn somewhere
19:38 post-factum without regaining consciousness?
19:38 Gambit15 Went "travelling" and got side-tracked...
19:40 ndevos sort of a wrong turn then
19:41 post-factum ndevos: moving from Zaandam to Amsterdam is not *that* long trip
19:41 ndevos post-factum: no, its even in the same timezone!
19:42 Gambit15 Amsterdam is an awesome place to live.
19:42 post-factum magic mushrooms, you know...
19:43 Gambit15 Beyond that stuff. Very scenic, great night life & culture, and yeah...quite liberal
19:44 Gambit15 Spent 6 months working there a couple of years back. 45min flight from London - became my daily commute for most of that time
19:44 ndevos yes, I'm looking forward to it, the village where I live now is really boring, there are a few places where I can work and have coffee, like 3 of them
19:44 plarsen joined #gluster
19:45 Gambit15 Flying to Amsterdam from London was quicker than trying to *cross* London!
19:45 amye Ha, yes.
19:45 ndevos hehe, yes, but you still need to get to an airport in london
19:46 Gambit15 London City airport is small & mostly used for business. 30 mins to get to the gate from leaving the taxi
19:46 post-factum there is RH office in Amster
19:47 post-factum what do they do there?
19:47 Gambit15 Less than that even, IIRC
19:47 Gambit15 Drink lots of "coffee"?
19:48 ndevos the office is mostly sales and marketing people, a few consultants and solution architects if they are not at customers
19:48 ndevos I could probably work from the office, but well...
19:49 post-factum i believe PR office is not the best place for engineer to work
19:50 ndevos not really, it is very distracting, but it would encourage some fixed working hours
19:51 * ndevos looks at the clock, and will go find some late dinner
19:51 pdrakeweb joined #gluster
19:53 jiffin joined #gluster
19:54 Gambit15 Heh, that's one thing to be careful of in Ams. I remember leaving late one night & going into the city around 10pm to find a restaurant to eat. All closed! The only places I found selling food were the coffee shops :/
19:54 Gambit15 I was expecting somewhere like that to serve throughout the night
19:55 post-factum it is Europe, dude
19:55 ndevos yeah, many restaurants or also kitchens close at 22:00 :-/
19:55 Gambit15 London is open 24/7
19:55 post-factum that is why brexit happened
19:57 Gambit15 Brexit happened because them rural folk like to "keep it in the family"
19:57 hagarth joined #gluster
19:58 Gambit15 London & Scotland versus the rest of the country, and we lost by 1.8%!
20:01 post-factum silly result for such an important decision. should be 75%+ to consider it
20:02 Gambit15 Whilst I'd like to agree, I'm not sure. If you think about it, not even our governing political parties get that much
20:02 Gambit15 IIRC, the majority vote for our current government was something like 30%
20:03 post-factum laziness outperforms politics
20:03 JoeJulian I don't mind the OT conversation, but if somebody asks a question, make sure they get priority please.
20:07 * Gambit15 gets back to work
20:08 arif-ali joined #gluster
20:08 * post-factum waits for late night to update one gluster node
20:19 derjohn_mobi joined #gluster
20:25 Gambit15 Guys, the data & engine volumes, is it a problem if they're on the same volume?
20:26 post-factum Gambit15: engine?
20:26 Gambit15 As in, rather than creating two LVM volumes, I just mount them both on two points in a single volume
20:27 hagarth post-factum: related to ovirt-engine i suppose
20:28 Gambit15 oops! Yeah, sorry, just clicked.
20:33 om guys, I have massive performance issues right now.  a simple rsync on the same fs mount from one dir to another is maxing out at 380 Kbps
20:33 om here are my configured options:
20:33 om https://gist.github.com/andrebron​/bd2bb6c6ec7f3ba83e827fd74d70b128
20:33 glusterbot Title: gist:bd2bb6c6ec7f3ba83e827fd74d70b128 · GitHub (at gist.github.com)
20:34 om glusterfs 3.7.8 on ubuntu 14.04
20:34 om any ideas?
20:36 post-factum iirc, 3.7.8 had performance issues related to write-behind
20:36 post-factum either update your setup to the most recent gluster version or at least try disabling write-behind temporarily
20:42 shyam joined #gluster
20:51 pdrakeweb joined #gluster
21:11 rafi joined #gluster
21:19 om thanks post-factum
21:19 om what's the syntax to disable performance.write-behind ?
21:20 JoeJulian om "gluster volume set help" ;)
21:22 om thanks!
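For completeness, the syntax in question, with a placeholder volume name:

    gluster volume set gv0 performance.write-behind off
    # verify, and re-enable later if it makes no difference
    gluster volume get gv0 performance.write-behind
    gluster volume set gv0 performance.write-behind on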
22:20 shyam joined #gluster
22:31 wadeholler joined #gluster
22:35 Siavash__ joined #gluster
22:41 Siavash___ joined #gluster
22:42 wadeholler joined #gluster
22:51 Siavash___ joined #gluster
23:04 arif-ali joined #gluster
23:08 plarsen joined #gluster
23:24 shyam joined #gluster
23:34 arif-ali joined #gluster
