
IRC log for #gluster, 2017-11-02


All times shown according to UTC.

Time Nick Message
00:26 baber joined #gluster
00:28 msvbhat joined #gluster
00:35 jkroon joined #gluster
01:10 cyberbootje joined #gluster
01:41 baber joined #gluster
02:13 prasanth joined #gluster
02:18 daMaestro joined #gluster
02:27 hmamtora joined #gluster
02:30 msvbhat joined #gluster
02:35 vbellur joined #gluster
02:56 ilbot3 joined #gluster
02:56 Topic for #gluster is now Gluster Community - https://www.gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - https://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
03:21 Shu6h3ndu joined #gluster
03:31 gyadav__ joined #gluster
03:50 kramdoss__ joined #gluster
04:28 karthik_us joined #gluster
04:31 sanoj joined #gluster
04:32 msvbhat joined #gluster
04:37 itisravi joined #gluster
04:45 skumar joined #gluster
04:47 omie888777 joined #gluster
04:57 rwheeler joined #gluster
05:01 atinm joined #gluster
05:12 hgowtham joined #gluster
05:20 sunnyk joined #gluster
05:21 skoduri joined #gluster
05:24 ndarshan joined #gluster
05:52 kdhananjay joined #gluster
05:56 psony joined #gluster
06:05 apandey joined #gluster
06:22 poornima_ joined #gluster
06:25 xavih joined #gluster
06:26 Saravanakmr joined #gluster
06:31 Prasad joined #gluster
06:32 shyu joined #gluster
06:38 Prasad_ joined #gluster
06:44 msvbhat joined #gluster
06:57 kdhananjay1 joined #gluster
07:06 kdhananjay joined #gluster
07:10 marbu joined #gluster
07:20 jtux joined #gluster
07:21 kramdoss__ joined #gluster
07:27 rwheeler joined #gluster
07:32 omie888777 joined #gluster
07:33 jtux joined #gluster
07:41 prasanth joined #gluster
07:43 kramdoss__ joined #gluster
07:53 lalatenduM joined #gluster
08:03 fsimonce joined #gluster
08:07 omie888777 joined #gluster
08:15 ThHirsch joined #gluster
08:36 sanoj joined #gluster
08:39 gyadav joined #gluster
08:44 karthik_us joined #gluster
08:50 sunkumar joined #gluster
09:02 kotreshhr joined #gluster
09:04 buvanesh_kumar joined #gluster
09:12 ppai joined #gluster
09:12 msvbhat joined #gluster
09:14 anoopcs joined #gluster
09:33 aravindavk joined #gluster
09:34 legreffier joined #gluster
09:36 ThHirsch joined #gluster
09:38 jiffin joined #gluster
09:53 Humble joined #gluster
09:57 rafi joined #gluster
10:01 _KaszpiR_ joined #gluster
10:13 kdhananjay joined #gluster
10:15 kdhananjay1 joined #gluster
10:22 flomko joined #gluster
10:23 apandey joined #gluster
10:33 rastar joined #gluster
10:38 Acinonyx joined #gluster
10:42 rastar_ joined #gluster
10:45 kdhananjay joined #gluster
10:50 kdhananjay1 joined #gluster
10:54 MrAbaddon joined #gluster
10:54 Saravanakmr joined #gluster
10:57 ThHirsch joined #gluster
10:59 kdhananjay joined #gluster
11:12 kdhananjay1 joined #gluster
11:21 kdhananjay joined #gluster
11:22 ndarshan joined #gluster
11:31 kdhananjay joined #gluster
11:36 mrcirca_ocean Hello, how can I balance bandwidth with replication of 3 volumes?
11:45 Prasad joined #gluster
11:46 Klas balance bandwidth?
11:52 kdhananjay1 joined #gluster
11:52 msvbhat joined #gluster
11:53 mrcirca_ocean Klas: yes! Let's say that I have a 1 Gbit connection and I want to replicate 3 volumes: 300 Mbps for each volume
11:55 Klas huh, network limitations per volume
11:55 ic0n joined #gluster
11:57 Klas any possible solution is beyond me, at least as long as the clients aren't on different subnets and you have external bandwidth control
11:57 Klas sounds like a QoS issue, and that is definitely not my field ;)
12:00 mrcirca_ocean Klas: exactly, a network rate limit
12:05 omie888777 @mrcirca_ocean I think Klas is right, that logic is not part of glusterfs but rather your networking QoS
12:07 MrAbaddon joined #gluster
12:17 msvbhat joined #gluster
12:19 atinm joined #gluster
12:19 kdhananjay joined #gluster
12:20 om2 joined #gluster
12:31 kdhananjay joined #gluster
12:35 JeroenDV joined #gluster
12:37 mrcirca_ocean omie888777: let's say that we have three nodes. The first node has 2 volumes, and replicates each volume to each node accordingly. Does gluster have a feature to limit network bandwidth for each volume?
12:39 Klas gluster is pretty much a KISS solution
12:39 Klas those types of things seem more like overdesigned products =P
12:43 dgandhi joined #gluster
12:45 JeroenDV Hi! One of my colleagues left me with a present before he left on holidays. He migrated one of our gluster instances towards Centos 7 (= took a new server, created a brick & volume on it, migrated the clients & used the same mountpoint for the new volume). However, on the clients, I can now only see the folders one level under the mount point. Anything deeper seems to be hidden somehow. So, if the mount point is /cluster/abc, I do
12:45 mrcirca_ocean so if you run other services which demand network bandwidth, network overhead will occur, right?
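
For reference: as Klas and omie888777 say, per-volume rate limiting is host-level QoS rather than a gluster option. A minimal sketch with Linux tc (HTB), where the interface name eth0 and the brick port 49152 are assumptions for illustration:

    # shape outbound traffic on the interface carrying gluster replication
    tc qdisc add dev eth0 root handle 1: htb default 10
    tc class add dev eth0 parent 1: classid 1:1 htb rate 1gbit
    tc class add dev eth0 parent 1:1 classid 1:10 htb rate 400mbit ceil 1gbit
    tc class add dev eth0 parent 1:1 classid 1:20 htb rate 300mbit ceil 300mbit
    # steer traffic toward one brick's listening port into the 300 Mbit class
    tc filter add dev eth0 parent 1: protocol ip u32 match ip dport 49152 0xffff flowid 1:20
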
12:46 Klas JeroenDV: your message ended on "So, if the mount point is /cluster/abc, I do "
12:47 JeroenDV So, if the mount point is /cluster/abc, I do see folders in abc, like /clusters/abc/def, but nothing inside def. While, if I look on the brick itself, there is content in def. Also, when I know the path to choose, everything works fine, so I can do something like cat /cluster/abc/def/testfile.txt. That gives me the actual file content. I tried chmodding stuff to 777 for these purposes, but that doesn't do anything. Has anyone go
12:47 Klas "Has anyone got"
12:47 kdhananjay joined #gluster
12:48 JeroenDV Sorry, here's the rest
12:48 phlogistonjohn joined #gluster
12:48 JeroenDV The last part was just: has anyone got any idea what I can do here?
12:49 Klas plot revenge
12:49 Klas !
12:49 Klas ;)
12:49 JeroenDV Will surely do that :P
12:50 JeroenDV But besides that, I'd like to be able to tell him I fixed his problems when he's back ;)
12:50 Klas understandable ;)
12:51 Klas hmm, are all brick paths fine (on all servers)?
12:52 Klas I think I saw something akin to this behaviour when I intentionally tried breaking things in a lab setting and partly succeeded.
12:52 JeroenDV What do you mean exactly? It's a simple 1 brick 1 volume setup (it's in one of our test environments) What paths should I check?
12:52 Klas but I think I was in a split-brain situation
12:52 Klas oh
12:52 Klas single-node
12:52 Klas nm then
12:52 Klas since it's in test, have you tried mounting it with nfs and smb as well?
12:53 JeroenDV yea... I tried running self heal, but that doesn't seem to work on single node, right?
12:53 Klas nothing to compare against, so, no
12:53 Klas heal is always against another part of the replica
12:54 JeroenDV hmm, no, not really, could try to do that. The environment is not for testing gluster though, it's for functional testing of our software, so I can't destroy too much stuff. However, I can try adding an extra NFS mount.
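
For reference, comparing the same volume over the native client and gluster's built-in NFS server (which speaks NFSv3 only) is a quick way to rule the FUSE client in or out; the hostname and volume name below are placeholders:

    # native FUSE mount
    mount -t glusterfs server1:/testvol /mnt/fuse
    # gluster's built-in NFS server is NFSv3 only
    mount -t nfs -o vers=3,mountproto=tcp server1:/testvol /mnt/nfs
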
12:55 ic0n joined #gluster
12:55 swebb joined #gluster
13:04 kramdoss__ joined #gluster
13:18 rwheeler joined #gluster
13:21 kotreshhr left #gluster
13:27 boutcheee520 joined #gluster
13:30 dkossako joined #gluster
13:39 pladd joined #gluster
13:40 shyu joined #gluster
13:41 pladd_ joined #gluster
13:44 hmamtora joined #gluster
14:02 bwerthmann joined #gluster
14:06 atinm joined #gluster
14:07 buvanesh_kumar joined #gluster
14:15 msvbhat joined #gluster
14:22 rastar_ joined #gluster
14:24 Prasad joined #gluster
14:44 ppai joined #gluster
14:48 rwheeler joined #gluster
14:54 plarsen joined #gluster
14:59 farhorizon joined #gluster
15:00 russoisraeli joined #gluster
15:00 kpease_ joined #gluster
15:04 russoisraeli Yesterday I moved my 3-replica cluster to a 2x2 distributed-replicate cluster. I am not sure if the rebalance operation works/worked.... it appears to be stuck - https://dpaste.de/sHv5/raw
15:04 russoisraeli any idea why?
15:04 russoisraeli This is Gluster 3.6.5 on Gentoo
15:04 russoisraeli all bricks same
15:05 jbrooks joined #gluster
15:10 wushudoin joined #gluster
15:13 bwerthmann joined #gluster
15:23 rastar_ joined #gluster
15:27 badf1sh joined #gluster
15:28 badf1sh hi! we have a program that maxes out the IO read capabilities of a single server, and we're evaluating glusterfs as a way to scale that process and reach significantly higher read performance as we add more bricks. is this a good use case for gluster?
15:31 farhorizon joined #gluster
15:31 ndevos badf1sh: possibly, but your program at least needs to be multi-threaded to get benefits from multiple bricks
15:31 vbellur joined #gluster
15:33 vbellur1 joined #gluster
15:34 phlogistonjohn joined #gluster
15:34 vbellur joined #gluster
15:36 vbellur joined #gluster
15:36 vbellur joined #gluster
15:37 vbellur joined #gluster
15:38 vbellur joined #gluster
15:45 pladd_ joined #gluster
16:03 badf1sh ndevos: it is
16:03 badf1sh ndevos: any gotchas when trying to solve for this sort of problem?
16:03 badf1sh ndevos: most of the docs / tutorials i see are for setting up replicated/striped environments
16:03 badf1sh ndevos: so i was worried this may be a minimal use case
16:07 kramdoss__ joined #gluster
16:14 ndevos badf1sh: you could try it out with replicated (3x) and sharding enabled, that way the reading of the shards (assuming you have a large file) will be distributed over the 3 bricks
16:14 ndevos ~sharding | badf1sh
16:14 glusterbot badf1sh: for more details about sharding, see http://blog.gluster.org/2015/12/introducing-shard-translator
16:15 ndevos don't use 'stripe', that is deprecated and not maintained anymore
16:15 * ndevos drops off, ttyl!
16:18 badf1sh got it, thanks
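
A rough sketch of ndevos's suggestion (volume name and brick paths are placeholders; note that sharding only applies to files written after it is enabled, and the shard block size defaults to 4MB):

    gluster volume create readvol replica 3 \
        server1:/bricks/b1 server2:/bricks/b1 server3:/bricks/b1
    gluster volume set readvol features.shard on
    # optional: larger shards for big sequential files
    gluster volume set readvol features.shard-block-size 64MB
    gluster volume start readvol
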
16:37 xMopxShell joined #gluster
16:48 MrAbaddon joined #gluster
16:53 russoisraeli Yesterday I moved my 3-replica cluster to a 2x2 distributed-replicate cluster. I am not sure if the rebalance operation works/worked.... it appears to be stuck - https://dpaste.de/sHv5/raw   . This is Gluster 3.6.5 on Gentoo. Can anyone please help?
16:56 gyadav joined #gluster
16:56 gyadav__ joined #gluster
16:57 jkroon joined #gluster
17:17 rwheeler joined #gluster
17:24 MrAbaddon joined #gluster
17:50 MrAbaddon joined #gluster
18:12 jbrooks joined #gluster
18:19 MrAbaddon joined #gluster
18:30 Telsin Any infra folk around? getting "<mailto:gluster-devel@gluster.org>... Deferred: Connection timed out with mx1.gluster.org." on an email
18:31 pladd__ joined #gluster
18:38 int-0x21 Hi, if I'm doing gluster on zfs (the zfs pool is 3+1 raidz1 nvme on 2 servers, a separate server will do the arbiter role) is there anything I need to think about with gluster pool creation?
18:39 int-0x21 Should the bricks be formatted as xfs, or? Should I make multiple bricks in the pool?
18:39 Telsin I do sub volumes in the pool as bricks, no extra formatting needed
18:40 Telsin check the wiki for ZFS though, there's a couple local vars you want to set on the pool you're using as bricks
18:41 int-0x21 Yea saw that but it just said "Go on and create pools" so i was a little confused :)
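
The "local vars" Telsin mentions are ZFS dataset properties; the Gluster-on-ZFS notes recommend at least xattr=sa, since gluster leans heavily on extended attributes. A sketch, with pool, dataset, and device names as placeholders:

    # 3+1 raidz1 on nvme, as in the question (device names assumed)
    zpool create brickpool raidz1 nvme0n1 nvme1n1 nvme2n1 nvme3n1
    # one dataset per brick; no xfs needed on top of ZFS
    zfs create brickpool/brick0
    # store xattrs in the inode -- much faster for gluster's metadata
    zfs set xattr=sa brickpool/brick0
    zfs set acltype=posixacl brickpool/brick0
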
18:43 russoisraeli What's currently considered a stable version of Gluster, ready for Production usage?
18:56 int-0x21 Does anyone have experience running multipath iscsi on gluster? Just wondering if there are any success stories (my setup will be replica 3 with arbiter, 3+1 nvme per host)
19:03 [diablo] joined #gluster
19:04 shyam joined #gluster
19:16 semiosis joined #gluster
19:16 farhorizon joined #gluster
19:19 farhorizon joined #gluster
19:22 bwerthmann joined #gluster
19:47 ThHirsch joined #gluster
19:52 hmamtora joined #gluster
19:55 MrAbaddon joined #gluster
20:05 koolfy joined #gluster
20:26 int-0x21 Uhm, something I'm not entirely clear on: if I do "gluster volume create testvol replica 3 arbiter 1 server1:brick0 server2:brick0 server3:brick0 server1:brick1 server2:brick1 server3:brick1" for example
20:27 int-0x21 Can a single file be bigger than a single brick, i.e. does it spread across the volume, or am I limited to individual bricks as max file size?
20:27 int-0x21 I'm doing a datastore for nfs, so a single vmdk can in many cases be bigger than a single brick
20:27 JoeJulian You are limited. If you want to exceed that you'll need to use ... (one sec... I always forget which keyword that is)
20:29 JoeJulian sharding
20:29 int-0x21 Sharding breaks up data as blocks instead of files ? (sort of)
20:31 int-0x21 Ah so i create the volume the same way with replica 3 arbiter 1 and then set features.shard on
20:31 int-0x21 And then incoming files will be broken into blocks instead of huge files
20:32 JoeJulian yes
20:34 int-0x21 Excellent :) thank you :) Since almost all of my data will be big files this should enable me to not end up with big gaping holes in the disks :) and also allow me to do a 1TB database on my 960GB nvme disks ;)
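
Pulling the thread together, a sketch of the volume int-0x21 describes (brick paths are placeholders; the third brick of each replica set, here on server3, becomes the arbiter):

    gluster volume create testvol replica 3 arbiter 1 \
        server1:/bricks/b0 server2:/bricks/b0 server3:/bricks/b0 \
        server1:/bricks/b1 server2:/bricks/b1 server3:/bricks/b1
    # enable sharding before writing data: only files created afterwards are sharded
    gluster volume set testvol features.shard on
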
20:35 _KaszpiR_ joined #gluster
20:38 JoeJulian +1!
20:46 int-0x21 I do not understand why there is still a bug where the vlan MTU is not set properly on boot; I cannot for the life of me remember how to set a wait on the vlan now either, so the parent can settle before it tries to set the MTU
20:46 JoeJulian Is this using systemd-networkd?
20:47 int-0x21 Yea
20:48 JoeJulian I just set the MTU on all netdevs
20:49 int-0x21 How do you mean?
20:56 int-0x21 Or maybe I'm confused; CentOS 7 is using NetworkManager
21:10 JoeJulian Oh, well the first thing I do is disable NM and enable systemd-networkd. `man systemd.netdev` `MTUBytes=9000`
21:10 JoeJulian networkd is much more configurable than NM.
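
A sketch of what JoeJulian describes, assuming a vlan named vlan10 (id 10) on parent eth0; all names here are placeholders:

    # /etc/systemd/network/20-vlan10.netdev
    [NetDev]
    Name=vlan10
    Kind=vlan
    MTUBytes=9000

    [VLAN]
    Id=10

    # /etc/systemd/network/10-eth0.network -- the parent carries the same MTU
    [Match]
    Name=eth0

    [Link]
    MTUBytes=9000

    [Network]
    VLAN=vlan10
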
21:12 int-0x21 I think that will be what I do with my morning coffee. Usually I do teaming and then this works.
21:14 int-0x21 Thanks again :)
21:42 msvbhat joined #gluster
22:01 vbellur joined #gluster
22:02 vbellur1 joined #gluster
22:02 hmamtora joined #gluster
22:03 gbox JoeJulian: I see in the channel logs you endorsed a "many bricks per peer" JBOD approach (distributed, replicated volumes?)  Any issues with non-uniform brick sizes?  So DHT and network cause most of the latency?
22:08 JoeJulian I've always kept my brick sizes uniform (except the one time I was forced not to. Hated every minute of that).
22:08 gbox Thanks. Hilarious!
22:08 JoeJulian Use lvm (or something) if you need to create uniform partitions
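
A sketch of that approach, with devices and sizes as placeholders (the -i size=512 inode size is the long-standing xfs recommendation for gluster bricks, to keep xattrs in the inode):

    pvcreate /dev/sdb /dev/sdc
    vgcreate gluster_vg /dev/sdb /dev/sdc
    # carve identically sized bricks regardless of underlying disk sizes
    lvcreate -L 1T -n brick1 gluster_vg
    lvcreate -L 1T -n brick2 gluster_vg
    mkfs.xfs -i size=512 /dev/gluster_vg/brick1
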
22:18 boutcheee520 joined #gluster
22:31 jbrooks joined #gluster
22:45 baber joined #gluster
23:10 baber joined #gluster
23:17 baber joined #gluster
23:35 map1541 joined #gluster
23:46 msvbhat joined #gluster
23:55 baber joined #gluster
