
IRC log for #gluster, 2017-04-24


All times shown according to UTC.

Time Nick Message
01:07 shdeng joined #gluster
01:38 xMopxShell Can gluster be upgraded directly from 3.7 to 3.10, or is a step-by-step upgrade needed?
01:42 vbellur joined #gluster
01:48 ilbot3 joined #gluster
01:48 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:53 Philambdo joined #gluster
02:22 Philambdo joined #gluster
02:27 JoeJulian xMopxShell: It should be possible. It was designed for that to work, but as always, you should test your use case first.
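
A rough sketch of the rolling-upgrade sequence that "test first" usually implies, run one server at a time on replicated volumes (the volume name and package manager are assumptions; check the 3.7-to-3.10 release notes before trusting this):

    systemctl stop glusterd
    killall glusterfs glusterfsd        # stop remaining brick and self-heal processes
    yum -y update glusterfs\*           # or the apt equivalent on Debian/Ubuntu
    systemctl start glusterd
    gluster volume heal myvol info      # wait for pending heals to drain before the next server
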
02:27 xMopxShell gotcha, thanks!
02:57 kramdoss_ joined #gluster
03:04 nbalacha joined #gluster
03:05 Gambit15 joined #gluster
03:18 KoSoVaR are there any general guidelines for spec'ing out hardware? if i wanted to buy 2U chassis full of 10TB 7200RPM drives, where do I need to be with CPUs/RAM and hot-tier NVMe drives to get 1Gbps throughput? i guess, generally speaking... how can I calculate something like this?
03:42 magrawal joined #gluster
03:46 skoduri joined #gluster
03:58 susant joined #gluster
04:05 dominicpg joined #gluster
04:05 raghug joined #gluster
04:06 aravindavk joined #gluster
04:07 atinm joined #gluster
04:32 Saravanakmr joined #gluster
04:35 Shu6h3ndu joined #gluster
04:43 ppai joined #gluster
04:44 skumar joined #gluster
04:45 gyadav joined #gluster
04:49 buvanesh_kumar joined #gluster
04:53 ankitr joined #gluster
04:59 ankitr joined #gluster
05:10 ndarshan joined #gluster
05:20 sanoj joined #gluster
05:33 karthik_us joined #gluster
05:36 kdhananjay joined #gluster
05:37 apandey joined #gluster
05:37 prasanth joined #gluster
05:44 ankitr joined #gluster
05:47 hgowtham joined #gluster
06:02 skoduri joined #gluster
06:15 Karan joined #gluster
06:18 kramdoss_ joined #gluster
06:19 ppai joined #gluster
06:19 sbulage joined #gluster
06:28 sona joined #gluster
06:30 mbukatov joined #gluster
06:30 susant joined #gluster
06:31 kotreshhr joined #gluster
06:34 Humble joined #gluster
06:35 skoduri joined #gluster
06:36 sbulage joined #gluster
06:49 msvbhat joined #gluster
06:51 rafi joined #gluster
06:54 armyriad joined #gluster
06:57 rastar joined #gluster
07:00 jiffin joined #gluster
07:04 ayaz joined #gluster
07:05 jwd joined #gluster
07:06 bartden joined #gluster
07:09 bartden Hi, is it possible that gluster has some issues when a file name is reused within a short period of time? Meaning, if file A is created in dir B and deleted, and afterwards file A is recreated in dir B, sometimes file A is corrupted. Is there any form of caching that I can adjust to minimize the issue?
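
One way to test whether client-side caching is implicated in a create/delete/recreate cycle like that is to switch the caching translators off and retry; these are standard volume options, and the volume name below is a placeholder:

    gluster volume set myvol performance.quick-read off
    gluster volume set myvol performance.io-cache off
    gluster volume set myvol performance.stat-prefetch off
    gluster volume set myvol performance.read-ahead off
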
07:21 amarts joined #gluster
07:28 prasanth joined #gluster
07:35 ashiq joined #gluster
07:39 fsimonce joined #gluster
07:42 Saravanakmr joined #gluster
07:54 ekarlso any of you using gdeploy ?
07:54 jiffin sac`: ^^
07:54 amarts ekarlso, what did you want to know about it?
07:55 kramdoss_ joined #gluster
07:57 percevalbot joined #gluster
08:10 susant left #gluster
08:11 susant joined #gluster
08:16 Prasad_ joined #gluster
08:20 amarts joined #gluster
08:31 skoduri joined #gluster
09:00 poornima_ joined #gluster
09:08 derjohn_mob joined #gluster
09:10 ankitr joined #gluster
09:28 nbalacha joined #gluster
09:29 flying joined #gluster
09:46 gyadav joined #gluster
09:56 sona joined #gluster
10:09 kharloss joined #gluster
10:10 msvbhat joined #gluster
10:21 gyadav_ joined #gluster
10:25 sona joined #gluster
10:27 gyadav__ joined #gluster
10:46 jkroon joined #gluster
10:56 amarts joined #gluster
11:01 kdhananjay1 joined #gluster
11:05 Prasad_ joined #gluster
11:09 msvbhat joined #gluster
11:20 p7mo joined #gluster
11:27 kotreshhr left #gluster
11:44 TBlaar joined #gluster
11:54 Shu6h3ndu joined #gluster
11:57 nbalacha joined #gluster
12:14 DV joined #gluster
12:18 ira joined #gluster
12:27 msvbhat joined #gluster
12:46 magrawal joined #gluster
12:58 KoSoVaR really looking for some help to spec out a general-use cluster... sweet spots for cpu/ram, if i'm using an LSI 3108, where the bottlenecks are, how many disks i should cram into a chassis, etc. looking to build out a new 1PB cluster
12:59 KoSoVaR ideally dense, so 10TB 3.5" 7200RPM disks are the drive of choice here... how can I calculate throughput if i'm doing raid6 and gluster 2-copy across 14 or so nodes?
12:59 KoSoVaR or fewer nodes if i can do those supermicro 60-90 bay chassis... without creating bottlenecks
13:09 baber joined #gluster
13:16 cloph KoSoVaR: do you *need* raid6 and additionally replication in gluster? Why not use a raid10 and gluster replication instead?
13:22 shyam joined #gluster
13:29 plarsen joined #gluster
13:33 kramdoss_ joined #gluster
13:34 vbellur joined #gluster
13:34 laurent_ joined #gluster
13:35 Ashutto joined #gluster
13:35 vbellur joined #gluster
13:35 Ashutto Hello
13:35 glusterbot Ashutto: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
13:35 laurent_ Hi everyone. I noticed the download site is down (https://download.gluster.org/), but I cannot find news about this incident
13:36 Ashutto when i try to mount my volume using ganesha, it says access denied by server but ganesha doesn't log a thing. Firewall seems ok as i have no "SYN-SENT" packets
13:36 sanoj joined #gluster
13:36 Ashutto how can i raise the error level? is this a common error that i can work around/solve?
13:37 Ashutto https://nopaste.me/view/85efbee8 this is my ganesha.conf
13:37 glusterbot Title: ganesha.conf - Nopaste.me (at nopaste.me)
13:38 laurent_ I'm stuck on my server provisioning. Do you guys know a mirror/fallback for the download site?
13:38 laurent_ https://download.gluster.org/
13:39 Ashutto which packages? You can try the SIG Storage for Redhat/Centos
13:41 laurent_ I'm searching for 3.7.15 / Debian
13:42 daveys110 joined #gluster
13:42 Ashutto i'm unaware of how debian works. Have you tried this link? https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.7
13:42 glusterbot Title: glusterfs-3.7 : “Gluster” team (at launchpad.net)
13:50 daveys110 Is there an ETA on when https://download.gluster.org will be back?
13:57 laurent_ Good idea, I'll try the PPA; although the PPA targets Ubuntu, it can work in production. Lesson learned: always keep a copy of your critical packages in a private repo :)
13:57 kpease joined #gluster
14:06 dominicpg joined #gluster
14:07 hajoucha joined #gluster
14:09 Saravanakmr joined #gluster
14:10 vbellur joined #gluster
14:10 vbellur joined #gluster
14:11 vbellur joined #gluster
14:11 KoSoVaR cloph initial spec is what i'm doing, the sizing guides say to go raid6 for the general use case... but again, that's what i'm trying to figure out. also, wouldn't i be getting only ~25% usable? (50% from raid10 x 50% from replication = 25%, vs ~80% from raid6 x 50% from replication = 40%)
14:11 vbellur joined #gluster
14:12 KoSoVaR cloph i'm referring to page 4 of this doc https://www.redhat.com/cms/managed-files/st-RHGS-QCT-config-size-guide-technology-detail-INC0436676-201608-en.pdf
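
The arithmetic behind those percentages, worked through in shell with made-up node and drive counts (none of these figures come from the conversation):

    # Hypothetical: 14 nodes, 12 x 10TB drives each, RAID6 per node, gluster replica 2
    raw=$((14 * 12 * 10))            # 1680 TB raw
    raid6=$((14 * (12 - 2) * 10))    # 1400 TB after 2 parity disks per node (~83%)
    usable=$((raid6 / 2))            # 700 TB after 2-way replication (~42% of raw)
    echo "${usable} TB usable of ${raw} TB raw"
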
14:12 vbellur joined #gluster
14:13 percevalbot joined #gluster
14:29 _ndevos joined #gluster
14:29 _ndevos joined #gluster
14:36 farhorizon joined #gluster
14:36 raghug joined #gluster
14:37 skylar joined #gluster
14:38 oajs joined #gluster
14:50 vezult left #gluster
14:54 nbalacha joined #gluster
15:00 laurent_ Just noticed that download.gluster.org is up
15:10 oajs joined #gluster
15:11 wushudoin joined #gluster
15:16 shyam joined #gluster
15:19 budric[m] joined #gluster
15:29 budric[m] hi, I'm trying to add-brick to a volume and set replica to 2.  The new peer doesn't start syncing existing content in the volume immediately.  In fact if I mount the volume using glusterfs fuse filesystem pointing to new peer the contents are empty.  How can I force the data to be replicated, and how can I monitor the progress so I can conclude when it's safe to mount volumes from the new peer?
15:38 ic0n joined #gluster
15:42 ic0n joined #gluster
15:48 shyam joined #gluster
16:07 skoduri joined #gluster
16:10 gem joined #gluster
16:18 sanoj joined #gluster
16:22 rwheeler joined #gluster
16:28 gem joined #gluster
16:30 Gambit15 joined #gluster
16:33 JoeJulian budric[m]: Unless something is wrong with your network, it shouldn't matter which server you mount from, you should always get the same results. The heal should start immediately. Check your logs.
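
For the add-brick case budric[m] describes, the heal of pre-existing data can be triggered and monitored with the standard heal commands (the volume name is a placeholder):

    gluster volume heal myvol full    # force a full self-heal of existing content
    gluster volume heal myvol info    # lists entries still pending heal; empty output means done
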
16:34 KoSoVaR Anyone on spec'ing out hardware? :)
16:34 JoeJulian Not I. There are too many variables and several of those are opinions.
16:35 JoeJulian I wouldn't choose either option as I've hit the "too many eggs in one basket" problem and there was no way to recover (it was ceph, but still).
16:38 KoSoVaR What is your opinion, then :)  of course if you don't mind sharing
16:39 farhorizon joined #gluster
16:40 JoeJulian As much as I hate the buzzword, converged systems (storage and compute in the same package) offer a greater ability to be both performant and fault tolerant.
16:41 JoeJulian ... it's also often less expensive overall.
16:57 jiffin joined #gluster
17:01 KoSoVaR buzzwords are great :p  yes, makes sense, but don't you think "too many eggs in one basket" boils down to trusting a vendor on the platform you're being given? i.e. a fully HP converged platform is all HP? even the networking?
17:02 farhorizon joined #gluster
17:04 budric[m] JoeJulian: i see, good to know that's not how it's intended to work, i'll check the configuration
17:09 amarts joined #gluster
17:11 msvbhat joined #gluster
17:14 gyadav__ joined #gluster
17:21 JoeJulian KoSoVaR: I had trust. We did tests. We thought they were exhaustive. When the lsi expander started failing under high loads once we had actual customers on it, we lost all their data. (I was not a happy camper). We had vendors fly out and try to fix it. After a week, it became cheaper to pay off the customer than it would have been to even have a chance at restoring the data.
17:23 KoSoVaR Damn.  That's deep...
17:29 chawlanikhil24 joined #gluster
17:32 Ashutto Hello, where can I find an up-to-date list of the ports gluster (and ganesha) communicate on? The documentation doesn't seem current as far as I can tell, since the brick runs on a 35xxx port while the documentation says 49xxx
17:34 chawlanikhil24 joined #gluster
17:34 kkeithley @ports
17:34 glusterbot kkeithley: glusterd's management port is 24007/tcp (also 24008/tcp if you use rdma). Bricks (glusterfsd) use 49152 & up. All ports must be reachable by both servers and clients. Additionally it will listen on 38465-38468/tcp for NFS. NFS also depends on rpcbind/portmap ports 111 and 2049.
17:35 chawlanikhil24 joined #gluster
17:38 kkeithley ganesha uses standard NFS ports, i.e. 2049. For nfs-rquota, 875.
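
Translated into firewall rules, the two answers above might look like the following on a firewalld host; this is only a sketch, and the brick-port range should match the number of bricks actually deployed:

    firewall-cmd --permanent --add-port=24007-24008/tcp    # glusterd management
    firewall-cmd --permanent --add-port=49152-49251/tcp    # bricks, one port each from 49152 up
    firewall-cmd --permanent --add-port=38465-38468/tcp    # gluster NFS
    firewall-cmd --permanent --add-port=111/tcp --add-port=111/udp    # rpcbind/portmap
    firewall-cmd --permanent --add-port=2049/tcp --add-port=875/tcp   # NFS and nfs-rquota (ganesha)
    firewall-cmd --reload
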
17:39 jiffin joined #gluster
17:43 ashiq joined #gluster
17:44 gem joined #gluster
17:46 farhorizon joined #gluster
17:49 cloph_away joined #gluster
17:55 gyadav joined #gluster
18:01 ankitr joined #gluster
18:06 farhorizon joined #gluster
18:16 baber joined #gluster
18:36 ekarlso fatal: [ovhost01]: FAILED! => {"changed": false, "failed": true, "msg": "number of bricks is not a multiple of replica count\nUsage: volume create <NEW-VOLNAME> [stripe <COUNT>] [replica <COUNT> [arbiter <COUNT>]] [disperse [<COUNT>]] [disperse-data <COUNT>] [redundancy <COUNT>] [transport <tcp|rdma|tcp,rdma>] <NEW-BRICK>?<vg_name>... [force]\n", "rc": 1}
18:36 ekarlso hmmm I have 8 peers / nodes and set replica count to 3 and arbiter to 1, is that wrong ?
18:50 Saravanakmr joined #gluster
18:54 ekarlso sooo quiet ;)
18:54 JoeJulian You asked during standup.
18:54 JoeJulian And lunch... ;)
18:55 ekarlso hah :p
18:55 ekarlso seems I need to do replica 2 and arbiter 0 ?
18:56 JoeJulian "replica 3 arbiter 1" actually means you have two replicas and a non-storage arbiter as the third. This prevents split-brain by arbitration.
18:56 JoeJulian The arbiter does store a small amount of metadata but does not need to store whole files.
18:56 ekarlso JoeJulian: hmmm yeah but it doesn't work, since the number of bricks needs to be a multiple of the replica count?
18:59 JoeJulian To build a distributed replica 3 arbiter 1 you need a multiple of 3 bricks (one of them doesn't require much storage). With 8 servers, you could do them in a round-robin configuration, i.e. server1:/data/brick server2:/data/brick server3:/meta/arbiter server3:/data/brick server4:/data/brick server5:/meta/arbiter ... etc.
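
Spelled out as a command, that round-robin layout would look roughly like this (the volume name is a placeholder; the bricks are taken verbatim from JoeJulian's example):

    gluster volume create myvol replica 3 arbiter 1 \
        server1:/data/brick server2:/data/brick server3:/meta/arbiter \
        server3:/data/brick server4:/data/brick server5:/meta/arbiter
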
19:00 ekarlso JoeJulian: hmmms trying to use gdeploy then but that's not possible or ?
19:00 JoeJulian I've never used gdeploy.
19:00 ekarlso darn : o
19:00 JoeJulian Seems odd to wrap something as simple as the gluster cli with something to make it even simpler.
19:01 ekarlso I dunno JoeJulian I just used th tool mentioned by some posts :)
19:01 JoeJulian I'm not judging.
19:02 ekarlso JoeJulian: how big should the meta/arbiter mount be then if the bricks are 25g ?
19:03 JoeJulian 1 inode per dirent, roughly.
19:03 ekarlso JoeJulian: hmmmm that doesn't tell me much :o
19:04 JoeJulian Depends on the size of the inode and the number of files. The size of the storage is irrelevant for an arbiter.
19:04 ekarlso JoeJulian: very little file count basically, it's vm storage for the ovirt engine
19:05 JoeJulian An inode is generally 512 bytes by default.
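
A back-of-the-envelope version of that sizing, with a made-up file count:

    files=1000000                                # hypothetical dirent count
    echo "$(( files * 512 / 1024 / 1024 )) MB"   # ~488 MB of arbiter metadata at 512 B per inode
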
19:05 ekarlso hmmmm, JoeJulian is there any info I could provide to get some more guidance on how to set this up?
19:07 JoeJulian Seems pretty straightforward to me. I would do replica 3 either with or without an arbiter (I prefer 3 copies of my data, but your sla and budget probably differ).
19:07 JoeJulian I would enable sharding for VM images.
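
Sharding is a per-volume option; a minimal sketch with the stock option names (the volume name is a placeholder, and VM-image deployments often raise the block size above the default):

    gluster volume set myvol features.shard on
    gluster volume set myvol features.shard-block-size 64MB    # the default; commonly increased for VM images
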
19:10 ekarlso JoeJulian: [root@ovhost01 ~]# gluster volume create foo replica 3 arbiter 1
19:11 ekarlso ah needs bricks passed in as well
19:17 ekarlso Jules-: sorry for all the q's but I haven't used gluster before :p
19:17 ekarlso closest I've come is Panasas for HPC : p
19:22 ekarlso JoeJulian: gluster volume create foo replica 3 arbiter 1 transport tcp ovhost20:/gluster/brick1 ovhost21:/gluster/brick1 ovhost22:/gluster/brick1 ovhost23:/gluster/brick1 < still gives an error about bricks is not a multiple of replicas..
19:31 bartden joined #gluster
19:32 bartden Hi, is there a maximum number of concurrent threads on clients that can read from a gluster volume?
19:45 ekarlso JoeJulian: thnx for the help I got it working )
19:49 gem joined #gluster
19:50 jkroon joined #gluster
19:53 derjohn_mob joined #gluster
19:55 JoeJulian ekarlso: Excellent. And I had a very tasty chicken parm sandwich from a food truck. :D
19:56 baber joined #gluster
19:56 JoeJulian bartden: Not that I'm aware of. Probably some kernel limit if going through fuse.
20:04 ekarlso LOL :p
20:08 oajs joined #gluster
20:10 KoSoVaR JoeJulian I think I'm going raid6 backed dispersed.. with LSI controllers. pray.
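
For reference, a dispersed volume uses the disperse/redundancy keywords from the usage string pasted earlier; hosts, counts, and paths below are assumptions, not KoSoVaR's actual layout:

    gluster volume create dispvol disperse 6 redundancy 2 \
        server1:/data/brick server2:/data/brick server3:/data/brick \
        server4:/data/brick server5:/data/brick server6:/data/brick
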
20:26 JoeJulian 🤞
20:27 farhoriz_ joined #gluster
20:45 baber joined #gluster
20:54 farhorizon joined #gluster
20:54 ira joined #gluster
21:02 ingard__ hi. anyone mounting their volumes with 10+ ms latency?
21:03 ingard__ i'm getting shit throughput on my disperse volume when mounting it from the 2nd DC
21:06 vbellur joined #gluster
21:07 vbellur joined #gluster
21:08 vbellur joined #gluster
21:08 vbellur joined #gluster
21:09 vbellur joined #gluster
21:10 vbellur joined #gluster
21:10 JoeJulian 10+ms? That's a lot of latency for a filesystem.
21:10 vbellur joined #gluster
21:33 vbellur joined #gluster
21:33 vbellur joined #gluster
21:34 vbellur joined #gluster
21:35 vbellur joined #gluster
21:36 vbellur joined #gluster
21:37 vbellur joined #gluster
21:38 farhorizon joined #gluster
21:39 vbellur1 joined #gluster
21:41 vbellur joined #gluster
21:42 vbellur1 joined #gluster
21:45 vbellur joined #gluster
21:47 vbellur joined #gluster
21:53 vbellur joined #gluster
21:54 ekarlso JoeJulian: what is the reason for 2 hosts seeing each other as disconnected ?
21:57 JoeJulian ekarlso: Usually it's firewall.
22:03 vbellur joined #gluster
22:05 farhorizon joined #gluster
22:15 vbellur joined #gluster
22:16 farhoriz_ joined #gluster
22:16 oajs joined #gluster
22:21 ekarlso https://pastebin.com/MTSuvYFS < JoeJulian
22:21 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
22:33 oajs joined #gluster
22:34 Wizek_ joined #gluster
22:45 farhorizon joined #gluster
23:07 vbellur joined #gluster
23:23 JoeJulian ~ports | ekarlso
23:23 glusterbot ekarlso: glusterd's management port is 24007/tcp (also 24008/tcp if you use rdma). Bricks (glusterfsd) use 49152 & up. All ports must be reachable by both servers and clients. Additionally it will listen on 38465-38468/tcp for NFS. NFS also depends on rpcbind/portmap ports 111 and 2049.
23:25 JoeJulian I have in front of me a 4k 3840x2160 monitor. Pastebin lets me have a window of 939x576 so they can paper the rest with useless information. Why is this popular?
23:26 shyam joined #gluster
23:27 vbellur joined #gluster
23:58 jkroon joined #gluster
