IRC log for #gluster, 2017-10-10

All times shown according to UTC.

Time Nick Message
00:00 cyberbootje1 joined #gluster
00:03 baber joined #gluster
00:21 baber joined #gluster
00:34 baber joined #gluster
00:50 BlackoutWNCT joined #gluster
00:51 baber joined #gluster
01:04 baber joined #gluster
01:13 baber joined #gluster
01:26 MrAbaddon joined #gluster
01:55 ilbot3 joined #gluster
01:55 Topic for #gluster is now Gluster Community - https://www.gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - https://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:59 Gambit15 joined #gluster
02:02 gospod3 joined #gluster
02:27 BlackoutWNCT Hey Guys, I'm having some trouble with the glusterfs samba vfs module, and was wondering if someone could walk me through the output of the following log file.
02:27 BlackoutWNCT https://paste.ubuntu.com/25711392/
02:27 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
03:04 ronrib_ joined #gluster
03:11 skoduri joined #gluster
03:37 vbellur joined #gluster
03:37 vbellur joined #gluster
03:38 vbellur joined #gluster
03:44 nbalacha joined #gluster
03:54 psony joined #gluster
04:05 ramteid joined #gluster
04:08 kramdoss_ joined #gluster
04:13 msvbhat_ joined #gluster
04:13 msvbhat joined #gluster
04:19 poornima joined #gluster
04:21 atinm|mtg joined #gluster
04:23 Shu6h3ndu joined #gluster
04:25 vijay joined #gluster
04:27 rafi joined #gluster
04:28 skumar joined #gluster
04:42 ppai joined #gluster
04:47 Saravanakmr joined #gluster
04:56 sanoj joined #gluster
04:58 aravindavk joined #gluster
05:09 kdhananjay joined #gluster
05:09 karthik_us joined #gluster
05:11 Prasad joined #gluster
05:15 jiffin joined #gluster
05:15 xavih joined #gluster
05:22 buvanesh_kumar joined #gluster
05:23 buvanesh_kumar joined #gluster
05:26 kdhananjay joined #gluster
05:27 root_rtfm joined #gluster
05:28 omie888777 joined #gluster
05:31 root_rtfm hello
05:31 glusterbot root_rtfm: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer
05:33 root_rtfm With a replicated-only volume on 3 nodes, is it possible to clone a disk with rsync to create another node, and to limit synchronization between the nodes?
05:33 root_rtfm thks
05:37 omie888777 joined #gluster
05:52 msvbhat joined #gluster
05:53 msvbhat_ joined #gluster
06:00 kotreshhr joined #gluster
06:06 jkroon joined #gluster
06:09 hgowtham joined #gluster
06:10 susant joined #gluster
06:15 xavih joined #gluster
06:17 armyriad joined #gluster
06:26 Prasad_ joined #gluster
06:27 mbukatov joined #gluster
06:28 jtux joined #gluster
06:45 rouven_ joined #gluster
06:59 rouven joined #gluster
07:01 xavih joined #gluster
07:11 major joined #gluster
07:11 rouven_ joined #gluster
07:14 rouven joined #gluster
07:34 [diablo] joined #gluster
07:36 rastar joined #gluster
07:37 sanoj joined #gluster
07:39 fsimonce joined #gluster
08:02 _KaszpiR_ joined #gluster
08:06 sanoj joined #gluster
08:09 kdhananjay joined #gluster
08:14 Prasad joined #gluster
08:17 nh2 joined #gluster
08:19 sahina joined #gluster
08:21 Klas I am looking for documentation on how to disable/remove geo-replication in glusterfs. If the method differs between versions, I'm running 3.7.15 atm. Anyone know where to look?
08:25 sahina ndevos, kkeithley - are you the right people to contact about glusterfs packaging in fedora?
08:29 Klas found it in http://docs.gluster.org/en/latest/Administrator%20Guide/Geo%20Replication/ under deleting
08:29 glusterbot Title: Geo Replication - Gluster Docs (at docs.gluster.org)
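The delete procedure that page describes boils down to stopping the geo-replication session and then removing it. A minimal sketch of the commands, with "myvol" and "slavehost::slavevol" as placeholder names not taken from this log:

    # stop the running geo-replication session first
    gluster volume geo-replication myvol slavehost::slavevol stop
    # then remove the session configuration
    gluster volume geo-replication myvol slavehost::slavevol delete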
08:32 _KaszpiR_ joined #gluster
08:34 xavih joined #gluster
08:37 apandey joined #gluster
08:54 sahina joined #gluster
09:01 lefreut joined #gluster
09:01 lefreut hey guys
09:02 lefreut are replicated volumes supported over a low-latency WAN? or is geo-replication possible in multi-master mode?
09:03 lefreut you have to pay to see the article on the redhat website ;(
09:07 jiffin1 joined #gluster
09:09 leaving joined #gluster
09:13 leaving I mounted a gluster fs locally, and an ls of the directory takes about 1 min.
09:14 leaving does anyone know the reason? thx
09:17 leaving joined #gluster
09:18 leaving anybody online?
09:18 lefreut leaving: yup but i know less than you on gluster :D
09:19 lefreut leaving: as a quick answer how many files in the directory and what is the latency between nodes? (just read that: https://joejulian.name/blog/glusterfs-replication-dos-and-donts/)
09:19 glusterbot Title: GlusterFS replication do's and don'ts (at joejulian.name)
09:20 leaving about 21 files
09:20 leaving very slow
09:21 lefreut so logically (as said in the article) your ls will take 21*2*latency
09:21 lefreut does that seem about right?
09:21 lefreut 21*4*latency sorry
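To put rough numbers on that estimate (the latencies below are illustrative assumptions, not measurements from this conversation):

    21 files x 4 round trips x 0.5 ms LAN latency =   42 ms per ls
    21 files x 4 round trips x 50 ms  WAN latency = 4200 ms (4.2 s) per ls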
09:22 leaving sometimes my program calls the read or write function, but the system call hangs and never returns.
09:23 lefreut out of my knowledge sorry
09:23 leaving thx
09:23 kotreshhr joined #gluster
09:24 leaving I'll switch to the #gluster-dev channel and have a try.
09:26 leaving left #gluster
09:27 Wizek_ joined #gluster
09:36 [diablo] joined #gluster
09:40 Humble joined #gluster
09:43 jarbod joined #gluster
09:46 jiffin1 joined #gluster
09:47 vbellur joined #gluster
09:49 vbellur1 joined #gluster
09:52 atinm|mtg joined #gluster
09:55 msvbhat_ joined #gluster
09:55 msvbhat joined #gluster
09:55 rouven joined #gluster
10:01 sahina joined #gluster
10:02 kotreshhr joined #gluster
10:03 Klas hmm, I removed geo-replication from a volume, but a couple of options are still set
10:03 Klas geo-replication.ignore-pid-check: on
10:03 Klas geo-replication.indexing: on
10:03 Klas is it possible to remove these (not just set them to off)?
10:04 Klas I've created a simple alarm based on if geo-replication output is present, it should warn if anything is wrong on that volume
10:05 atinm|mtg joined #gluster
10:06 jarbod joined #gluster
10:15 Klas gluster vol reset ${volname} geo-replication.ignore-pid-check force
10:15 Klas gluster vol reset ${volname} geo-replication.indexing force
10:15 Klas seems to work, any particular reason this needs the force?
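A minimal sketch of the kind of alarm Klas mentions above, flagging any geo-replication options that are still set on a volume; the volume name "myvol" is a placeholder:

    #!/bin/sh
    # gluster volume info lists leftover options under "Options Reconfigured",
    # one per line, e.g. "geo-replication.indexing: on"
    if gluster volume info myvol | grep -q '^geo-replication\.'; then
        echo "WARNING: geo-replication options still set on myvol"
    fi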
10:24 skoduri joined #gluster
10:31 sahina ndevos, ping..do you know who handles fedora packaging for glusterfs?
10:31 ndevos sahina: that is mostly kkeithley, but there are several others that can do it too
10:32 ndevos sahina: the .spec in Fedora is based on the glusterfs.spec.in from the main glusterfs repository, so any changes need to be made there first
10:32 sahina ndevos, ok..sandro had a question if we plan to package gluster for f25/26 as well?
10:33 ndevos sahina: glusterfs is already part of Fedora, and has been for many years?
10:34 sahina ndevos, sorry..i should have been more specific :). 3.12 is available only on f27 and above?
10:34 sahina ndevos, will it be available in f25/26 as well?
10:35 ndevos sahina: different versions of Fedora have different versions of glusterfs, we don't do major version upgrades within fedora releases
10:36 ndevos sahina: see https://apps.fedoraproject.org/packages/glusterfs/ for the current versions
10:36 glusterbot Title: Package glusterfs (at apps.fedoraproject.org)
10:37 ndevos sahina: there are packages on download.gluster.org as well, for different Fedora versions, see http://docs.gluster.org/en/latest/Install-Guide/Community_Packages/
10:37 glusterbot Title: Community Packages - Gluster Docs (at docs.gluster.org)
10:37 sahina ndevos, thanks for that
10:38 Wizek_ joined #gluster
10:39 sahina ndevos, so it should be available in f25 and f26 as per that
10:40 ndevos sahina: yes, just not part of the standard fedora repositories, those versions are always the latest at the time the fedora release is made
10:40 sahina ndevos, ok
11:30 kkeithley sahina, ndevos: also http://gluster.readthedocs.io/en/latest/Install-Guide/Community_Packages/
11:30 kramdoss_ joined #gluster
11:30 glusterbot Title: Community Packages - Gluster Docs (at gluster.readthedocs.io)
11:33 ndevos kkeithley: I prefer the docs.gluster.org link I passed ;-)
11:34 kkeithley Oh, I didn't see that. And I forgot we're moving off readthedocs
11:35 * kkeithley can't keep up with our ever-changing documenation strategy
11:38 kkeithley documentation even
11:38 marin[m] hi, i want to create a striped replicated volume using 30 nodes in 2  racks
11:38 marin[m] and it is important to replicate data across the racks
11:38 marin[m] what's the order of the bricks?
11:38 marin[m] i will have stripe 15 replica 2
11:39 marin[m] in which order should i specify the machines
11:39 marin[m] first rack list, second rack list, or interlaced
11:39 marin[m] rack1-srv1, rack2-srv1 .... ?
11:40 marin[m] this is not clear from the documentation
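For the replica-placement part of the question: with replica 2, gluster builds each replica pair from consecutive bricks on the command line, so interleaving the racks (rack1-srv1, rack2-srv1, rack1-srv2, rack2-srv2, ...) keeps the two copies of every file on different racks. A hedged sketch with placeholder brick paths, leaving the stripe question aside (see the advice against stripe below):

    gluster volume create backups replica 2 \
        rack1-srv1:/bricks/b1 rack2-srv1:/bricks/b1 \
        rack1-srv2:/bricks/b1 rack2-srv2:/bricks/b1 \
        ...   # and so on for the remaining server pairs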
11:43 rouven joined #gluster
11:44 Prasad_ joined #gluster
11:46 msvbhat joined #gluster
11:46 msvbhat_ joined #gluster
11:57 aravindavk joined #gluster
12:00 rouven joined #gluster
12:07 ndevos marin[m]: you really should not use stripe, it has been deprecated and will not receive (m)any updates
12:07 ndevos marin[m]: if you want to split large files into smaller ones, you should look into sharding instead
12:08 marin[m] yes, i need to store files larger than 1 single brick
12:08 marin[m] so distributed is not an option
12:08 ndevos then sharding is for you
12:08 marin[m] let me get some info on that...
12:08 ndevos distribute+replicate with sharding enabled
12:09 lefreut is sharding only for the "big files" use case or does it improve iops/bandwidth?
12:09 marin[m] i don't care much for performance.. if it's not completely crap
12:09 marin[m] it's a volume used for backups
12:09 vbellur joined #gluster
12:09 marin[m] in a setup with lots of machines with relatively small disks
12:09 marin[m] i just need to pool together a number of machines and save large files on them
12:10 ndevos it can improve iops/bandwidth; it depends a little on the workload and all, but distribution tends to be good, so more storage servers handle the load
12:10 marin[m] these are the constraints of the setup, if it was my design i would simply get 1-2 machines with large disks ...
12:10 ndevos @sharding
12:10 glusterbot ndevos: for more details about sharding, see http://blog.gluster.org/2015/12/introducing-shard-translator
12:10 lefreut ndevos: thanks :)
12:11 marin[m] thanks, i'll read into it
12:12 marin[m] ok, so shard is a volume feature, not a .. type, like striped
12:13 marin[m] thanks for this, i really didn't come across this blog post, it would have cleared a lot of things :)
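Enabling sharding on an existing distribute+replicate volume is a couple of volume-set options; a minimal sketch with "backups" as a placeholder volume name and an assumed shard size:

    # split files into fixed-size shards from now on (only newly written files are sharded)
    gluster volume set backups features.shard on
    # optionally pick a shard size; the default in this era's releases is 4MB
    gluster volume set backups features.shard-block-size 512MB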
12:14 marin[m] any real issue with having the brick in a mountpoint directly, as opposed to a subdirectory?
12:15 marin[m] if you create a volume like this you get a warning and you have to force it
12:23 sac` joined #gluster
12:26 rouven joined #gluster
12:33 rouven joined #gluster
12:33 marbu joined #gluster
12:34 shyam joined #gluster
12:41 |R joined #gluster
12:45 Prasad joined #gluster
12:45 lefreut if i understand correctly, geo-replication works only one way (master -> slave), so the slave is kind of "read-only" (writes don't get committed back to the master). question is: has anyone seen geo-replication used for anything other than backup?
12:50 kramdoss_ joined #gluster
12:55 marin[m] any special options that i need to use when mounting a sharded volume?
13:10 Prasad joined #gluster
13:13 |R I didn't realize that a brick could only offer 1 view, no matter how many volumes you set up using that brick. So, for people doing containers, you have to create partitions in advance for however many different mounts you'll need? That sounds ... heavy.
13:17 jtux joined #gluster
13:18 dxlsm Hey all.. looking for some assistance and/or confirmation about something we're seeing with our gluster cluster. We had eight nodes with 16 bricks 6TB each. That has been running fine for a year. We're running in straight replica-2. We added eight more nodes with 12 bricks 10TB each. The moment we added the first brick, performance on the cluster totally tanked. From what I read, a fix-layout (and, at some point, a file rebalance) would be needed, so I ki
13:19 kdhananjay joined #gluster
13:20 dxlsm To the questions: Is this expected? Is there a better strategy for adding bricks that doesn't involve an unusable cluster for days on end? If this is not expected, is there something I should be looking at to find a fix? Log files are pretty quiet.
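For readers following along, the fix-layout and rebalance steps dxlsm refers to are the usual ones after adding bricks; "myvol" below is a placeholder volume name:

    # recompute the directory layout so new files can land on the new bricks (no data moved)
    gluster volume rebalance myvol fix-layout start
    # later, optionally migrate existing files onto the new bricks
    gluster volume rebalance myvol start
    # check progress of either operation
    gluster volume rebalance myvol status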
13:24 skylar1 joined #gluster
13:24 sahina joined #gluster
13:26 msvbhat joined #gluster
13:26 msvbhat_ joined #gluster
13:32 lefreut dxlsm: i'm even more of a noob than you are, but in a previous discussion in this channel this article was quoted: http://blog.gluster.org/introducing-shard-translator/
13:32 lefreut dxlsm: as it states "Adding servers can happen in any number (even one at a time) and DHT’s rebalance will spread out the “piece files” evenly." i guess you should try sharding?
13:33 lefreut unless you're on redhat and as far as i know sharding is only supported for ovirt
13:33 vavuthu_ joined #gluster
13:37 vijay joined #gluster
13:42 hmamtora joined #gluster
13:42 hmamtora_ joined #gluster
13:46 aravindavk joined #gluster
13:49 skumar joined #gluster
13:51 dxlsm lefreut: Thanks.. I'll take a look at that article. We're not sharding, so ???? I'll see what it has to say.
13:52 rouven joined #gluster
13:52 lefreut dxlsm: hopefully that's a volume option you can activate... i have no idea if it can have a negative impact when it starts sharding though, so don't test it in prod (or hope someone more experienced than i am answers)
13:55 rouven joined #gluster
13:57 dxlsm thanks..
14:01 msvbhat joined #gluster
14:01 msvbhat_ joined #gluster
14:07 rouven joined #gluster
14:09 jobewan joined #gluster
14:39 snehring joined #gluster
14:44 farhorizon joined #gluster
14:53 kotreshhr left #gluster
14:54 nbalacha joined #gluster
14:59 kpease joined #gluster
15:01 kpease_ joined #gluster
15:01 vijay joined #gluster
15:11 jefarr_ joined #gluster
15:16 major joined #gluster
15:17 kpease joined #gluster
15:17 ivan_rossi joined #gluster
15:22 arif-ali joined #gluster
15:23 ivan_rossi left #gluster
15:28 msvbhat joined #gluster
15:30 msvbhat_ joined #gluster
16:03 snehring joined #gluster
16:05 atinm joined #gluster
16:43 Bonaparte joined #gluster
16:44 shyam joined #gluster
16:44 Bonaparte Hello. I had a working setup of gluster replica bricks. Today, I noticed that the files are not syncing between bricks. I tried to heal and I get this message: Gathering list of healed entries on volume xxxx has been unsuccessful on bricks that are down. Please check if all brick processes are running.
16:45 Bonaparte How do I go about verifying that all brick processes are indeed running?
16:49 shyam joined #gluster
16:57 hmamtora_ gluster volume info
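As a general tip (not part of the exchange above): gluster volume status reports an Online flag and a PID for every brick process, which answers the "are all brick processes running" question directly; "myvol" is a placeholder volume name:

    # shows Online (Y/N) and the PID of each brick's glusterfsd process
    gluster volume status myvol
    # per-brick details such as disk usage and inode counts
    gluster volume status myvol detail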
17:10 Shu6h3ndu joined #gluster
17:12 jkroon joined #gluster
17:37 farhorizon joined #gluster
18:04 rouven joined #gluster
18:12 msvbhat joined #gluster
18:12 msvbhat_ joined #gluster
18:13 Vapez joined #gluster
18:34 msvbhat joined #gluster
18:34 msvbhat_ joined #gluster
18:39 vbellur joined #gluster
18:41 skylar1 joined #gluster
19:00 msvbhat joined #gluster
19:00 msvbhat_ joined #gluster
19:05 vbellur1 joined #gluster
19:06 vbellur joined #gluster
19:14 Teraii joined #gluster
19:37 jiffin joined #gluster
20:27 shyam joined #gluster
20:34 dlambrig joined #gluster
20:39 dlambrig joined #gluster
20:39 cliluw joined #gluster
20:40 ThHirsch joined #gluster
20:46 mattmcc joined #gluster
20:52 jiffin joined #gluster
21:00 farhorizon joined #gluster
21:07 MrAbaddon joined #gluster
21:16 skylar1 joined #gluster
21:22 farhorizon joined #gluster
21:38 _KaszpiR_ joined #gluster
21:52 rouven joined #gluster
22:10 shyam joined #gluster
22:21 ic0n_ joined #gluster
22:23 farhorizon joined #gluster
22:38 shyam joined #gluster
