
IRC log for #gluster, 2017-05-11


All times shown according to UTC.

Time Nick Message
00:15 msvbhat joined #gluster
00:42 kramdoss_ joined #gluster
00:43 caitnop joined #gluster
01:11 gyadav joined #gluster
01:19 shyam joined #gluster
01:20 alejojo joined #gluster
01:20 bmurt joined #gluster
01:24 alekun joined #gluster
01:28 derjohn_mobi joined #gluster
01:42 gyadav joined #gluster
01:50 ilbot3 joined #gluster
01:50 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:52 shdeng joined #gluster
01:58 nh2 joined #gluster
02:06 shyam left #gluster
02:38 atinmu joined #gluster
03:16 gem joined #gluster
03:39 nbalacha_ joined #gluster
03:42 riyas joined #gluster
03:43 nbalacha joined #gluster
03:48 gem joined #gluster
03:54 k0nsl joined #gluster
03:54 k0nsl joined #gluster
03:59 gyadav joined #gluster
04:03 ankitr joined #gluster
04:14 kramdoss_ joined #gluster
04:33 ppai joined #gluster
04:36 skumar joined #gluster
04:36 poornima joined #gluster
04:36 sanoj joined #gluster
04:44 al joined #gluster
04:44 msvbhat joined #gluster
04:55 buvanesh_kumar joined #gluster
05:01 ankitr joined #gluster
05:05 itisravi joined #gluster
05:08 Prasad joined #gluster
05:13 Shu6h3ndu joined #gluster
05:13 msvbhat joined #gluster
05:19 ahino joined #gluster
05:26 rafi1 joined #gluster
05:33 sona joined #gluster
05:33 ndarshan joined #gluster
05:40 karthik_us joined #gluster
05:48 Prasad_ joined #gluster
05:55 mbukatov joined #gluster
06:02 gyadav_ joined #gluster
06:06 rafi1 joined #gluster
06:07 msvbhat joined #gluster
06:08 gyadav__ joined #gluster
06:15 Karan joined #gluster
06:19 rafi joined #gluster
06:21 amarts joined #gluster
06:24 ksandha_ joined #gluster
06:34 TBlaar joined #gluster
06:35 kdhananjay joined #gluster
06:36 jiffin joined #gluster
06:52 Prasad__ joined #gluster
06:53 Karan joined #gluster
06:53 karthik_us joined #gluster
06:58 ivan_rossi joined #gluster
07:03 jkroon joined #gluster
07:24 [diablo] joined #gluster
07:29 fsimonce joined #gluster
07:30 gem joined #gluster
07:46 poornima_ joined #gluster
07:55 apandey joined #gluster
08:00 ayaz joined #gluster
08:13 flying joined #gluster
08:13 Abazigal joined #gluster
08:19 Abazigal Hi guys! I'm trying to create a volume across 4 servers that have 4 bricks each; as I want the data to stay readable during server maintenance/failure, I want replica 2; but when I create my volume, gluster tells me that having multiple bricks on the same server is bad
08:20 Abazigal so I'm guessing that if I force creation, data can end up replicated on 2 bricks hosted on the same server?
08:21 amarts joined #gluster
08:21 Abazigal so ... what can I do to achieve my goal ? Is it somehow possible to build a volume on top of existing volume ?
08:22 Abazigal (so that I can create a Distrib volume for each server, then a Distributed Replicated volume on top of these ?)
08:35 Abazigal ok, my bad; apparently the underlying organization of a Distributed Replicated volume is deduced from the order of the bricks we give on the CLI
08:35 derjohn_mobi joined #gluster
08:49 Prasad_ joined #gluster
08:52 Prasad joined #gluster
08:55 cloph Abazigal: it is bad in the way that you will have to take care of proper placement yourself. replicating to 2 bricks on the same server doesn't make sense though, at least not with your goal of having replica 2 and being able to turn off one server for maintenance.
08:58 Abazigal yes, I agree; now that I know the order of parameters is important, I managed to do what I want without any warning from gluster
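For context on the exchange above: in a distributed-replicated volume, replica sets are formed from consecutive bricks in the order they are listed on the command line, so interleaving bricks from different servers keeps each replica pair on separate hosts. A minimal sketch of such a create command, with hypothetical hostnames server1..server4, brick paths /data/brickN, and volume name myvol:

    # Consecutive pairs become replica sets: (server1:/data/brick1, server2:/data/brick1),
    # (server3:/data/brick1, server4:/data/brick1), and so on - never two bricks of one server.
    gluster volume create myvol replica 2 \
        server1:/data/brick1 server2:/data/brick1 \
        server3:/data/brick1 server4:/data/brick1 \
        server1:/data/brick2 server2:/data/brick2 \
        server3:/data/brick2 server4:/data/brick2

    # Confirm how the bricks were grouped
    gluster volume info myvol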
09:00 kramdoss_ joined #gluster
09:12 atinmu joined #gluster
09:14 sona joined #gluster
09:20 kramdoss_ joined #gluster
09:40 Wizek_ joined #gluster
09:44 kramdoss_ joined #gluster
09:58 poornima_ joined #gluster
10:04 msvbhat joined #gluster
10:18 jiffin joined #gluster
10:20 jiffin joined #gluster
10:21 Karan joined #gluster
10:26 ppai joined #gluster
10:26 Wizek_ joined #gluster
10:38 atinmu joined #gluster
10:55 kramdoss_ joined #gluster
11:10 telius joined #gluster
11:12 vinurs joined #gluster
11:14 vinurs joined #gluster
11:23 kramdoss_ joined #gluster
11:35 chatter29 joined #gluster
11:36 chatter29 hey guys
11:36 chatter29 allah is doing
11:36 chatter29 sun is not doing allah is doing
11:36 chatter29 to accept Islam say that i bear witness that there is no deity worthy of worship except Allah and Muhammad peace be upon him is his slave and messenger
11:38 Prasad_ joined #gluster
11:46 itisravi_ joined #gluster
11:54 poornima_ joined #gluster
12:08 gem joined #gluster
12:09 skumar joined #gluster
12:10 nh2 joined #gluster
12:13 shyam joined #gluster
12:38 apandey_ joined #gluster
12:43 nbalacha joined #gluster
12:43 baber joined #gluster
12:46 msvbhat joined #gluster
12:56 Karan joined #gluster
13:07 poornima_ joined #gluster
13:20 nbalacha joined #gluster
13:30 shyam joined #gluster
13:31 gyadav__ joined #gluster
13:35 skylar1 joined #gluster
13:35 Wizek_ joined #gluster
13:38 jiffin joined #gluster
13:39 derjohn_mobi joined #gluster
13:42 msvbhat joined #gluster
13:47 kdhananjay joined #gluster
13:50 nbalacha joined #gluster
13:55 [diablo] joined #gluster
13:58 ppai joined #gluster
14:03 _KaszpiR_ are there any plans for gluster to become topology-aware, like elasticsearch, where it avoids placing chunks on given nodes?
14:04 _KaszpiR_ like rack1, rack2, or AWS availability zones etc
14:06 msvbhat joined #gluster
14:13 ccha3 about profiling, what is FXATTROP ?
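As background for the profiling question: FXATTROP is the fd-based extended-attribute operation (the replication translator uses it to update its changelog xattrs), and it shows up in the per-fop counters produced by volume profiling. A minimal sketch of gathering those counters, assuming a hypothetical volume name myvol:

    # Enable per-brick fop statistics (latency and call counts per operation type)
    gluster volume profile myvol start

    # Dump the counters; FXATTROP appears here alongside LOOKUP, WRITE, etc.
    gluster volume profile myvol info

    # Turn profiling off again when finished
    gluster volume profile myvol stop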
14:15 atinmu joined #gluster
14:28 nbalacha joined #gluster
14:50 farhorizon joined #gluster
14:57 plarsen joined #gluster
15:05 nbalacha joined #gluster
15:19 wushudoin joined #gluster
15:36 jkroon joined #gluster
15:42 msvbhat joined #gluster
15:59 Wizek_ joined #gluster
16:00 kramdoss_ joined #gluster
16:01 jbrooks joined #gluster
16:08 bmurt joined #gluster
16:13 gyadav__ joined #gluster
16:15 gem joined #gluster
16:28 riyas joined #gluster
16:32 Karan joined #gluster
16:38 btspce joined #gluster
16:41 farhorizon joined #gluster
16:46 Shu6h3ndu joined #gluster
16:50 jbrooks joined #gluster
17:16 atinmu joined #gluster
17:17 jkroon joined #gluster
17:25 baber joined #gluster
17:30 gem joined #gluster
17:31 farhoriz_ joined #gluster
17:37 riyas joined #gluster
17:49 akshay joined #gluster
17:50 akshay WORM mode is slowing things down even further
17:50 akshay Write Once Read Many was exactly what I was looking for, for a read-heavy small-file workload
17:51 akshay Will WORM mode help speed things up?
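For anyone reading along: WORM (write once, read many) is a volume option aimed at immutability/retention rather than read performance, which likely explains the slowdown observed above rather than a speedup. A minimal sketch of toggling it, assuming a hypothetical volume name myvol:

    # Make files immutable once written; this adds checks on the write path
    # and is not a small-file read accelerator.
    gluster volume set myvol features.worm on

    # Inspect or revert the setting
    gluster volume get myvol features.worm
    gluster volume set myvol features.worm off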
18:07 samikshan joined #gluster
18:09 jbrooks joined #gluster
18:26 baber joined #gluster
18:32 baber joined #gluster
18:38 bartden joined #gluster
18:39 bartden hi, when a client mounts a distributed volume and uses node A as the mount endpoint, and a file is located on node B (a member of the distributed volume), will the client get the file via node A from B, or will A just direct it to B?
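For bartden's question: with the native FUSE client, the node named in the mount command is only used to fetch the volume layout (the volfile); after that the client connects to every brick directly, so node A does not proxy data stored on node B. A minimal sketch, with hypothetical hostnames nodeA/nodeB and volume name myvol:

    # nodeA only serves the volfile at mount time; file I/O then goes
    # straight from the client to whichever brick holds the data.
    mount -t glusterfs nodeA:/myvol /mnt/myvol

    # Optionally list a fallback volfile server in case nodeA is down at mount time
    mount -t glusterfs -o backup-volfile-servers=nodeB nodeA:/myvol /mnt/myvol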
18:56 jiffin1 joined #gluster
19:08 gyadav__ joined #gluster
19:13 gnulnx Could use a little help with compiling gluster on freebsd.  I'm getting: 'Makefile:90: *** missing separator. Stop.' when I run 'gmake'
19:15 jiffin joined #gluster
19:18 gnulnx Same with 'make'
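No answer to the FreeBSD build question appears later in this log. As a hedged note only: a 'missing separator' error from make usually points at a Makefile generated by a mismatched or interrupted autotools run (or a recipe line that lost its leading tab), and the usual remedy is to regenerate the build system from a clean tree, roughly:

    # Typical autotools sequence for a gluster source checkout on FreeBSD
    gmake distclean || true   # ignore failure if the Makefile is already broken
    ./autogen.sh
    ./configure
    gmake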
19:37 jkroon joined #gluster
19:55 shyam joined #gluster
20:17 janlam7 joined #gluster
20:20 k0nsl joined #gluster
20:20 k0nsl joined #gluster
20:36 farhorizon joined #gluster
20:52 nh2 can gluster distribute single folders across multiple bricks?
20:57 cloph no - distribution is per volume, not for some folders and not for others.
20:58 cloph if you mean whether it can split a huge set of files across multiple bricks: then yes, but then again that is not grouped by folder.
20:59 olisch1 joined #gluster
21:00 nh2 cloph: yes, that's what I mean. But it seems to fail for me. I made a 2x2 distribute-replicated volume, with each brick having 1 GB, so I have 2 GB total in my mount. I then tried to place 15 100MB-files into it, hoping that they would be distributed. But it failed with "No space left on device"
21:00 nh2 `df` shows that the mount still has 782M free space. But the one brick became full, and so it failed, and I don't understand why
21:01 cloph where it ends up depends on a hash of the pathname, so might not be uniform
21:02 cloph (for such a small sample at least)
21:03 nh2 cloph: it wrote all 10 out of 10 files onto the first replica set
21:03 cloph that certainly is unexpected, does it report all bricks to be up?
21:06 cloph @dht
21:06 glusterbot cloph: I do not know about 'dht', but I do know about these similar topics: 'dd'
21:06 nh2 cloph: yes, `gluster vol status` seems all normal
21:06 kkeithley short, similarly named files tend to hash to the same value and end up on the same brick.
21:06 cloph https://joejulian.name/blog/dht-misses-are-expensive/ has an explanation on how the dht stuff works
21:06 glusterbot Title: DHT misses are expensive (at joejulian.name)
21:08 nh2 cloph kkeithley: I now tried with 150 10-MB files. Here, it didn't fail with out-of-space, and succeeded, but it still wrote the first 97 files on the first brick
21:12 mallorn Do you have cluster.min-free-disk set on that volume?
21:15 cholcombe joined #gluster
21:25 nh2 mallorn: no
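For reference, cluster.min-free-disk is the DHT option mallorn asked about: once a brick's free space falls below the threshold, new files hashed to that brick are placed on a less-full brick instead, with a small pointer (linkto) file left behind. A minimal sketch of inspecting and setting it, assuming a hypothetical volume name myvol:

    # Show the current threshold (expressed as a percentage by default)
    gluster volume get myvol cluster.min-free-disk

    # Keep at least 10% free per brick before DHT steers new files elsewhere
    gluster volume set myvol cluster.min-free-disk 10%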
21:28 nh2 in https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Managing%20Volumes/#rebalancing-volumes it says:
21:28 nh2 "New directories created after expanding or shrinking of the volume will be evenly distributed automatically."
21:28 nh2 this sounds like there is some per-directory logic
21:28 glusterbot Title: Managing Volumes - Gluster Docs (at gluster.readthedocs.io)
21:37 Jacob843 joined #gluster
21:39 olisch joined #gluster
21:42 derjohn_mobi joined #gluster
22:03 nh2 cloph kkeithley mallorn: OK, a rebalance seems to have fixed it. It seems that after a volume remove/add, a rebalance is necessary so that single folder contents are distributed again. Is that expected?
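For completeness, the rebalance nh2 ran corresponds to the commands below; fix-layout only recalculates the directory hash ranges so that new files spread across added bricks, while a full rebalance also migrates existing files. A minimal sketch, assuming a hypothetical volume name myvol:

    # Recompute directory layouts only (fast; new writes start using all bricks)
    gluster volume rebalance myvol fix-layout start

    # Recompute layouts and also migrate existing files to match
    gluster volume rebalance myvol start

    # Watch progress
    gluster volume rebalance myvol status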
22:07 Karan joined #gluster
22:10 farhoriz_ joined #gluster
22:12 shyam joined #gluster
22:27 nh2 joined #gluster
22:40 olisch joined #gluster
23:13 shyam joined #gluster
23:25 cloph you mean after brick remove/add: yes, that's expected
23:46 plarsen joined #gluster
23:50 major joined #gluster
23:55 msvbhat joined #gluster
