
IRC log for #gluster, 2017-07-12


All times shown according to UTC.

Time Nick Message
00:00 shyam joined #gluster
00:13 arpu joined #gluster
00:33 victori joined #gluster
00:51 plarsen joined #gluster
01:14 masuberu joined #gluster
01:41 pioto joined #gluster
01:49 ilbot3 joined #gluster
01:49 Topic for #gluster is now Gluster Community - https://www.gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - https://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:57 masber joined #gluster
02:06 mk-fg joined #gluster
02:06 mk-fg joined #gluster
02:13 Saravanakmr joined #gluster
02:14 gyadav joined #gluster
02:47 sanoj joined #gluster
03:43 Gambit15 joined #gluster
03:43 itisravi joined #gluster
03:43 Gambit15_ joined #gluster
03:43 riyas joined #gluster
03:45 psony joined #gluster
03:50 prasanth joined #gluster
03:52 mbukatov joined #gluster
03:52 ppai joined #gluster
03:58 winrhelx joined #gluster
04:05 atinm joined #gluster
04:11 karthik_us joined #gluster
04:19 Shu6h3ndu joined #gluster
04:20 sahina joined #gluster
04:28 kramdoss_ joined #gluster
04:32 Gambit15 joined #gluster
04:32 Gambit15_ joined #gluster
04:34 dominicpg joined #gluster
04:35 Alghost joined #gluster
04:36 Saravanakmr joined #gluster
04:39 ndarshan joined #gluster
04:40 kotreshhr joined #gluster
04:41 gyadav joined #gluster
04:44 Shu6h3ndu joined #gluster
04:44 buvanesh_kumar joined #gluster
04:55 skumar joined #gluster
04:58 Shu6h3ndu_ joined #gluster
05:01 Gambit15 joined #gluster
05:01 Gambit15_ joined #gluster
05:06 gyadav_ joined #gluster
05:07 ndarshan joined #gluster
05:16 susant joined #gluster
05:27 Saravanakmr joined #gluster
05:28 amarts joined #gluster
05:29 DV joined #gluster
05:29 skoduri joined #gluster
05:29 nbalacha joined #gluster
05:31 DV joined #gluster
05:35 prasanth joined #gluster
05:36 jiffin joined #gluster
05:42 gyadav__ joined #gluster
05:48 apandey joined #gluster
05:49 _ndevos joined #gluster
05:49 _ndevos joined #gluster
05:58 Humble joined #gluster
06:00 ankitr joined #gluster
06:03 Prasad joined #gluster
06:05 amarts joined #gluster
06:08 Karan joined #gluster
06:12 kdhananjay joined #gluster
06:14 rastar joined #gluster
06:15 hgowtham joined #gluster
06:16 sona joined #gluster
06:16 Shu6h3ndu joined #gluster
06:18 msvbhat joined #gluster
06:20 Karan joined #gluster
06:23 rafi joined #gluster
06:26 winrhelx joined #gluster
06:44 atinm joined #gluster
06:48 Acinonyx joined #gluster
06:49 skoduri joined #gluster
06:50 kramdoss_ joined #gluster
06:59 TBlaar joined #gluster
07:16 ivan_rossi joined #gluster
07:17 p7mo joined #gluster
07:17 kpease_ joined #gluster
07:18 mb_ joined #gluster
07:19 JGS joined #gluster
07:23 fsimonce joined #gluster
07:31 apandey_ joined #gluster
07:48 ndarshan joined #gluster
07:55 apandey__ joined #gluster
07:56 MadPsy joined #gluster
07:56 MadPsy joined #gluster
08:00 skoduri joined #gluster
08:06 kramdoss_ joined #gluster
08:07 Saravanakmr_ joined #gluster
08:11 ashiq joined #gluster
08:11 ahino joined #gluster
08:14 Shu6h3ndu joined #gluster
08:30 ahino joined #gluster
08:32 kramdoss_ joined #gluster
08:33 kpease joined #gluster
08:42 sona joined #gluster
08:47 hgowtham joined #gluster
08:52 ndarshan joined #gluster
08:55 atinm joined #gluster
09:05 amarts joined #gluster
09:06 msvbhat joined #gluster
09:13 Wizek_ joined #gluster
09:14 jkroon joined #gluster
09:14 rafi joined #gluster
09:24 msvbhat joined #gluster
09:26 MrAbaddon joined #gluster
09:40 susant joined #gluster
09:43 amarts joined #gluster
09:44 ndarshan joined #gluster
09:51 askz Hi
09:51 glusterbot askz: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
09:51 askz @paste
09:51 glusterbot askz: For a simple way to paste output, install netcat (if it's not already) and pipe your output like: | nc termbin.com 9999
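For reference, a minimal sketch of that paste workflow (the volume name is illustrative; the gluster CLI is assumed to be installed on the node):
    # send command output to termbin and get back a short URL to share in the channel
    gluster volume info myvol | nc termbin.com 9999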
09:56 legreffier joined #gluster
09:56 Wizek__ joined #gluster
09:57 amarts joined #gluster
10:06 DV joined #gluster
10:33 Robin_ joined #gluster
10:33 Robin_ Hello
10:33 glusterbot Robin_: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
10:40 nbalacha joined #gluster
10:42 buvanesh_kumar joined #gluster
11:12 Wizek__ joined #gluster
11:15 aronnax joined #gluster
11:27 cloph_away joined #gluster
11:33 cloph_away joined #gluster
11:38 pioto joined #gluster
11:39 amarts joined #gluster
11:40 baber joined #gluster
11:43 ndarshan joined #gluster
11:50 skumar joined #gluster
11:50 kramdoss_ joined #gluster
11:56 Shu6h3ndu joined #gluster
12:07 atinm joined #gluster
12:07 nxx joined #gluster
12:12 nxx Hi! Am I missing something about authentication for gluster volume access? If I want to restrict access, must I use TLS?
12:16 Wizek_ joined #gluster
12:17 aronnax joined #gluster
12:18 kramdoss_ joined #gluster
12:18 nxx If a client is able to connect to a gluster server and has default read access to e.g. server/gluster/foo (ACL: # owner: root, other::r-x), a root user on the client is able to write.
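For context, the non-TLS controls usually pointed to for this question are IP-based filtering and root squashing; a hedged sketch (VOLNAME and the address range are placeholders), with the caveat that IP filtering is not cryptographic authentication, which is why TLS is recommended when real authentication is needed:
    # allow mounts only from a trusted network (address-based filtering, not authentication)
    gluster volume set VOLNAME auth.allow 192.168.10.*
    # map root on clients to an anonymous uid so a remote root cannot bypass permissions
    gluster volume set VOLNAME server.root-squash on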
12:19 leni1 joined #gluster
12:22 Hasitha joined #gluster
12:24 Hasitha Hi Guys, is there a script or any way to resolve split-brain automatically in GlusterFS 3.7.11?
12:28 ndarshan joined #gluster
12:34 shyam joined #gluster
12:40 BlackoutWNCT1 joined #gluster
12:40 vbellur joined #gluster
12:40 cliluw joined #gluster
12:40 icey_ joined #gluster
12:41 DV__ joined #gluster
12:41 snehring_ joined #gluster
12:41 zerick joined #gluster
12:42 vbellur joined #gluster
12:42 atrius joined #gluster
12:46 decayofmind joined #gluster
12:47 Urania joined #gluster
12:50 jbrooks joined #gluster
12:53 skylar joined #gluster
12:57 ahino1 joined #gluster
12:57 shyam joined #gluster
12:58 vbellur joined #gluster
12:58 vbellur joined #gluster
12:59 skumar joined #gluster
12:59 vbellur joined #gluster
13:00 vbellur joined #gluster
13:00 vbellur joined #gluster
13:01 kotreshhr left #gluster
13:01 vbellur joined #gluster
13:01 vbellur joined #gluster
13:02 shyam left #gluster
13:02 vbellur joined #gluster
13:06 msvbhat joined #gluster
13:24 marlinc_ joined #gluster
13:26 siel joined #gluster
13:26 siel joined #gluster
13:27 Vaelatern joined #gluster
13:28 msvbhat joined #gluster
13:33 Hamburglr joined #gluster
13:37 ic0n joined #gluster
13:56 legreffier joined #gluster
13:56 Hamburglr joined #gluster
14:07 ahino joined #gluster
14:13 nbalacha joined #gluster
14:16 nick_g nbalacha: Hi ;) do you have any idea why a rebalance (on a distributed volume) says "in progress" under status (in the output of gluster volume rebalance VOLUME-NAME status) but at the same time there is NO "Estimated time left for rebalance to complete" in the output of the status command?
14:18 loadtheacc joined #gluster
14:18 gyadav__ joined #gluster
14:20 baber joined #gluster
14:21 pioto joined #gluster
14:26 msvbhat joined #gluster
14:27 ahino joined #gluster
14:42 farhorizon joined #gluster
14:45 winrhelx joined #gluster
14:53 kpease joined #gluster
14:59 cholcombe joined #gluster
15:00 nbalacha joined #gluster
15:00 nxx left #gluster
15:19 mlhess joined #gluster
15:28 mb_ joined #gluster
15:28 primehaxor joined #gluster
15:37 baber joined #gluster
15:37 Hamburglr joined #gluster
15:54 juhaj joined #gluster
15:56 juhaj Hi folks. I've been asked to provide a cheaper alternative to a 300 TB infiniband lustre setup using some existing storage elements we have… My thoughts were immediately with glusterfs.
15:59 juhaj We already have a system we could use with 14*4TB discs (RAID6, so just 48 TB usable) and I was thinking of adding similar-size "modules" to this, but my question is: how does glusterfs deal with bricks of different sizes? I would be striping across bricks but not replicating, as this is just a cheap data dump target for post-processing results and not intended to be highly available or long-term storage
16:01 gyadav__ joined #gluster
16:16 jstrunk joined #gluster
16:18 primehaxor joined #gluster
16:25 JoeJulian juhaj: different sizes... not all that efficiently. It'll work but they'll fill at the same rate. Once full, I'm not sure what distribute will do, to be honest.
16:25 JoeJulian juhaj: btw, don't use stripe. It's deprecated. Use distribute.
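As an illustration of the distribute-only layout being discussed (server names and brick paths are hypothetical):
    # a volume created without 'replica' or 'stripe' keywords is plain distribute
    gluster volume create scratch server1:/bricks/b1/data server2:/bricks/b2/data
    gluster volume start scratch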
16:26 JoeJulian nick_g: no clue. I would probably do a state dump and see if there are any clues there.
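A sketch of the state-dump suggestion (the volume name is a placeholder; dumps typically land under /var/run/gluster unless server.statedump-path is changed):
    gluster volume statedump myvol          # dump brick/process state for inspection
    gluster volume rebalance myvol status   # the status output being discussed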
16:26 JoeJulian Hasitha: There are built-in settings for that. See 'gluster volume set help'
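A hedged example of the built-in split-brain settings being referred to (the options may not exist in a release as old as 3.7.11; VOLNAME and the file path are placeholders):
    # prefer the copy with the newest mtime automatically on future split-brains
    gluster volume set VOLNAME cluster.favorite-child-policy mtime
    # or resolve an existing split-brain file by hand
    gluster volume heal VOLNAME split-brain latest-mtime /path/inside/volume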
16:28 baber joined #gluster
16:30 juhaj JoeJulian: Ok. So better make bricks same size then. That's unfortunate as it means repurposing existing hw is not going to be useful
16:30 JoeJulian Meh, you can always break them into partitions of the same size and just have multiple bricks on one disk.
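A rough sketch of that partitioning approach (device name, sizes, and mount points are hypothetical: two equal partitions on one RAID device, each used as a separate brick):
    parted -s /dev/sdb mklabel gpt
    parted -s /dev/sdb mkpart brick1 xfs 0% 50%
    parted -s /dev/sdb mkpart brick2 xfs 50% 100%
    mkfs.xfs /dev/sdb1 && mkfs.xfs /dev/sdb2
    mkdir -p /bricks/b1 /bricks/b2
    mount /dev/sdb1 /bricks/b1 && mount /dev/sdb2 /bricks/b2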
16:31 juhaj Hmm.. otoh, 6*8 TB raid*5* bricks would give the same size as the to-be-repurposed 14*4TB raid6 does
16:31 JoeJulian Welcome to the outside of the box. It's roomier here. ;)
16:31 juhaj That's a weird brick size when it comes to # of discs and raid level, but is there anything else wrong with it?
16:31 JoeJulian Nope
16:32 juhaj Hmm... that partitioning idea is good as well.
16:33 juhaj So buy a fricking 96-disc monster enclosure, fill with 8TB discs in 8-disc raid6-blocks or something
16:34 JoeJulian Then watch in horror as your sas expander starts to fail and your 96 disks start dropping off one by one from the spof.
16:34 JoeJulian Not that that's ever happened to me.
16:35 * JoeJulian looks innocent.
16:36 farhorizon joined #gluster
16:37 juhaj Yes, that is always a possibility, but fortunately this is "expendable" storage, and second, if I ran 12 separate raid arrays they would not mind each other's failures; unless the controller/expander started to corrupt data, it would simply be a matter of re-adding discs (been there, done that)
16:46 shaunm joined #gluster
16:53 farhorizon joined #gluster
16:54 msvbhat joined #gluster
17:00 farhorizon joined #gluster
17:00 ivan_rossi left #gluster
17:07 baber joined #gluster
17:24 purpleidea joined #gluster
17:24 purpleidea joined #gluster
17:30 ankitr joined #gluster
17:32 jstrunk joined #gluster
17:36 dgandhi joined #gluster
17:44 ankitr joined #gluster
17:45 skoduri joined #gluster
17:50 major bear in mind that even a robust file system (zfs/btrfs/xfs) can have a hell of a time if you use partition tables instead of whole devices and the partition table becomes corrupt :(
17:51 major just had an associate run into that one last week :P
17:52 sona joined #gluster
18:02 farhorizon joined #gluster
18:15 Shu6h3ndu joined #gluster
18:27 baber joined #gluster
18:49 ahino joined #gluster
19:17 fassl joined #gluster
19:19 fassl_ joined #gluster
19:20 fassl_ hello, why is creating a volume on a root filesystem discouraged?
19:26 ic0n joined #gluster
19:29 jiffin joined #gluster
19:29 farhorizon joined #gluster
19:32 lazyy joined #gluster
19:34 lazyy Hi. Whenever I try to install Gluster 3.6 from here (https://download.gluster.org/pub/gluster/glusterfs/3.6/3.6.9/Debian/9/apt/dists/stretch/main/binary-amd64/ ) - the installed package says I actually have 3.8! Any ideas?
19:34 glusterbot Title: Index of /pub/gluster/glusterfs/3.6/3.6.9/Debian/9/apt/dists/stretch/main/binary-amd64 (at download.gluster.org)
19:36 ic0n joined #gluster
19:53 baber joined #gluster
20:07 kharloss joined #gluster
20:08 vbellur1 joined #gluster
20:09 lazyy I half-answered my own question. I only pinned the version of glusterfs-client, and not glusterfs-common. Now to fight other dependency issues :)
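A hedged sketch of pinning the whole package set rather than just the client (the preferences file name and package list are illustrative; apt_preferences accepts wildcarded versions):
    # pin client, common and server packages to the 3.6 series
    printf '%s\n' \
      'Package: glusterfs-client glusterfs-common glusterfs-server' \
      'Pin: version 3.6.*' \
      'Pin-Priority: 1001' > /etc/apt/preferences.d/glusterfs
    apt-get update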
20:10 vbellur joined #gluster
20:11 vbellur joined #gluster
20:12 vbellur joined #gluster
20:13 vbellur joined #gluster
20:17 plarsen joined #gluster
20:22 jiffin joined #gluster
20:23 jiffin joined #gluster
20:29 kharloss joined #gluster
20:32 Guest83 joined #gluster
20:33 Guest83 Hi guys, I have a setup running 3.6.5 and a distributed volume. It looks like since a few weeks ago the distribution has stopped and all files are placed on the same brick. What could I check?
20:59 jstrunk joined #gluster
21:00 farhorizon joined #gluster
21:12 shyam joined #gluster
21:18 kpease joined #gluster
21:27 kpease joined #gluster
21:36 kharloss_ joined #gluster
21:52 MrAbaddon joined #gluster
22:03 kharloss joined #gluster
22:19 Wizek_ joined #gluster
22:32 primehaxor joined #gluster
22:35 midacts joined #gluster
22:35 Gambit15 joined #gluster
22:38 atrius joined #gluster
22:42 raginbajin joined #gluster
23:01 jstrunk joined #gluster
23:07 Wizek_ joined #gluster
23:13 Alghost joined #gluster
23:15 kharloss joined #gluster
23:37 shyam joined #gluster
23:58 Alghost joined #gluster
23:59 vbellur joined #gluster
