
IRC log for #gluster, 2017-01-26


All times shown according to UTC.

Time Nick Message
00:02 bbooth joined #gluster
00:02 shutupsq_ joined #gluster
00:03 shutupsquare joined #gluster
00:33 Klas joined #gluster
00:39 shutupsq_ joined #gluster
00:51 shutupsquare joined #gluster
00:51 shutups__ joined #gluster
00:52 shutup___ joined #gluster
00:53 DV__ joined #gluster
01:20 jdossey joined #gluster
01:51 nh2_ joined #gluster
01:53 arpu joined #gluster
01:56 victori joined #gluster
02:20 flomko joined #gluster
02:20 l2___ joined #gluster
02:33 derjohn_mobi joined #gluster
02:34 cacasmacas joined #gluster
02:37 BlackoutWNCT1 joined #gluster
02:42 daMaestro joined #gluster
02:46 victori joined #gluster
02:48 ilbot3 joined #gluster
02:48 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
03:02 sage joined #gluster
03:32 BlackoutWNCT1 joined #gluster
03:49 riyas joined #gluster
03:59 mb_ joined #gluster
04:10 victori joined #gluster
04:24 nirokato JoeJulian: I just wanted to thank you again for the other day. You're a phenomenal person for supporting this chan as well as you do and you should not go underappreciated. Thank you.
04:25 JoeJulian Thanks, nirokato!
04:27 Lee1092 joined #gluster
04:28 nirokato You're welcome, you deserve it at more. I plan to read all of the documentation for GlusterFS and become an expert, or at the least a very competent enthusiast.
04:28 nirokato s/at/and/
04:28 glusterbot What nirokato meant to say was: You're welcome, you deserve it and more. I plan to read all of the documentation for GlusterFS and become an expert, or at the least a very competent enthusiast.
04:41 bwerthmann joined #gluster
04:51 msvbhat joined #gluster
04:59 plarsen joined #gluster
05:00 nirokato Does anyone know where to send correction suggestions for Gluster documentation? https://gluster.readthedocs.io/en/latest/Install-Guide/Overview/ "file sizes at of least 16KB" should be "files sizes of at least 16K"
05:00 glusterbot Title: Overview - Gluster Docs (at gluster.readthedocs.io)
05:00 victori joined #gluster
05:07 shyam joined #gluster
05:14 gyadav joined #gluster
05:16 Vaelatern joined #gluster
05:35 msvbhat joined #gluster
05:39 skoduri joined #gluster
05:39 victori joined #gluster
05:42 riyas joined #gluster
05:47 victori joined #gluster
05:51 jtux joined #gluster
06:03 MikeLupe joined #gluster
06:04 the-me joined #gluster
06:06 victori joined #gluster
06:07 msvbhat joined #gluster
06:12 nthomas_ joined #gluster
06:32 nbalacha joined #gluster
06:33 victori joined #gluster
06:40 msvbhat joined #gluster
06:44 victori joined #gluster
06:47 sbulage joined #gluster
06:48 [diablo] joined #gluster
07:17 jkroon joined #gluster
07:18 susant joined #gluster
07:26 mhulsman joined #gluster
07:28 mhulsman joined #gluster
07:33 mhulsman joined #gluster
07:40 shutupsquare joined #gluster
07:46 buvanesh_kumar joined #gluster
07:47 rwheeler joined #gluster
07:55 tdasilva joined #gluster
07:56 Guest89004 joined #gluster
08:01 ivan_rossi joined #gluster
08:07 TvL2386 joined #gluster
08:12 victori joined #gluster
08:20 susant joined #gluster
08:26 alezzandro joined #gluster
08:29 tdasilva joined #gluster
08:31 jri joined #gluster
08:32 jri joined #gluster
08:33 susant left #gluster
08:34 victori joined #gluster
08:44 fsimonce joined #gluster
08:46 nthomas_ joined #gluster
08:55 mbukatov joined #gluster
08:57 ahino joined #gluster
09:04 armyriad joined #gluster
09:05 msvbhat joined #gluster
09:06 victori joined #gluster
09:13 Marbug joined #gluster
09:15 flying joined #gluster
09:15 mhulsman joined #gluster
09:19 Ashutto joined #gluster
09:19 ccowley joined #gluster
09:19 susant joined #gluster
09:19 susant left #gluster
09:22 victori joined #gluster
09:22 pulli joined #gluster
09:29 victori joined #gluster
09:34 Ashutto Hello, I'm seeing strange behavior with gluster. I created 2 different setups on the same hardware: (1) 3 nodes, replica 3, gluster 3.8.8, and (2) 5 nodes, replica 2 + 1 arbiter, distributed. It seems that the distributed setup performs way worse than the replica. Is there a repository of commonly used settings to optimize the distributed setup? thanks in advance
09:44 mhulsman joined #gluster
09:45 shutupsquare joined #gluster
09:48 rjoseph joined #gluster
09:49 victori joined #gluster
10:03 victori joined #gluster
10:10 victori joined #gluster
10:13 NuxRo Hi guys, we're planning a new gluster deployment. Would 3.9 be a safe choice? We're planning to get on to the 3.10 long term release when that happens.
10:14 derjohn_mob joined #gluster
10:15 ndevos NuxRo: yes, as long as you are aware of the short life of 3.9, you're good
10:20 NuxRo ndevos: yes, aware, but as we plan to jump on 3.10 when available, I'd imagine that'd be less of an effort from 3.9 rather than 3.8, am I right?
10:20 ndevos NuxRo: the difference between moving from 3.8 to 3.10 or from 3.9 to 3.10 should be pretty much the same, I'm not aware of any special steps for either
10:25 ccowley Hi all, I was wondering about the granularity of tiering. Is it just at a file level? Or can it be a sub-file. Basically, we use our cluster for storing VM images
10:26 NuxRo ndevos: thanks for that. One more for you: do you know if current nfs/samba gluster services are capable of reporting the size to the clients as being actually whatever the volume quota is? We run old 3.4 and the integrated nfs server has this feature which has been very handy (clients can see how much quota left they have with a simple "df -h")
10:29 ndevos NuxRo: I think that can be done with some quota-related option, but I can't remember the name of the option - check with 'gluster volume get <volume> all' on recent versions
10:29 NuxRo ndevos: roger, thanks!
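The option ndevos could not recall above is most likely features.quota-deem-statfs, which makes clients see the quota limit as the filesystem size in df. A minimal sketch, assuming a hypothetical volume "myvol" with a 100GB quota; confirm the option name with 'gluster volume get myvol all' as suggested:
    gluster volume quota myvol enable
    gluster volume quota myvol limit-usage / 100GB
    gluster volume set myvol features.quota-deem-statfs on
    # a client mounting myvol should now see the 100GB quota as the volume size in 'df -h'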
10:30 ndevos ccowley: yes, it can do that, if you have enabled sharding for the volume (and created the VM images after that)
10:31 ndevos ccowley: not sure how effective it is, maybe someone on the gluster-users list has experience with it
10:31 ccowley ndevos: ah well, the sharding may or may not be something I knew existed :-)
10:31 ndevos :)
10:32 ccowley so I presume with (pretty much the defaults) and ~50 VM images then if I were to enable tiering in Gluster, it would try and put entire disk images on the SSDs
10:32 ndevos yes, that is correct
10:34 shutupsq_ joined #gluster
10:34 ndevos and migrating large files between hot/cold tiers will consume time+bandwidth, maybe the migration logic even prevents moving the files altogether if they're in active use
10:34 victori joined #gluster
10:46 buvanesh_kumar joined #gluster
10:51 ccowley ndevos: Is it possible to activate sharding then re-balance everything so my VM images get broken apart (bear in mind I am on 3 nodes, with replica 3)
10:55 ndevos ccowley: no, I do not think so, sharding needs to be enabled before the creation of the VM image, you would need to copy the image and the new image will be sharded (I assume)
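A rough sketch of the sharding setup ndevos describes, assuming a hypothetical volume "vmstore"; the option names are the standard shard settings, but the 64MB block size is just a common choice for VM images, not something stated in this discussion:
    gluster volume set vmstore features.shard on
    gluster volume set vmstore features.shard-block-size 64MB
    # files written before sharding was enabled stay whole; rewriting an image
    # (e.g. copying it and replacing the original) produces a sharded copy
    cp vm1.qcow2 vm1.qcow2.sharded && mv vm1.qcow2.sharded vm1.qcow2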
10:58 musa22 joined #gluster
11:01 TBlaar joined #gluster
11:04 victori joined #gluster
11:05 susant joined #gluster
11:06 ccowley ndevos: oh well, LVM-cache it is then
11:06 ndevos ccowley: yes, that is a good middle ground if you have SSDs on all servers hosting bricks
11:08 Seth_Karlo joined #gluster
11:09 Seth_Karlo joined #gluster
11:10 ccowley ndevos: that is the plan. The bricks are all on LVM already too as I had this in mind when I told $boss that a single 7.2k disk per host would be insufficient :-)
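A sketch of the LVM-cache approach ccowley settles on, using invented names (VG "vg_bricks", brick LV "brick1", SSD at /dev/sdb); sizes and device paths would differ per host:
    pvcreate /dev/sdb
    vgextend vg_bricks /dev/sdb
    # carve a cache pool out of the SSD and attach it to the brick LV
    lvcreate --type cache-pool -L 100G -n brick1_cache vg_bricks /dev/sdb
    lvconvert --type cache --cachepool vg_bricks/brick1_cache vg_bricks/brick1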
11:16 msvbhat joined #gluster
11:22 Humble joined #gluster
11:28 victori joined #gluster
11:56 jvandewege joined #gluster
12:01 kettlewell joined #gluster
12:06 victori joined #gluster
12:08 mhulsman1 joined #gluster
12:11 jtux joined #gluster
12:33 mhulsman joined #gluster
12:34 nishanth joined #gluster
12:38 musa22 joined #gluster
12:38 mhulsman1 joined #gluster
12:41 fcoelho joined #gluster
12:42 victori joined #gluster
12:51 mhulsman joined #gluster
13:03 mhulsman joined #gluster
13:04 mhulsman joined #gluster
13:07 decay joined #gluster
13:10 ashka joined #gluster
13:10 ashka joined #gluster
13:18 coredumb joined #gluster
13:29 rwheeler joined #gluster
13:30 jri joined #gluster
13:37 unclemarc joined #gluster
13:39 gyadav joined #gluster
13:39 mhulsman1 joined #gluster
13:46 shyam joined #gluster
13:46 mhulsman joined #gluster
13:47 flyingX joined #gluster
14:02 msvbhat joined #gluster
14:04 ira joined #gluster
14:09 cloph joined #gluster
14:10 mhulsman joined #gluster
14:12 jri joined #gluster
14:19 msvbhat joined #gluster
14:23 plarsen joined #gluster
14:26 shaunm joined #gluster
14:28 FatboySlim joined #gluster
14:29 nh2_ joined #gluster
14:33 B21956 joined #gluster
14:40 skylar joined #gluster
14:51 skoduri joined #gluster
15:03 jdossey joined #gluster
15:28 jarbod joined #gluster
15:32 alvinstarr joined #gluster
15:32 pulli joined #gluster
15:40 squizzi joined #gluster
15:41 nh2_ joined #gluster
15:43 shaunm joined #gluster
15:46 victori joined #gluster
15:57 abyss^ joined #gluster
16:00 skoduri joined #gluster
16:05 derjohn_mob joined #gluster
16:14 susant left #gluster
16:14 farhorizon joined #gluster
16:15 farhorizon joined #gluster
16:16 TFJensen How can I inspect this: "pvs
16:16 TFJensen WARNING: /dev/gluster/data: Thin's thin-pool needs inspection.
16:16 TFJensen WARNING: /dev/gluster/export: Thin's thin-pool needs inspection.
16:16 TFJensen WARNING: /dev/gluster/iso: Thin's thin-pool needs inspection.
16:16 TFJensen PV         VG      Fmt  Attr PSize   PFree
16:16 victori joined #gluster
16:19 nh2_ joined #gluster
16:26 bowhunter joined #gluster
16:32 ashiq joined #gluster
16:45 victori joined #gluster
17:00 JoeJulian from the lvm2 source: if (status->read_only || status->out_of_data_space) { log_warn("WARNING: %s: Thin's thin-pool needs inspection." ...
17:01 JoeJulian So that would imply that those pools are either read_only or out of data space.
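One way to check which of the two conditions applies, as a sketch; the VG name "gluster" comes from the warnings above, the thin-pool LV names will differ per setup:
    # Data% or Meta% near 100 means the pool is out of space; an 'r' permission
    # bit in the Attr field means the volume has gone read-only
    lvs -a -o lv_name,lv_attr,lv_size,data_percent,metadata_percent gluster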
17:01 timotheus1_ joined #gluster
17:04 jdossey joined #gluster
17:06 victori joined #gluster
17:20 msvbhat joined #gluster
17:24 bowhunter joined #gluster
17:26 TFJensen So that is most likely the reason my gluster volumes are out of sync.
17:26 JoeJulian Seems probable.
17:26 TFJensen How can I fix that?
17:27 JoeJulian I've never used thin lvm
17:28 TFJensen Can I delete those lvms and recreate them without being thin
17:28 TFJensen I could make a backup of whats on the volumes
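If the pools merely ran out of space, growing them may be less disruptive than rebuilding without thin provisioning; a sketch, assuming the thin pool is named gluster/thinpool (hypothetical) and the VG still has free extents:
    lvextend -L +50G gluster/thinpool                 # grow the pool's data area
    lvextend --poolmetadatasize +1G gluster/thinpool  # grow its metadata if Meta% is high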
17:32 Seth_Kar_ joined #gluster
17:38 cacasmacas joined #gluster
17:58 rafi1 joined #gluster
17:58 farhoriz_ joined #gluster
17:59 kotreshhr joined #gluster
18:02 jri_ joined #gluster
18:02 derjohn_mob joined #gluster
18:07 jdossey joined #gluster
18:41 jkroon joined #gluster
18:42 derjohn_mob joined #gluster
18:43 mhulsman joined #gluster
18:47 shutupsquare joined #gluster
18:50 ksandha_ joined #gluster
18:54 msvbhat joined #gluster
19:01 Philambdo joined #gluster
19:16 NuxRo JoeJulian: what are the other reasons gluster shouldn't be used for SaaS? re https://joejulian.name/blog/one-more-reason-that-glusterfs-should-not-be-used-as-a-saas-offering/
19:16 glusterbot Title: One more reason that GlusterFS should not be used as a SaaS offering (at joejulian.name)
19:19 JoeJulian Off the top of my head: no at-rest encryption, no per-volume authentication, services running as root, no uid/gid mapping
19:19 glusterbot JoeJulian: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
19:19 JoeJulian bite me glusterbot. :P
19:19 jiffin joined #gluster
19:20 raghu joined #gluster
19:20 NuxRo roger, thanks
19:20 derjohn_mob joined #gluster
19:21 kotreshhr left #gluster
19:46 NuxRo I'm coming from old 3.4 version, is 3.8 not exporting nfs by default any more?
19:47 Acinonyx joined #gluster
19:52 derjohn_mob joined #gluster
19:53 NuxRo rpcinfo on the server shows nothing on 2049, nfs.disable on the vol is off, start force didn't do anything, no nfs procs visible
19:53 NuxRo firewall disabled
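A plausible checklist for getting gNFS up on 3.8, where nfs.disable defaults to on for newly created volumes; "myvol" is a placeholder and the exact cause of NuxRo's issue is not confirmed here:
    gluster volume set myvol nfs.disable off
    systemctl start rpcbind          # gNFS registers with rpcbind before listening on 2049
    systemctl restart glusterd       # respawns the gluster NFS server process
    gluster volume status myvol      # should show an "NFS Server on localhost" line with a port
    # if the NFS server still is not listed, /var/log/glusterfs/nfs.log usually says why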
19:54 jwd joined #gluster
19:55 kpease joined #gluster
19:57 PatNarciso I'm pricing out commodity hardware this afternoon.  Just curious, is anyone running a gluster on USB3 storage?
20:00 bowhunter joined #gluster
20:21 jwd joined #gluster
20:36 susant joined #gluster
20:40 bowhunter joined #gluster
20:45 nnnnick joined #gluster
20:46 nnnnick Alright #gluster. Riddle me this... I have a 2 node cluster distributed-replicated. I took one node down for service and it became disconnected from the pool. I currently give 0 fucks about retaining data, there was nothing important there anyway. All I care about is getting fucking gluster to actually work again. I can do literally
20:47 nnnnick nothing*
20:47 nnnnick can't delete volumes, can't delete bricks, can't detach the dead server from the pool, fucking nothing. it's bullshit. i thought gluster was a lot more user friendly than this.
20:59 jiffin joined #gluster
20:59 nnnnick yay irc is as fucking useless as gluster docs
20:59 nnnnick jesus fucking christ
21:02 JoeJulian Wow, someone has a problem with patience.
21:05 vbellur :)]
21:08 JoeJulian @later tell nnnnick Patience is a virtue. If you had still been around after I'd returned from lunch, I would have helped you. If you'd kept up the bad attitude, I would have stopped and gone back to my day job.
21:08 glusterbot JoeJulian: The operation succeeded.
21:19 raghu joined #gluster
21:39 ahino joined #gluster
22:06 shyam joined #gluster
22:14 derjohn_mob joined #gluster
22:22 farhorizon joined #gluster
22:24 musa22 joined #gluster
22:35 jbrooks joined #gluster
22:39 niknakpaddywak joined #gluster
22:53 raghu joined #gluster
23:01 TFJensen joined #gluster
23:14 jdossey joined #gluster
23:26 jbrooks joined #gluster
23:32 buvanesh_kumar joined #gluster
23:37 pulli joined #gluster
23:55 nh2_ hi, does Gluster provide a feature that allows me to ensure that replicas are distributed into different geographical failure locations?
23:55 nh2_ e.g. if I have an AWS cluster with 6 machines across 2 availability zones, and configured replica-3, it could happen that all 3 replicas land in the same availability zone
23:56 nh2_ and then when the AZ goes down, my storage is gone
23:56 JoeJulian You would want to configure that using geo-replication, which is unidirectional. No reasonably responsive bidirectional replication exists for high latency.
23:57 nh2_ JoeJulian: availability zones have low latency (e.g. 0.3 ms) -- they are data centers geographically close to each other, but with completely different power/internet supply
23:58 nh2_ JoeJulian: and you _need_ to use more than 1 AZ if you want any kind of HA guarantee, thus the gluster volume _needs_ to have its replicas across AZs, and I cannot use geo-replication
23:58 jbrooks joined #gluster
23:58 JoeJulian If that's sufficient for your use case, you would normally assign bricks to satisfy your replication needs anyway. ,,(brick-order)
23:58 glusterbot I do not know about 'brick-order', but I do know about these similar topics: 'brick order'
23:58 JoeJulian @brick order
23:58 glusterbot JoeJulian: Replicas are defined in the order bricks are listed in the volume create command. So gluster volume create myvol replica 2 server1:/data/brick1 server2:/data/brick1 server3:/data/brick1 server4:/data/brick1 will replicate between server1 and server2 and replicate between server3 and server4.
23:59 nh2_ JoeJulian: ah, so the order of bricks matters? That is great to know
23:59 JoeJulian +1
23:59 nh2_ JoeJulian: on which page can I read more about this? I assume that if glusterbot knows about it, it's documented?
23:59 JoeJulian I'm sure it is, but glusterbot is just loaded with faq's from here.
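Building on glusterbot's point about brick order, a hypothetical layout for nh2_'s case (6 servers, 2 AZs, replica 3); the hostnames are invented, and with only 2 AZs one of them always holds 2 of the 3 copies, so losing that AZ can still cost write quorum for the affected replica set:
    # az1-a, az1-b, az1-c are in AZ 1; az2-a, az2-b, az2-c are in AZ 2
    # consecutive groups of 3 bricks form replica sets, so each set below spans both AZs
    gluster volume create myvol replica 3 \
        az1-a:/data/brick1 az1-b:/data/brick1 az2-a:/data/brick1 \
        az2-b:/data/brick1 az2-c:/data/brick1 az1-c:/data/brick1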
