
IRC log for #gluster, 2017-06-13


All times shown according to UTC.

Time Nick Message
00:07 Alghost joined #gluster
00:58 Alghost joined #gluster
01:11 cliluw joined #gluster
01:25 Alghost joined #gluster
01:28 cliluw joined #gluster
01:33 shdeng joined #gluster
01:42 glisignoli joined #gluster
01:46 glisignoli Hello, is there a place that details all the gluster volume options?
01:46 glisignoli My google foo isn't turning up anything
01:48 ilbot3 joined #gluster
01:48 Topic for #gluster is now Gluster Community - https://www.gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - https://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:21 Sense8 joined #gluster
02:23 skoduri joined #gluster
02:24 ankitr joined #gluster
02:29 Alghost joined #gluster
02:29 Alghost joined #gluster
02:35 kpease joined #gluster
02:48 ppai joined #gluster
02:49 kpease joined #gluster
03:37 riyas joined #gluster
03:38 Alghost joined #gluster
03:56 susant joined #gluster
03:57 susant left #gluster
04:04 aravindavk joined #gluster
04:04 vbellur joined #gluster
04:12 atinm joined #gluster
04:13 toloughl joined #gluster
04:19 ndarshan joined #gluster
04:20 ashiq joined #gluster
04:23 kramdoss_ joined #gluster
04:24 sona joined #gluster
04:29 buvanesh_kumar joined #gluster
04:32 vbellur joined #gluster
04:33 vbellur joined #gluster
04:34 vbellur joined #gluster
04:34 Karan joined #gluster
04:45 ankitr joined #gluster
04:50 Shu6h3ndu joined #gluster
04:52 Alghost joined #gluster
04:53 vbellur joined #gluster
04:53 kdhananjay joined #gluster
04:59 nbalacha joined #gluster
05:00 skumar joined #gluster
05:02 aravindavk joined #gluster
05:03 karthik_us joined #gluster
05:07 prasanth joined #gluster
05:12 kotreshhr joined #gluster
05:12 toloughl joined #gluster
05:21 Saravanakmr joined #gluster
05:21 rastar joined #gluster
05:24 kramdoss_ joined #gluster
05:33 gyadav joined #gluster
05:33 nbalacha joined #gluster
05:37 apandey joined #gluster
05:44 ankitr joined #gluster
05:49 hgowtham joined #gluster
05:49 Prasad joined #gluster
05:56 bkunal|afk joined #gluster
06:01 prasanth joined #gluster
06:04 bios_l_ joined #gluster
06:06 sahina joined #gluster
06:13 rafi joined #gluster
06:13 bios_l_ joined #gluster
06:20 telius joined #gluster
06:24 Alghost joined #gluster
06:25 [diablo] joined #gluster
06:26 mbukatov joined #gluster
06:32 XpineX joined #gluster
06:34 jtux joined #gluster
06:43 rafi joined #gluster
06:48 Alghost joined #gluster
06:48 Wizek_ joined #gluster
06:50 timg__ joined #gluster
06:51 timg__ joined #gluster
06:53 skumar joined #gluster
07:04 TBlaar joined #gluster
07:05 jkroon joined #gluster
07:06 zcourts joined #gluster
07:10 ivan_rossi joined #gluster
07:10 ivan_rossi left #gluster
07:42 jkroon joined #gluster
07:49 hybrid512 joined #gluster
07:51 msvbhat joined #gluster
07:51 zcourts joined #gluster
07:55 TBlaar2 joined #gluster
07:59 rafi1 joined #gluster
08:02 kshlm joined #gluster
08:02 jkroon joined #gluster
08:18 rafi joined #gluster
08:21 kdhananjay joined #gluster
08:40 hgowtham joined #gluster
08:51 atinm joined #gluster
08:54 Alghost_ joined #gluster
09:00 rafi1 joined #gluster
09:11 neferty glisignoli: since i was looking for the same thing the other day :) https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Setting%20Up%20Volumes/
09:11 glusterbot Title: Setting Up Volumes - Gluster Docs (at gluster.readthedocs.io)
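Besides the admin-guide page linked above, the CLI itself can enumerate the tunables; a quick sketch, assuming a reasonably recent gluster release ("myvol" below is just a placeholder volume name):

    # describe every settable volume option and its default
    gluster volume set help

    # show the values currently in effect for one volume
    gluster volume get myvol all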
09:13 atinm joined #gluster
09:38 kdhananjay joined #gluster
09:52 hgowtham joined #gluster
09:57 kramdoss_ joined #gluster
10:00 sahina joined #gluster
10:09 msvbhat joined #gluster
10:22 kramdoss_ joined #gluster
10:26 sanoj joined #gluster
10:40 rastar joined #gluster
10:41 bfoster joined #gluster
10:43 sahina joined #gluster
10:45 caitlinb_ joined #gluster
10:46 caitlinb_ Hey, I’m looking at running gluster in aws and I’m a bit unsure about how i can structure my ec2 nodes and ebs volumes. Can I use ephemeral ec2 instances but persistent EBS volumes? or would nodes joining an autoscaling group need to replicate data into their own mount point?
10:46 Nemen` joined #gluster
10:47 Nemen` Hi there!
10:48 jkroon joined #gluster
11:05 marbu joined #gluster
11:06 legreffier caitlinb_: it's not a good idea to do that in an aws env
11:07 legreffier + i think you can emulate gluster behavior with aws services pretty well.
11:24 vbellur joined #gluster
11:25 Seth_Karlo joined #gluster
11:25 vbellur joined #gluster
11:27 Seth_Karlo joined #gluster
11:42 Koma joined #gluster
11:43 sanoj joined #gluster
11:45 Koma left #gluster
11:53 rastar joined #gluster
11:59 mbukatov joined #gluster
12:03 caitlinb_ legreffier: you’re advocating not running gluster in aws at all?
12:05 cloph why would you? amazon's storage setup achieves similar things already
12:08 sona joined #gluster
12:09 itisravi joined #gluster
12:12 caitlinb_ we’re having real problems with EFS and the iops model
12:12 caitlinb_ many small files
12:12 Klas gluster is horrible with small files
12:13 ashiq joined #gluster
12:13 caitlinb_ i am happy that my due diligence is finding these things out, thanks
12:13 chrisg Klas: do you have some articles on that? we're considering glusterfs for some things, specifically for filesystems that may contain many small xls documents or the likes for some analysts
12:14 chrisg caitlinb_: not glusterfs, but cephfs https://about.gitlab.com/2016/11/10/why-choose-bare-metal/
12:14 glusterbot Title: How We Knew It Was Time to Leave the Cloud | GitLab (at about.gitlab.com)
12:14 Klas the design, by its very nature, is HORRIBLE for small files
12:14 chrisg Klas: they go off to many places to see what this inode really is?
12:14 chrisg (in my naive understanding of gluster)
12:14 Klas every access of a file, every server having a claim to the file
12:14 Klas must answer
12:14 vbellur joined #gluster
12:14 Klas so, replica*network latency
12:15 Klas for access, per file
12:15 chrisg yeah this is the thing we're coming up against
12:15 chrisg inside one dc that's fine, but we can't stretch gluster across to even a geo close datacentre that we only have maybe 8-10ms latency between
12:15 vbellur joined #gluster
12:15 chrisg because gluster will go pop
12:15 bios_l_ joined #gluster
12:15 Klas I'm a novice myself, but it's REALLY not the place to look for fast access to many small files
12:16 Klas pretty much correct, yeah
12:16 vbellur joined #gluster
12:16 Klas gluster works well within the same datacentre, not so much between different ones
12:16 Klas except of course, georeplication
12:16 Klas but that creates read-only copies off-site
12:16 Klas as I understand it at least
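As a rough illustration of the replica-times-latency point above (the figures are assumptions for the sake of arithmetic, not measurements): if every file access needs at least one network round trip, gated by the slowest replica, before it can be served, then 10,000 small files read sequentially cost roughly

    ~0.5 ms per round trip (same datacentre)        -> ~5 s spent on lookups alone
    ~10 ms per round trip (8-10 ms stretched link)  -> ~100 s spent on lookups alone

which is why the 8-10 ms stretch discussed above hurts small-file workloads far more than raw bandwidth figures would suggest.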
12:17 chrisg i'd love multi datacentre writes but it's something we're struggling to find
12:17 chrisg yeah, that's redhats understanding too
12:17 chrisg i've been asking some guys there through our account manager
12:17 chrisg and it's the same asnwer
12:17 chrisg answer*
12:17 Klas torrent file system any good?
12:17 Klas haven't read up on it
12:17 Klas but that seems more or less designed for absurd distribution levels
12:17 chrisg I want something we can shout at redhat about ideally
12:18 Klas ah, so ceph or gluster then
12:18 chrisg yea
12:18 chrisg we're an estate of many thousands of systems, having the odd random open source unsupported thing is interesting, but not really viable when we have to make things supportable for ops teams etc
12:19 chrisg asking ops teams to do something special for many snowflakes in a large estate is a bit unfair
12:19 chrisg and also having that escalation route to vendor
12:20 Karan joined #gluster
12:24 skoduri joined #gluster
12:39 aravindavk joined #gluster
12:41 rwheeler joined #gluster
12:42 plarsen joined #gluster
12:56 shyam joined #gluster
12:56 caitlinb joined #gluster
12:56 WebertRLZ joined #gluster
12:58 Saravanakmr joined #gluster
13:00 ahino joined #gluster
13:02 vbellur joined #gluster
13:02 ahino1 joined #gluster
13:03 vbellur joined #gluster
13:03 vbellur joined #gluster
13:04 vbellur joined #gluster
13:06 marbu joined #gluster
13:07 sona joined #gluster
13:17 atinm joined #gluster
13:22 msvbhat joined #gluster
13:23 bios_l_ joined #gluster
13:24 kotreshhr joined #gluster
13:29 timg_____ joined #gluster
13:30 mbukatov joined #gluster
13:32 vbellur joined #gluster
13:35 Alghost joined #gluster
13:42 bios_l_ joined #gluster
13:59 buvanesh_kumar joined #gluster
14:06 buvanesh_kumar joined #gluster
14:10 sahina joined #gluster
14:14 prasanth joined #gluster
14:15 dgandhi joined #gluster
14:23 primehaxor joined #gluster
14:25 nbalacha joined #gluster
14:25 bowhunter joined #gluster
14:35 Teraii joined #gluster
14:35 kpease joined #gluster
14:41 sona joined #gluster
14:43 farhorizon joined #gluster
14:57 Seth_Karlo joined #gluster
14:58 wushudoin joined #gluster
14:59 wushudoin joined #gluster
15:06 farhorizon joined #gluster
15:08 timg__ joined #gluster
15:09 Seth_Karlo joined #gluster
15:14 msvbhat joined #gluster
15:14 hvisage joined #gluster
15:18 zcourts joined #gluster
15:22 timg_____ joined #gluster
15:23 rastar joined #gluster
15:29 ahino joined #gluster
15:29 farhoriz_ joined #gluster
15:36 Karan joined #gluster
15:45 Seth_Kar_ joined #gluster
15:47 sona joined #gluster
15:51 Shu6h3ndu joined #gluster
16:02 skoduri joined #gluster
16:14 ashiq joined #gluster
16:23 jbrooks joined #gluster
16:29 farhorizon joined #gluster
16:31 neferty i'm mounting gluster volumes on coreos/container linux, and my file atimes/mtimes are truncated to the second, could this be because of an old gluster version?
16:32 glisigno1i joined #gluster
16:34 neferty actually, yeah... my gluster client is 3.5.2, and my node/host is 3.11, could that be why my mtimes are truncated?
16:51 cholcombe neferty: i kinda doubt it but i'm not sure
16:52 cholcombe i was pretty sure that 3.4 had millisecond timestamps
16:53 neferty but the timestamps are only truncated on these old clients, and not on the up to date 3.11 ones :/
16:53 cholcombe that's interesting
16:53 cholcombe maybe there's an RPC mismatch
16:53 cholcombe neferty: anything interesting in the logs on the clients?
16:53 cholcombe or the server for that matter
16:54 neferty i haven't looked to be honest, and the client is going to be very tricky to inspect, since the client in this context is the control plane of kubernetes
16:55 neferty i'm compiling a custom hyperkube image at the moment so that i can update the client and see if it really is the client version mismatch
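For reference, a quick way to confirm which versions are actually in play before rebuilding anything, assuming the binaries are on PATH inside the image and on the host:

    # on the client side, e.g. inside the hyperkube container
    glusterfs --version

    # on the server nodes
    glusterd --version
    gluster --version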
16:56 neferty cholcombe: if that doesn't pan out, what am i looking for in the logs?
16:57 cholcombe neferty: i'd think anything with a ' E ' or a ' W ' to start with
16:57 shyam joined #gluster
16:58 neferty the server log is practically empty, aside from the startup message, i might need to kick the debug level up a notch
16:59 neferty oh, it's not in the journal log but its own log
16:59 cholcombe :)
17:00 neferty well, i can see errors but nothing that looks like it could be related to this
17:01 cholcombe ok
17:02 cholcombe neferty: so when you stat the files in the container you only get seconds instead of milli's?
17:03 neferty https://gist.github.com/andor44/f40f189e3b442455413339d3eb2c35c5
17:03 glusterbot Title: gist:f40f189e3b442455413339d3eb2c35c5 · GitHub (at gist.github.com)
17:03 cholcombe hmm ok
17:04 cholcombe now that you've run stat on the client do you see anything in the client logs?
17:04 kraynor5b_ joined #gluster
17:04 cholcombe you'll probably have to up the log level to debug or trace
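For reference, a couple of ways to raise the client-side log level (the volume name, server, and mount point below are placeholders):

    # per-volume setting, applied from any server node
    gluster volume set myvol diagnostics.client-log-level DEBUG

    # or at mount time on the client
    mount -t glusterfs -o log-level=DEBUG server1:/myvol /mnt/myvol

The FUSE client normally writes its log under /var/log/glusterfs/ on the mounting host.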
17:16 sona joined #gluster
17:21 farhorizon joined #gluster
17:22 farhorizon joined #gluster
17:23 JoeJulian Actually, neferty, it is because of the old version.
17:24 neferty hmm, is it?
17:24 ashiq_ joined #gluster
17:26 Seth_Karlo joined #gluster
17:26 _KaszpiR_ joined #gluster
17:28 neferty at the moment it doesn't seem that way
17:29 neferty now i am mounting with an up to date glusterfs it seems, and it's still truncated
17:29 Seth_Karlo joined #gluster
17:31 Vapez joined #gluster
17:31 Vapez joined #gluster
17:33 JoeJulian https://bugzilla.redhat.com/show_bug.cgi?id=1422074
17:33 glusterbot Bug 1422074: unspecified, unspecified, ---, bugs, CLOSED CURRENTRELEASE, GlusterFS truncates nanoseconds to microseconds when setting mtime
17:33 neferty oh wait, hold on... now `stat -c %Y foo` matches up
17:34 neferty JoeJulian: that's not the issue i was/am having, as for me it was truncating to whole seconds, not to microseconds
17:34 neferty but i did come across that bug
17:35 JoeJulian I do remember something about that from way back, but I'm not finding it in bugzilla.
17:35 neferty hmm, okay, hold on a sec, it might have been the version mismatch
17:36 neferty yes, it seems like it was the version mismatch, at least it is working now
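For reference, the truncation is easy to see with GNU stat (exact format-specifier support depends on the coreutils version):

    stat -c %Y somefile     # mtime as whole seconds since the epoch
    stat -c %y somefile     # human-readable mtime, including the fractional part
    stat -c %.9Y somefile   # epoch mtime with nanoseconds, on newer coreutils

On a truncating client the fractional part comes back as all zeros.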
17:36 gospod2 joined #gluster
17:39 JoeJulian ????
17:39 neferty either way, elasticsearch is no longer screaming at me that the mtime isn't what it was expecting it to be, and that's good enough for me :)
18:02 Karan joined #gluster
18:03 msvbhat joined #gluster
18:13 gospod2 joined #gluster
18:19 sona joined #gluster
18:28 gospod2 joined #gluster
18:30 Seth_Karlo joined #gluster
18:38 gospod2 joined #gluster
18:39 shyam joined #gluster
18:47 cliluw joined #gluster
18:51 rastar joined #gluster
18:51 cliluw joined #gluster
18:59 cliluw joined #gluster
19:18 paulds joined #gluster
19:19 paulds Hi all.  I have a question about cluster.op-version.
19:19 paulds On a server running 3.8.12, I get the following error:
19:20 paulds # gluster volume set all cluster.op-version 30804
19:20 paulds volume set: failed: Required op_version (30804) is not supported
19:20 paulds Question: why?  Shouldn't it support anything up to 30812?
19:21 paulds My clients only have 3.8.4, which is why I'm aiming for cluster.op-version 30804.
19:22 marlinc joined #gluster
19:40 DV joined #gluster
20:02 pioto joined #gluster
20:03 jkroon joined #gluster
20:05 JoeJulian paulds: The max op-version only changes if the rpc layer changes.
20:10 paulds JoeJulian: Ok, but the source implies that 30804 is a valid op-version.  See here: https://github.com/gluster/glusterfs/blob/release-3.9/libglusterfs/src/globals.h
20:10 glusterbot Title: glusterfs/globals.h at release-3.9 · gluster/glusterfs · GitHub (at github.com)
20:11 paulds Am I misunderstanding that?
20:12 timg__ joined #gluster
20:13 paulds server is currently running at just 30712.  so i'm wanting to update to the latest op-version the 3.8.4 clients can handle.  if it's not 30804, what would it be, and how would i determine that?
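For reference, a couple of ways to see where the cluster sits and what it could be raised to; the cluster.max-op-version query only exists in newer releases, so it may not be available on a 3.8 server:

    # current cluster op-version, readable on any server node
    grep operating-version /var/lib/glusterd/glusterd.info

    # on newer releases the CLI can report both values directly
    gluster volume get all cluster.op-version
    gluster volume get all cluster.max-op-version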
20:30 Seth_Karlo joined #gluster
20:43 timg__ joined #gluster
21:15 Seth_Karlo joined #gluster
21:16 Seth_Kar_ joined #gluster
21:31 Seth_Karlo joined #gluster
21:41 Acinonyx joined #gluster
22:15 gospod2 joined #gluster
22:31 gospod2 joined #gluster
22:49 gospod2 joined #gluster
23:00 gospod2 joined #gluster
23:03 Alghost joined #gluster
23:04 Alghost joined #gluster
23:05 eryc joined #gluster
23:05 eryc joined #gluster
23:11 Alghost_ joined #gluster
23:11 gospod2 joined #gluster
23:13 Alghost joined #gluster
23:19 bowhunter joined #gluster
23:45 zcourts joined #gluster
23:46 shyam joined #gluster
23:56 Alghost joined #gluster
23:59 timg__ joined #gluster
