
IRC log for #gluster, 2017-12-20


All times shown according to UTC.

Time Nick Message
00:02 JoeJulian Well, things are more like "guidelines" than "rules".
00:04 Leolo_2 yes.  I'm just wondering how badly I'll be bitten if I don't follow the guidelines
00:05 jri joined #gluster
00:06 Leolo_2 hmm.... I'm mounting with 'gl01-30:/gv00 /gv00 glusterfs backupvolfile-server=gl02-30 0 0'  if gl01-30 is down, this takes a while before glusterfs gives up and asks gl02-30
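    (A minimal fstab sketch for the mount above, assuming a reasonably recent glusterfs-fuse that accepts the backup-volfile-servers option; older clients only take the backupvolfile-server form Leolo_2 already uses. _netdev defers the mount until networking is up.)
        gl01-30:/gv00  /gv00  glusterfs  defaults,_netdev,backup-volfile-servers=gl02-30  0 0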
00:22 bturner joined #gluster
00:28 gyadav joined #gluster
00:29 shyam joined #gluster
00:33 ompragash joined #gluster
00:51 slacko_16322 joined #gluster
01:20 aravindavk joined #gluster
01:27 kramdoss_ joined #gluster
01:36 gospod2 joined #gluster
01:44 gospod3 joined #gluster
01:44 kenansulayman joined #gluster
01:45 nixpanic joined #gluster
01:45 nixpanic joined #gluster
01:50 susant joined #gluster
01:54 n-st joined #gluster
01:57 renout14 joined #gluster
01:58 ahino joined #gluster
02:09 Leolo_2 huh, gluster volume create will make the brick dir if it doesn't exist
02:23 MrAbaddon joined #gluster
02:30 bturner joined #gluster
02:41 gyadav joined #gluster
02:57 ilbot3 joined #gluster
02:57 Topic for #gluster is now Gluster Community - https://www.gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - https://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
03:02 gospod4 joined #gluster
03:04 rafi1 joined #gluster
03:10 nbalacha joined #gluster
03:12 susant joined #gluster
03:29 msvbhat joined #gluster
03:39 msvbhat joined #gluster
03:54 kramdoss_ joined #gluster
03:58 psony joined #gluster
03:59 ppai joined #gluster
03:59 ic0n joined #gluster
04:01 n-st joined #gluster
04:02 arpu joined #gluster
04:06 jri joined #gluster
04:07 n-st joined #gluster
04:09 n-st joined #gluster
04:09 itisravi joined #gluster
04:12 renout14 joined #gluster
04:34 n-st joined #gluster
04:37 atinm joined #gluster
04:39 susant joined #gluster
04:39 jiffin joined #gluster
04:42 msvbhat joined #gluster
04:43 renout14 joined #gluster
04:48 Shu6h3ndu__ joined #gluster
04:49 sunnyk joined #gluster
04:56 jiffin1 joined #gluster
05:07 sanoj joined #gluster
05:07 jiffin1 joined #gluster
05:14 sunnyk joined #gluster
05:17 skumar joined #gluster
05:19 vishnu_kunda joined #gluster
05:20 plarsen joined #gluster
05:22 hgowtham joined #gluster
05:23 vishnu_sampath joined #gluster
05:32 Prasad joined #gluster
05:33 karthik_us joined #gluster
05:36 armyriad joined #gluster
05:39 sanoj joined #gluster
05:40 jiffin joined #gluster
05:46 kotreshhr joined #gluster
05:54 skumar_ joined #gluster
05:59 skumar__ joined #gluster
06:03 gyadav_ joined #gluster
06:07 karthik_us joined #gluster
06:11 kramdoss_ joined #gluster
06:14 msvbhat joined #gluster
06:14 ompragash_ joined #gluster
06:17 ompragash joined #gluster
06:26 apandey joined #gluster
06:29 Saravanakmr joined #gluster
06:39 xavih joined #gluster
06:41 ompragash_ joined #gluster
06:47 susant joined #gluster
06:57 kdhananjay joined #gluster
07:12 siel joined #gluster
07:14 msvbhat joined #gluster
07:17 jtux joined #gluster
07:20 rastar joined #gluster
07:26 sanoj joined #gluster
07:27 hgowtham joined #gluster
07:45 msvbhat joined #gluster
07:46 ivan_rossi joined #gluster
07:56 vishnu_kunda joined #gluster
08:09 marc_888 joined #gluster
08:12 jri joined #gluster
08:18 fsimonce joined #gluster
08:24 Prasad_ joined #gluster
08:25 Teknologeek joined #gluster
08:25 Teknologeek Hello
08:25 glusterbot Teknologeek: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually
08:25 Teknologeek Morning everybody, i still have an issue with my glusterfs cluster
08:26 Teknologeek setup is distribute disperse 5 * ( 4 + 2 )
08:27 Teknologeek after server crash and reboot of all the nodes, i see only 1/6 of the storage size on the client mount point with df command
08:27 Teknologeek gluster volume status says all the bricks are up
08:28 Teknologeek this happens with all the volumes in the cluster
08:28 Teknologeek i really need help
08:29 Prasad__ joined #gluster
08:33 itisravi joined #gluster
08:36 ompragash__ joined #gluster
08:39 ompragash_ joined #gluster
08:45 ompragash joined #gluster
08:49 karthik_us joined #gluster
08:51 timmmey joined #gluster
09:04 timmmey joined #gluster
09:07 msvbhat joined #gluster
09:10 ThHirsch joined #gluster
09:12 ThHirsch joined #gluster
09:22 Prasad__ joined #gluster
09:25 karthik_us joined #gluster
09:26 kramdoss_ joined #gluster
09:32 sunnyk joined #gluster
09:32 ompragash_ joined #gluster
09:33 buvanesh_kumar joined #gluster
09:33 kdhananjay joined #gluster
09:33 Teknologeek please really need help :p
09:35 itisravi joined #gluster
09:37 kdhananjay joined #gluster
09:41 Leolo_2 joined #gluster
09:42 karthik_us joined #gluster
09:51 kusznir_ joined #gluster
10:00 MrAbaddon joined #gluster
10:01 rafi1 joined #gluster
10:09 hgowtham joined #gluster
10:18 Leolo_2 does inotify work on gluster?
10:21 kdhananjay joined #gluster
10:24 sunny joined #gluster
10:31 ompragash__ joined #gluster
10:33 Leolo_2 oooooooooooo it seems so !
10:34 ompragash__ joined #gluster
10:37 Teknologeek any glusterfs expert can help me with my issue please ?
10:37 Teknologeek need help to diagnose
10:52 MrAbaddon joined #gluster
10:59 jri joined #gluster
11:23 msvbhat joined #gluster
11:33 itisravi joined #gluster
11:36 Teknologeek hello
11:36 glusterbot Teknologeek: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually
11:38 Teknologeek when running gluster volume heal info on a disperse volume, i see some files in error, is it possible to heal them ?
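    (For reference, the heal commands in question; <volname> is a placeholder, and "full" crawls the whole volume rather than only the indexed entries, so it can be expensive:)
        gluster volume heal <volname> info    # list entries still pending heal
        gluster volume heal <volname>         # trigger an index heal
        gluster volume heal <volname> full    # force a full-volume crawl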
11:44 Acinonyx joined #gluster
12:01 gyadav joined #gluster
12:01 gyadav_ joined #gluster
12:09 nbalacha joined #gluster
12:14 kettlewell joined #gluster
12:23 karthik_us joined #gluster
12:27 ompragash joined #gluster
12:33 jri joined #gluster
12:43 Saravanakmr joined #gluster
13:04 phlogistonjohn joined #gluster
13:06 jri joined #gluster
13:10 StKlas Teknologeek: sounds a bit like you are mounting against arbiter?
13:10 StKlas I know that I've seen strange results when doing that
13:14 Teknologeek no
13:14 Teknologeek it is erasure coded
13:19 karthik_us joined #gluster
13:24 rastar joined #gluster
13:28 guhcampos joined #gluster
14:00 kpease joined #gluster
14:01 kramdoss_ joined #gluster
14:01 phlogistonjohn joined #gluster
14:07 shyam joined #gluster
14:10 logan- joined #gluster
14:19 jbrooks joined #gluster
14:28 drymek joined #gluster
14:28 jkroon joined #gluster
14:28 Teknologeek Any help would be appreciated
14:31 Teknologeek Is it normal behaviour to report only size of the bricks with df in glusterfs ?
14:31 Teknologeek I don't understand which size should be displayed
14:42 pdrakewe_ joined #gluster
14:43 jstrunk joined #gluster
14:50 sunnyk joined #gluster
14:51 Saravanakmr joined #gluster
14:59 plarsen joined #gluster
15:05 plarsen joined #gluster
15:07 shellclear joined #gluster
15:08 bennyturns joined #gluster
15:12 nbalacha joined #gluster
15:23 susant joined #gluster
15:30 buvanesh_kumar joined #gluster
15:34 jbrooks joined #gluster
15:37 ndevos Teknologeek: I'd start by checking the size of each brick that is listed in 'gluster volume info <volume>', maybe something is incorrectly mounted?
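    (A quick way to do what ndevos suggests: list the bricks, then check each brick path on its server to confirm the intended filesystem is really mounted there; paths are examples only.)
        gluster volume info <volname> | grep ^Brick
        df -h /bricks/brick1    # run on each server, for each brick path listed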
15:48 ivan_rossi left #gluster
16:02 Teknologeek joined #gluster
16:02 mlg9000 joined #gluster
16:06 mlg9000 joined #gluster
16:13 Teknologeek ty ndevos
16:19 kotreshhr joined #gluster
16:19 kotreshhr left #gluster
16:22 kramdoss_ joined #gluster
16:28 jbrooks joined #gluster
16:30 Prasad__ joined #gluster
16:40 Teknologeek I already checked the size of the bricks
16:40 Teknologeek they all are ok
16:41 Teknologeek when I launch a du -sh in the mount point, real size is displayed
16:41 Teknologeek df still displays 1/6th of the size
16:42 Teknologeek 5 * ( 4 + 2 ) distribute disperse setup
16:44 jri joined #gluster
16:46 Teknologeek it is as if df only reports the size of 1 subvolume
16:47 Teknologeek or .. dunno
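    (Rough arithmetic for a 5 x (4+2) distributed-disperse volume, assuming equally sized bricks of B each: every 6-brick disperse set carries 2 bricks of redundancy, so it contributes 4*B of usable space, and df on the client should report about 5 * 4 * B = 20*B in total. Seeing only a fraction of that is consistent with one or more subvolumes not being counted, e.g. a brick filesystem that is not actually mounted.)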
16:56 susant joined #gluster
17:05 Vapez joined #gluster
17:16 shellclear joined #gluster
17:16 jiffin joined #gluster
17:17 Teknologeek what is brick-fsid option for ?
17:17 Teknologeek i notice that it is 0 on all my servers except one
17:17 ic0n joined #gluster
17:24 jri joined #gluster
17:25 ompragash left #gluster
17:26 henrikJuul joined #gluster
17:28 shellclear joined #gluster
17:31 bennyturns joined #gluster
17:35 shellclear joined #gluster
17:41 shellclear joined #gluster
17:50 MrAbaddon joined #gluster
18:01 cloph Teknologeek: unfortunately no experience with dispersed volumes at all - but for distributed volumes files go "missing" if the bricks go down - but you already mentioned/made sure that all bricks are indeed up, so no other suggestions then..
18:02 drymek joined #gluster
18:02 Leolo_2 question : how does gluster react to different sized bricks in a replated volume?
18:02 Leolo_2 replicated
18:02 cloph fine with it - smallest brick determines how much data the volume can hold.
18:02 Leolo_2 will it report the free space as the smallest of the brick?
18:03 JoeJulian When one brick gets full, additional writes will write out to the not-full brick, but your file will not be in a healthy state.
18:03 Leolo_2 fair enough
18:03 JoeJulian I think it just randomly* reports the size of any one brick.
18:04 JoeJulian * not really random, but for all intents and purposes it's random enough.
18:04 Leolo_2 so I could hypothetically move a volume to larger disks just by replacing each brick, one at a time, waiting for heal
18:04 JoeJulian yes
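    (A sketch of the per-brick replacement JoeJulian confirms, with placeholder host and brick paths; "commit force" is the supported way to swap a brick, and the self-heal afterwards should be allowed to finish before touching the next brick.)
        gluster volume replace-brick <volname> server1:/bricks/old server1:/bricks/new commit force
        gluster volume heal <volname> info    # wait until no entries remain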
18:05 Leolo_2 "randomly*" as in glusterfs could ask any brick, non-deterministicly
18:14 JoeJulian No, it's deterministic. See cluster.read-hash-mode
18:24 gyadav_ joined #gluster
18:24 gyadav joined #gluster
18:33 bennyturns joined #gluster
18:43 MrAbaddon joined #gluster
18:52 atrius joined #gluster
18:55 Asako joined #gluster
18:56 Asako Hello.  Are geo-replication volumes read-only?  The docs don't really say.
19:08 cloph only makes sense if they are read-only - but not enforced
19:11 Asako ok, thanks
19:20 cholcombe Posting this so anyone else doing directory cataloging can benefit: https://github.com/cholcombe973/gpfind.  This edges out find + xargs by about 10% consistently
19:20 cholcombe careful though cause it can really set your cluster on fire
19:20 cholcombe feel free to file bugs and request enhancements.
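    (For comparison, the kind of find + xargs baseline cholcombe is measuring against: a parallel stat over a gluster mount. The mount path and parallelism are illustrative.)
        find /mnt/glustervol -type f -print0 | xargs -0 -P 8 -n 100 stat --format '%n %s' > catalog.txt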
19:23 Asako nice
19:28 WebertRLZ joined #gluster
19:33 vbellur1 joined #gluster
19:35 shellclear joined #gluster
19:35 vbellur joined #gluster
19:36 vbellur joined #gluster
19:37 vbellur joined #gluster
19:42 skylar1 joined #gluster
19:51 jobewan joined #gluster
19:55 Teknologeek joined #gluster
19:57 drymek joined #gluster
19:58 drymek Hi guys. I have kubernetes + glusterfs + heketi. It works, but it uses XFS and our databases (ArangoDB) refuse to run on it. After a reboot the file sizes mismatch. Is it possible to run heketi + glusterfs on ext4?
19:59 Asako gluster works with ext4.  Not familiar with heketi.
20:00 vbellur joined #gluster
20:00 Teknologeek still trying to understand why my storage does not report the full size with df
20:01 drymek I've found: https://github.com/chmod666org/scaleway-k8s it works like a charm (I'm pretty new to kubernetes). Do you know any resource which can help me to setup glusterfs + kubernetes on bare metals?
20:01 glusterbot Title: GitHub - chmod666org/scaleway-k8s: K8S deployment on scaleway servers (at github.com)
20:02 JoeJulian drymek: heketi doesn't care. Yes you can use ext4 but I doubt that'll make any difference.
20:02 JoeJulian Oh, wait... I'm thinking about that backwards.
20:03 vbellur1 joined #gluster
20:03 JoeJulian The question is about whether you can configure heketi to format ext4.
20:04 JoeJulian Not according to https://github.com/heketi/heketi/blob/255ff7a68de06d3016379817c213a9875c66107f/executors/sshexec/brick.go#L78
20:04 glusterbot Title: heketi/brick.go at 255ff7a68de06d3016379817c213a9875c66107f · heketi/heketi · GitHub (at github.com)
20:04 JoeJulian Wouldn't be hard to hack that to do ext4 though.
20:04 drymek actually I need to run glusterfs with ext4
20:05 drymek heketi is not required for me
20:05 JoeJulian Sure, any posix filesystem that supports extended attributes can be used with gluster.
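    (A quick check that a candidate brick filesystem supports the extended attributes gluster needs, run as root against a test file on the mounted filesystem; the attribute name is just an example in the trusted namespace gluster uses.)
        touch /bricks/test/file
        setfattr -n trusted.test -v check /bricks/test/file && getfattr -n trusted.test /bricks/test/file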
20:06 drymek but that chmod666org is a great tool for me, since I'm not familiar with kubernetes at all :-)
20:07 drymek is there any ansible / puppet / script or whatever to provision glusterfs + kubernetes ?
20:08 Teknologeek openshift can do the job for you
20:08 Teknologeek can be deployed with openshift-ansible
20:10 jstrunk drymek: there's https://github.com/gluster/gluster-kubernetes
20:10 glusterbot Title: GitHub - gluster/gluster-kubernetes: GlusterFS Native Storage Service for Kubernetes (at github.com)
20:11 drymek jstrunk: it's based on heketi too.
20:12 drymek openshift is out of the scope - have to run it on bare metals (privacy terms & conditions)
20:13 Teknologeek ah sorry :)
20:13 misc openshift can be installed on baremetal
20:14 Teknologeek i am looking for some inputs concerning a failed gluster in production :'(
20:15 JoeJulian Teknologeek: Oh, you're back. Have that log I asked for yet?
20:15 jstrunk drymek: Do you want/need gluster containerized or are you going to run separate servers?
20:16 Teknologeek which log sorry ?
20:16 drymek jstrunk: currently I have gluster as a container
20:17 drymek kubernetes workers have additional 250GBssd and I run kubernetes on them + glusterfs
20:17 JoeJulian I asked you to mount the volume to a new directory, check df on that mount and if it's wrong to post the log to a ,,(paste) site for that mount.
20:17 glusterbot For a simple way to paste output, install netcat (if it's not already) and pipe your output like: | nc termbin.com 9999
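    (What JoeJulian is asking for, roughly: mount the volume somewhere fresh, check df, and if it is still wrong paste the client log for that mount. The log path follows the usual convention of the mount point with slashes turned into dashes, so adjust it to match your system.)
        mkdir /mnt/dftest
        mount -t glusterfs server1:/myvol /mnt/dftest
        df -h /mnt/dftest
        cat /var/log/glusterfs/mnt-dftest.log | nc termbin.com 9999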
20:17 Teknologeek Yes i checked
20:17 kale joined #gluster
20:17 Teknologeek df still reports wrong size even in a new mount point
20:17 Teknologeek i tried with nfs and fuse mounts
20:17 kale is there any specific reason for the warning when creating a volume on a mountpoint?
20:18 JoeJulian Please continue reading beyond the phrase "if it's wrong"
20:18 jstrunk well, you could kubectl exec in and manage your volumes manually.
20:18 JoeJulian kale: Usually that means the brick device you intended to have your volume on didn't mount.
20:19 drymek jstrunk: Can I? Hmm...
20:19 kale JoeJulian: mount shows it, df shows it
20:19 jstrunk perhps I should back up... What are you trying to do?
20:19 Teknologeek so you think it is just a mount issue ?
20:19 Teknologeek on the client ?
20:19 JoeJulian Oh, "on" a mountpoint. Sorry, misread.
20:20 JoeJulian Teknologeek: I don't know without looking at the log.
20:20 kale also, where in FHS would i place the gluster volumes?
20:20 JoeJulian kale: Arguably /srv /data
20:21 JoeJulian There's a lot of controversy as to what's "correct"
20:21 kale JoeJulian: hm.. never heard about /data, will have to read up on that. i use /srv for now, so that seems to be fine
20:22 Teknologeek well
20:22 Teknologeek kale: let me give you an answer
20:23 Teknologeek kale: if you mount the brick on the mount point and mount point is not mounted for some reason you ll write data to / partition
20:23 kale Teknologeek: ok, i can live with that. i'll use the mountpoint.
20:23 Teknologeek kale: data may then be replicated / healed or whatever and your whole volume will be corrupted :)
20:23 kale Teknologeek: that i cannot live with .-)
20:24 kale but doesn't the daemon consider the missing .gluster* directory to be a big nough problem to stop starting the volume?
20:27 JoeJulian It does now, yes. There's a lot of legacy behind the protections.
20:27 Teknologeek kale: I am not expert enough, but I remember reading about this topic some time ago. I think it is good practice to use a subdirectory to avoid any issue. Anyways gluster refuses to create the volume on a mount point now, unless you force it explicitly
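    (The layout Teknologeek describes, sketched with placeholder names: the brick lives in a subdirectory of the mount, so if /bricks/gv00 ever fails to mount, gluster sees a missing brick directory instead of silently writing into the root filesystem.)
        mkfs.xfs /dev/vg0/gv00
        mount /dev/vg0/gv00 /bricks/gv00
        mkdir /bricks/gv00/brick
        gluster volume create gv00 replica 3 server1:/bricks/gv00/brick server2:/bricks/gv00/brick server3:/bricks/gv00/brick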
20:28 Gambit15 joined #gluster
20:30 drymek Can I choose filesystem on volume level or it's server level ?
20:31 jstrunk it's recommended to have a fs per brick, so "volume level" should be fine
20:33 drymek any keyword to search for example?
20:33 JoeJulian My bricks are usually lvm logical volumes formatted xfs or btrfs depending on my use case mounted to /data/<volume_name>/<pv device>/
20:36 Leolo_2 cluster.read-hash-mode doesn't seem to be documented ...
20:36 jstrunk joined #gluster
20:36 JoeJulian gluster volume set help
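    (To see the description Leolo_2 was after, and then to change the option on a volume; the value semantics are printed by the help output, so there is no need to guess them here.)
        gluster volume set help | grep -A3 cluster.read-hash-mode
        gluster volume set <volname> cluster.read-hash-mode <value>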
20:38 drymek thanks
20:41 Leolo_2 joe - thanks man
20:41 drymek works for me too :D
20:42 drymek (help command)
20:42 Leolo_2 I've been looking for where all those options are documented
20:43 JoeJulian The cli's surprisingly well documented considering how hard it is to get devs to update any other documentation.
20:45 Leolo_2 probably a silly question - all the examples have gluster volumes via VMs, on virtual drives.  for testing, this is fine.  is this also how I should be doing it in production?  or are gluster volumes on bare metal acceptable?
20:51 Teknologeek bare metal is good :)
20:53 Teknologeek I think it was designed to be a bare metal scale out NAS before being used as a block device storage for VMs
20:54 drymek how to create bricks?
20:54 drymek heketi used to do that for me.
20:54 drymek how to do it manually?
20:58 Leolo_2 drymek - a brick is simply a subdirectory, ideally inside its own fs.  any way you would create an fs (aka partion) will work.  parted, sfdisk, fdisk.  then LVM (if using).  then mkfs
20:58 Leolo_2 partion => partition
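    (A compact version of the steps Leolo_2 lists, assuming a spare disk at /dev/sdb and ext4 as drymek wants; the volume group, logical volume, and mount names are placeholders.)
        pvcreate /dev/sdb
        vgcreate vg_bricks /dev/sdb
        lvcreate -n brick1 -L 200G vg_bricks
        mkfs.ext4 /dev/vg_bricks/brick1
        mkdir -p /bricks/brick1
        mount /dev/vg_bricks/brick1 /bricks/brick1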
21:00 drymek ok, so it's more heketi related. Currently I have many bricks similar to:
21:00 drymek . /dev/mapper/vg_05ade3f0a7ec528cdcce6f03be189ad8-brick_b0a22e3a9239eb5d3af662b62e91d413 on /var/lib/heketi/mounts/vg_05ade3f0a7ec528cdcce6f03be189ad8/brick_b0a22e3a9239eb5d3af662b62e91d413 type xfs (rw,noatime,nouuid,attr2,inode64,logbsize=256k,sunit=512,swidth=512,noquota)
21:00 Leolo_2 good god
21:00 Leolo_2 yeah, those use LVM
21:02 Leolo_2 http://www.techoism.com/create-new-partition-using-lvm-linux/ fdisk+LVM+mkfs.  they use ext3.  you want xfs or ext4
21:02 glusterbot Title: How to Create New Partition using LVM in Linux - Techoism.com (at www.techoism.com)
21:02 drymek ext4
21:03 drymek Ok I think I got it.
21:03 Leolo_2 https://syedali.net/2014/02/24/adding-a-new-lvm-partition-with-gnu-parted/ parted+LVM but doesn't talk about mkfs
21:03 glusterbot Title: Adding a new LVM partition with GNU parted | Syed Ali (at syedali.net)
21:04 Leolo_2 wait, yes it does, but not in a <code> block
21:04 Leolo_2 am I alone in feeling that vg_ and lv_ prefixes are a bad idea?
21:05 drymek that's autoprefix
21:05 drymek not my choice for sure :-)
21:05 Leolo_2 I mean /dev/mapper/something-otherthing is still clearly an LVM partition, no?
21:06 Leolo_2 drymek - not saying it is.  those examples I posted also use vg_ and lv_.  CentOS uses vg_$(hostname) and lv_root if you do a default install.
21:07 drymek I'm not sysadmin, nor devops. Even with vg_ it sounds for me like a random string ;-)
21:09 plarsen joined #gluster
21:11 Leolo_2 well 05ade3f0a7ec528cdcce6f03be189ad8 is a random string :-)
21:11 Leolo_2 but you can use things like /dev/mapper/SCOTT-root
21:13 drymek I will, but first I have to confirm that I will be able to use manual volume with kubernetes
21:15 drymek anyway that sounds like a great plan to xmas :-)
21:16 drymek Thanks guys!
21:36 malevolent joined #gluster
21:45 jri joined #gluster
22:04 WebertRLZ joined #gluster
22:32 timmmey joined #gluster
23:13 shellclear joined #gluster
23:45 jri joined #gluster
23:54 ThHirsch joined #gluster
