
IRC log for #gluster, 2016-08-08


All times shown according to UTC.

Time Nick Message
00:20 shyam joined #gluster
00:30 Guest84 joined #gluster
00:44 PotatoGim joined #gluster
00:47 Vaelatern joined #gluster
00:56 repnzscasb joined #gluster
01:00 shdeng joined #gluster
01:23 minimicro joined #gluster
01:24 minimicro I am in the process of deploying gluster on a 16 node cluster that has 1Gb Ethernet
01:24 minimicro I had set up a second network using the second NIC card on each machine
01:24 minimicro and so I have a 192.168.10.* which is shared across all of my machines
01:25 minimicro and was trying to configure gluster, at least internally, to use the 192.168.50.* network
01:25 minimicro and am running into walls---
01:25 glusterbot minimicro: walls-'s karma is now -1
01:25 minimicro sorry running into issues on setting up the proper network/hostname info so it works
01:28 minimicro i am not sure if and how modifying/updating the /etc/hosts file in this case--- and how to tell gluster which network it should "probe" on
01:28 glusterbot minimicro: case-'s karma is now -1
01:30 aj__ joined #gluster
01:30 minimicro any guides on configuring gluster on multiple networks any one knows of?
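
A note on the multi-network question above: the usual approach is to give each node a hostname that resolves to its storage-network address and to peer-probe and create bricks using those names, so gluster traffic stays on that network. A minimal sketch, assuming hypothetical hostnames and the 192.168.50.* network mentioned above:

    # /etc/hosts on every node (hostnames and addresses are examples)
    192.168.50.1  gluster01-stor
    192.168.50.2  gluster02-stor

    # from one node, probe the others by their storage-network names
    gluster peer probe gluster02-stor

    # and reference the same names when creating the volume
    gluster volume create vol0 replica 2 gluster01-stor:/bricks/b1 gluster02-stor:/bricks/b1
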
01:31 dnunez joined #gluster
01:37 Legacy_ joined #gluster
01:37 Legacy_ Hello, I am trying to configure 3x node glusterfs with nfs-ganesha. I am trying to follow http://blog.gluster.org/2015/10/linux-scale-out-nfsv4-using-nfs-ganesha-and-glusterfs-one-step-at-a-time/
01:37 glusterbot Title: Linux scale out NFSv4 using NFS-Ganesha and GlusterFS — one step at a time | Gluster Community Website (at blog.gluster.org)
01:38 Legacy_ But I am having issue with shared_storage
01:38 Legacy_ I never see the folder shared_storage getting created
01:38 Legacy_ [root@GNode0 ~]# ls /var/run/gluster 8fab5d870263b35f262e348d66bf6aae.socket  snaps [root@GNode0 ~]#
01:38 minimicro joined #gluster
01:38 Legacy_ Even after running  gluster volume set all cluster.enable-shared-storage enable
01:38 minimicro ack! my machine crashed
01:38 minimicro did someone just send me a quote?
01:38 Legacy_ gluster peer are up & connected
01:39 minimicro multiple network connections?
01:39 Legacy_ Any idea anyone
01:39 minimicro not sure how many people are actively on
01:41 Legacy_ anyone any help here ?
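
A note on the shared-storage question above: enabling cluster.enable-shared-storage normally creates a volume named gluster_shared_storage and auto-mounts it on the nodes, so a quick sanity check (a sketch, assuming default paths) is:

    # was the volume actually created?
    gluster volume list | grep gluster_shared_storage
    gluster volume info gluster_shared_storage

    # it is typically auto-mounted here rather than appearing directly under /var/run/gluster
    mount | grep shared_storage
    ls /run/gluster/shared_storage

    # glusterd's log may say why the hook script failed
    tail -n 50 /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
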
01:47 ilbot3 joined #gluster
01:47 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:54 ahino joined #gluster
01:58 Lee1092 joined #gluster
02:23 minimicro joined #gluster
02:23 minimicro Can anyone let me know what the expected write performance for gluster should be on a 1GB network?
02:23 minimicro at least a ballpark
02:24 minimicro im getting relatively low write performance (50 MB/s)
02:24 minimicro this is on a setup with 8 machines, 2 bricks per machine
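
A note on the throughput question above: if the volume is replicated, the FUSE client writes to every replica itself, so on 1Gb Ethernet a replica-2 volume tops out around half of the ~110 MB/s wire speed from a single client, making ~50 MB/s unsurprising. A rough way to compare (a sketch with hypothetical paths; iperf only if it is installed):

    # sequential write test on the fuse mount
    dd if=/dev/zero of=/mnt/glustervol/ddtest bs=1M count=1024 conv=fdatasync

    # raw network throughput between two nodes, for comparison
    iperf -s                 # on one node
    iperf -c 192.168.10.2    # on another (address is an example)
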
02:35 Guest84 joined #gluster
02:35 julim joined #gluster
03:01 Gambit15 joined #gluster
03:14 ahino joined #gluster
03:15 Gnomethrower joined #gluster
03:18 magrawal joined #gluster
03:23 wadeholler joined #gluster
03:34 kshlm joined #gluster
03:37 luis_silva joined #gluster
03:38 luis_silva1 joined #gluster
03:42 Gnomethrower joined #gluster
03:42 itisravi joined #gluster
03:46 atinm joined #gluster
03:47 ahino joined #gluster
04:12 arcolife joined #gluster
04:22 rafi joined #gluster
04:23 aspandey joined #gluster
04:38 Muthu joined #gluster
04:43 kdhananjay joined #gluster
04:46 poornimag joined #gluster
04:54 sanoj joined #gluster
05:01 jiffin joined #gluster
05:01 skoduri joined #gluster
05:02 hgowtham joined #gluster
05:03 karthik_ joined #gluster
05:08 harish_ joined #gluster
05:10 hchiramm joined #gluster
05:10 RameshN joined #gluster
05:15 itisravi joined #gluster
05:19 gem joined #gluster
05:21 prasanth joined #gluster
05:23 ppai joined #gluster
05:24 karnan joined #gluster
05:25 msvbhat joined #gluster
05:27 atalur joined #gluster
05:31 nbalacha joined #gluster
05:33 jiffin1 joined #gluster
05:34 ankitraj joined #gluster
05:34 Rick_ joined #gluster
05:34 kovshenin joined #gluster
05:36 rafi1 joined #gluster
05:38 karthik_ joined #gluster
05:38 ndarshan joined #gluster
05:47 skoduri joined #gluster
05:49 sanoj joined #gluster
05:52 aravindavk joined #gluster
05:52 aspandey joined #gluster
05:54 rafi joined #gluster
05:55 Bhaskarakiran joined #gluster
05:57 ramky joined #gluster
06:01 satya4ever joined #gluster
06:02 kotreshhr joined #gluster
06:17 javi404 joined #gluster
06:17 [diablo] joined #gluster
06:17 Alghost joined #gluster
06:18 pur joined #gluster
06:22 rastar joined #gluster
06:22 rastar joined #gluster
06:24 mchangir joined #gluster
06:26 jtux joined #gluster
06:26 poornimag joined #gluster
06:28 jiffin joined #gluster
06:41 hchiramm joined #gluster
06:41 skoduri joined #gluster
06:50 arcolife joined #gluster
06:51 devyani7_ joined #gluster
06:51 aj__ joined #gluster
06:52 devyani7_ joined #gluster
06:54 wushudoin joined #gluster
07:01 lezo joined #gluster
07:03 nehar joined #gluster
07:03 Philambdo joined #gluster
07:04 anil_ joined #gluster
07:05 msvbhat joined #gluster
07:08 masuberu joined #gluster
07:19 ramky joined #gluster
07:29 Alghost joined #gluster
07:31 fsimonce joined #gluster
07:43 nehar joined #gluster
07:50 prasanth joined #gluster
07:51 rafi1 joined #gluster
07:53 aspandey_ joined #gluster
07:57 Wizek joined #gluster
08:03 hgowtham joined #gluster
08:05 mbukatov joined #gluster
08:05 prasanth joined #gluster
08:07 kotreshhr joined #gluster
08:16 aj__ joined #gluster
08:19 Slashman joined #gluster
08:20 ira joined #gluster
08:21 deniszh joined #gluster
08:22 pur joined #gluster
08:26 Wizek joined #gluster
08:27 Siavash joined #gluster
08:32 atalur_ joined #gluster
08:33 kdhananjay joined #gluster
08:39 pur joined #gluster
08:41 hackman joined #gluster
08:42 anil_ joined #gluster
08:43 karthik_ joined #gluster
08:44 deniszh1 joined #gluster
08:45 aj__ joined #gluster
08:47 harish_ joined #gluster
08:47 aspandey joined #gluster
08:49 cloph joined #gluster
08:51 sanoj joined #gluster
08:55 cloph Hi - question about geo-replication and the changelog processing.. It seems that changelogs pile up at a greater rate than they are processed on a volume with lots of changes, but neither the geo-replication link nor the slave-io seems to be the limiting part.
08:56 siada joined #gluster
08:56 Siavash joined #gluster
08:56 cloph it transfers one changelog. Processes it on the slave, then sits idle, waiting for the next...
08:56 siada I've got a gluster volume that's not in use but I can't remove it because "Another transaction could be in progress", is there anyway to force remove? I'm just trying to reset my gluster volumes  back to default and start over
08:57 cloph siada: how long did you wait?
08:58 siada well the server has just freshly rebooted, so in theory gluster shouldn't be doing anything as the volume isnt mounted?
08:58 siada i'm 95% sure the volume is corrupt as I did some bad stuff I shouldn't have done (hence why I'm resetting it)
08:59 atalur_ joined #gluster
09:00 legreffier joined #gluster
09:00 jiffin siada: one of the glusterd processes took a lock on that gluster volume; the only thing which comes to my mind is restarting the glusterd daemon on all nodes
09:07 siada jiffin: how do I restart glusterd? neither /etc/init.d/glusterd or /etc/init.d/glusterfsd exist
09:07 kshlm siada, Which distribution are you using?
09:07 post-factum siada: systemctl restart glusterd, maybe
09:07 siada I've tried service glusterfs-server stop / service glusterfs-server start -- however still getting "Another transaction could be in progress"
09:08 siada kshlm: ubuntu 14.04
09:08 post-factum oh
09:08 post-factum what glusterfs version?
09:08 kshlm siada, Did you restart on all machines.
09:08 siada kshlm: yes, post-factum: 3.4.2
09:08 post-factum ohhh
09:09 post-factum kshlm: how should we prevent users from using outdated versions?
09:09 kshlm siada, Could you try a newer version please? 3.4.2 is really old.
09:09 kshlm post-factum, As long as they're being provided by the default repositories of a distribution it's gonna be really hard.
09:10 siada it's the highest version available by default on 14.04 :(
09:10 kshlm I don't think we'll be able to get Ubuntu to remove the old version from their repos.
09:10 kshlm siada, Use the official glusterfs ppa https://launchpad.net/~gluster
09:10 glusterbot Title: Gluster in Launchpad (at launchpad.net)
09:10 kshlm In any case since you're stuck and want to restart,
09:11 kshlm You could just nuke /var/lib/glusterd/ to clear out all configs.
09:11 kshlm Also delete the brick paths.
09:11 kshlm And after that install a newer version and try.
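
Putting the advice above together for Ubuntu 14.04 (a sketch; the PPA series shown is an example, check https://launchpad.net/~gluster for current ones, and the brick path is hypothetical):

    # restart glusterd on every node
    sudo service glusterfs-server restart

    # or, to start over completely: stop it, wipe the config, remove the brick data
    sudo service glusterfs-server stop
    sudo rm -rf /var/lib/glusterd/
    sudo rm -rf /bricks/brick1          # example brick path

    # install a newer release from the official PPA
    sudo add-apt-repository ppa:gluster/glusterfs-3.8
    sudo apt-get update && sudo apt-get install glusterfs-server
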
09:13 Mmike joined #gluster
09:13 gem joined #gluster
09:14 deniszh joined #gluster
09:15 siada Updating to 3.8.1 seems to have partly fixed my situation, should be able to get to the end on my own now :D
09:15 cloph siada: just don't create a striped volume
09:16 siada the reason I ended up in this mess was because I did the one thing that everyone seems to tell you not to do ... edit the volume directly instead of mouting it *facepalm*
09:24 siada how do I find out what type of split-brain I've got?
09:43 msvbhat joined #gluster
09:50 kdhananjay joined #gluster
09:57 aspandey joined #gluster
10:04 aj__ joined #gluster
10:07 msvbhat joined #gluster
10:08 Muthu_ joined #gluster
10:09 siada kshlm / post_factum: thanks for the assistance, cluster is finally back up and running correctly :D
10:10 siada ls -l
10:10 siada .. woops
10:15 siada ok it's.. mostly working, has something changed between 3.4 and 3.8 to do with symlinks? my gluster volumes have a symlink to a folder that exists on all nodes, however once everything's running under gluster the link is completely broken, when i ls -l the directory, the symlink just shows as "???????? ? ? ? ? <filename>" and I can't remove it or do anything with it, just get "input/output error"
10:18 karthik_ joined #gluster
10:19 pa9tv joined #gluster
10:20 pa9tv is there a way to force bitrot scrubbing to start now?
10:21 pa9tv i've tried:
10:21 pa9tv gluster volume bitrot vbox enable
10:21 pa9tv gluster volume bitrot vbox scrub-throttle aggressive
10:21 pa9tv gluster volume bitrot vbox scrub-frequency daily
10:21 pa9tv gluster volume bitrot vbox scrub resume
10:23 pa9tv the intention is, to have a scrub every day, scheduled by cron.
10:23 gem joined #gluster
10:25 itisravi joined #gluster
10:26 cloph urgh, if you don't trust your disks to that extent, what is the point of using them for storage?
10:28 ndevos pa9tv: kotreshhr might be able to help with your bitrot questions
10:29 pa9tv ndevos, kotreshhr is away i think.
10:29 pa9tv i'll hang a while.
10:29 ndevos pa9tv: maybe, but he'll likely respond when he comes back
10:29 pa9tv rgr!
10:30 ndevos cloph: it can be disk problems, cables, controller-firmware or even RAM errors - and probably many more
10:31 ndevos cloph: written to disk once, does not mean you will always read the same data at a later point in time
10:32 cloph oh, I don't deny that bitrot can happen, just that a daily scan just is too pessimistic from my POV - or otherwise put: if integrity of the data is so important, I'd rather have additional checksums that the client can check / won't rely on the storage to have correct data
10:32 ndevos it is not a very common issue, but with bigger disks the chances increase, and for archiving some bitrot detection is quite helpful to find failing storage
10:32 ndevos ah, yes, daily is maybe a little often
10:33 shdeng joined #gluster
10:33 pa9tv cloph, ndevos, in any case a predictable moment when the scrub takes place is helpful.
10:34 ndevos pa9tv: yes, I agree with that :)
10:35 cloph yeah, sorry for sidetracking (don't use it myself as underlying stuff is RAID10) - but what does the bitrot status say?
10:36 pa9tv v3.7.6 i cannot find bitrot status cmd.
10:38 cloph no "volume bitrot name-of-volume scrub status" command? /me has that in 3.7.11
10:39 pa9tv # gluster volume bitrot vbox scrub status
10:39 pa9tv Invalid scrub option for bitrot
10:41 sanoj joined #gluster
10:41 muneerse joined #gluster
10:44 luis_silva joined #gluster
10:52 rastar joined #gluster
11:06 jith_ joined #gluster
11:07 jith_ Hi all.. I have tried swift storage with a glusterfs backend, which ended in failure. Can a glusterfs volume be used as a swift backend without swiftonfile?
11:08 Guest_83747 joined #gluster
11:09 Guest_83747 Allah is doingn
11:09 Guest_83747 left #gluster
11:12 jiffin jith_:  ppai can help you out
11:12 jith_ thanks jiffin
11:12 ppai jith_, hi
11:13 anoopcs cloph, pa9tv : bitrot scrub status command is only available from v3.7.7
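
For reference, on 3.7.7 and later the scrub commands for the volume above look like this (same volume name as used above):

    gluster volume bitrot vbox scrub status
    gluster volume bitrot vbox scrub pause
    gluster volume bitrot vbox scrub resume
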
11:13 ppai jith_, swiftonfile is suited for cases where you have an existing swift cluster and want glusterfs as one of the storage policy backends
11:13 ppai jith_, there is also gluster-swift which will provide swift object interface in glusterfs cluster
11:14 jith_ ppai: hi, ya i read that..so i can directly use  the glusterfs volume for swift backends without swiftonfile??
11:14 ppai jith_, yes you can
11:15 ppai jith_, provided you only use glusterfs to store objects
11:16 ppai jith_, the differences between two projects are listed here: https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Object%20Storage/
11:16 glusterbot Title: Object Storage - Gluster Docs (at gluster.readthedocs.io)
11:17 jith_ ppai: thanks.. which will be better? if i am adding glusterfs to the existing swift i can go for swiftonfile.. right??
11:17 jith_ ppai.. thanks really
11:18 ppai jith_, if you have existing swift cluster with need for different backends i.e xfs, gluster, ceph then swiftonfile is the right choice
11:19 ppai jith_, if your primary way of consuming storage is exclusively via gluster, you can use gluster-swift. You don't need to maintain a swift cluster when using gluster-swift
11:19 jtux joined #gluster
11:19 hackman joined #gluster
11:20 jith_ ppai.. thanks.. so it means i don't need to install swift??
11:21 ppai jith_, well both projects have a dependency on swift. So you'd have to install swift packages
11:22 jith_ ppai: thanks without gluster-swift i cant do that?
11:22 deniszh1 joined #gluster
11:22 jith_ i dont understand the primary use of it.. i m new to this
11:24 ppai jith_, Ok. glusterfs has various access methods and protocols such as FUSE, NFS, SMB. Object is one of the access methods. Object access can be done over HTTP protocol using Swift API or S3 API.
11:24 ppai jith_, So the primary use is to let HTTP clients store and retrieve objects (file) in a glusterfs volume
11:27 jith_ ppai, i can access it from swift api also na.. i just wanted to configure it for glance backend
11:28 aj__ joined #gluster
11:29 ppai jith_, yes you can access it over swift API. I remember glance could directly use glusterfs as backend instead of having to go over swift
11:30 jith_ ppai, sorry if i am troubling you.. is gluster-swift needed for accessing the glusterfs volume other than via swift, or does it have any other purpose
11:30 jith_ ya i sa that in config file... so no need of swift in that?
11:31 jith_ ya i saw *
11:31 ppai jith_, yep for glance you don't need swift or gluster-swift or swiftonfile. glance can directly use glusterfs: https://www.rdoproject.org/storage/Glance/using-glusterfs-for-glance-with-rdo/
11:31 glusterbot Title: Using GlusterFS for Glance with RDO Liberty RDO (at www.rdoproject.org)
11:32 jri joined #gluster
11:32 jith_ ppai, in that case, glance backend will be file storage right not object storage(swift)
11:33 ppai jith_, correct
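
The direct-glance setup from that link boils down to FUSE-mounting the volume where glance keeps its images and leaving glance on its plain file store; a sketch, with hypothetical host/volume names (the config key lives in [glance_store] on newer releases, [DEFAULT] on older ones):

    # mount the gluster volume over glance's image directory
    mount -t glusterfs gluster-host1:/glustervol /var/lib/glance/images
    chown -R glance:glance /var/lib/glance/images

    # then in glance-api.conf:
    #   default_store = file
    #   filesystem_store_datadir = /var/lib/glance/images
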
11:33 jith_ ppai, thanks for the guidance, if we want object storage i should go for swift with glusterfs??
11:34 ppai jith_, right. you have a choice between swift+swiftonfile and swift+gluster-swift
11:35 jith_ without gluster-swift or swiftonfile i can't use swift and glusterfs?? because i tried it and it was not successful
11:36 shubhendu joined #gluster
11:38 ppai jith_, you can but that isn't straight forward.
11:40 jith_ ppai, I didnt use gluster-swift or swiftonfile.. I configured glusterfs in two machines(A and B) and created a glusterfs volume and mounted in machine C.. In third machine(C) i installed swift and added the mounted disk (glusterfs volume) in swift ring for storing objects..  It ends in failure
11:41 jith_ what should i do in this case??
11:42 ppai jith_, what fails ? how have you configured your rings ?
11:42 kovshenin joined #gluster
11:43 jith_ yes i configured rings with the mounted volume as device
11:44 ppai jith_, could you share how you created the rings?
11:45 jith_ sure.. but why did you say it is not straightforward.. sorry for the trouble
11:47 ppai jith_, I meant about generating ring files, placing the DBs and turning off replication
11:48 karnan joined #gluster
11:49 jith_ swift-ring-builder account.builder create 10 1 1
11:49 jith_ swift-ring-builder account.builder add r1z1-10.10.15.151:6002/data 100
11:49 jith_ swift-ring-builder account.builder rebalance
11:49 mchangir joined #gluster
11:50 ppai I'm assuming you have similarly built ring files for account, container and object. And glusterfs is FUSE mounted at path /data ?
11:51 jith_ ppai, but i have a doubt, i have mounted in /data directory.. used the same for creating ring files... should i have created some subdirectory under /data for creating swift rings?
11:51 jith_ ppai, yes same steps
11:51 jith_ yes yes
11:52 satya4ever joined #gluster
11:52 ppai jith_, that's good enough. You'd have to update 'devices' path in all account, container, object conf files too
11:53 ppai jith_, https://github.com/openstack/swift/blob/master/etc/object-server.conf-sample#L8
11:53 glusterbot Title: swift/object-server.conf-sample at master · openstack/swift · GitHub (at github.com)
11:56 jith_ ppai, ya i did
11:56 ppai jith_, so effectively look for /srv/node/data which is where the glusterfs volume should be mounted
11:57 jith_ ppai, sorry the mounted path is /mnt/data
11:57 ppai jith_, if that's the case, use 'data' in ring and change 'devices' to '/mnt'
11:57 jith_ but i have created rings with only data..
11:58 jith_ yes i did
11:58 ppai which is fine
11:58 jith_ what is /srv/node/data?
11:59 ppai that is the default path where swift will look for the mount point unless you change it
12:00 karnan joined #gluster
12:02 baojg joined #gluster
12:02 jith_ oh ok... where i have to specify it
12:03 jith_ so i should mount the glusterfs volume in /srv/node?
12:03 ppai where do you have glusterfs mounted ?
12:03 ppai Is it at "/mnt/data" ?
12:03 jith_ yes
12:03 jith_ exactly
12:04 ppai then just change 'devices' field in each of the 3 conf files (account, container, object) to '/mnt'
12:05 jith_ ppai: i did mount.glusterfs gluster-host1:/glustervol /mnt/data
12:05 jith_ it was there already
12:06 jith_ [DEFAULT]
12:06 jith_ bind_ip = 10.10.15.151
12:06 jith_ bind_port = 6002
12:06 jith_ # bind_timeout = 30
12:06 jith_ # backlog = 4096
12:06 jith_ user = root
12:06 jith_ swift_dir = /etc/swift
12:06 jith_ devices = /mnt
12:06 jith_ # mount_check = true
12:06 jith_ # disable_fallocate = false
12:06 jith_ this is part of account-server.conf..
12:07 ppai make the same changes to container-server.conf and object-server.conf
12:07 ppai and restart the swift services - swift-init main restart
12:08 jith_ ok if these steps are right.. i will try again with another setup... because i haven't checked whether glusterfs was working right after creating it...
12:08 jith_ ppai, i have done that already
12:08 ppai jith_, alright
12:08 jith_ ppai, thanks for the guidance, i will do it again and will let you know
12:08 ppai jith_, sure.
12:09 jith_ ppai, thanks
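
To summarise the setup discussed above in one place (same IP, mount point and device name as in the conversation; the container/object ports are the usual swift defaults):

    # glusterfs volume FUSE-mounted as device "data" under /mnt
    mount -t glusterfs gluster-host1:/glustervol /mnt/data

    # account ring (repeat for container.builder on 6001 and object.builder on 6000)
    swift-ring-builder account.builder create 10 1 1
    swift-ring-builder account.builder add r1z1-10.10.15.151:6002/data 100
    swift-ring-builder account.builder rebalance

    # in account-server.conf, container-server.conf and object-server.conf set:
    #   devices = /mnt
    # (mount_check may need to be false if swift rejects the FUSE mount point)

    swift-init main restart
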
12:11 jith_ ppai, i hope u have got my requirement.. so why is gluster-swift needed in this case.. i mean in which part..
12:12 ppai jith_, without gluster-swift, you'll have to use sqlite DBs for account and container server.
12:12 ppai so your glusterfs volume will store objects and the DBs
12:13 ppai But when you use gluster-swift, you don't need to use DBs or manage ring files manually, but at the expense of slower listings
12:20 jith_ ppai, thanks
12:35 baojg joined #gluster
12:40 unclemarc joined #gluster
12:55 plarsen joined #gluster
12:57 pur joined #gluster
12:59 bwerthmann joined #gluster
12:59 baojg joined #gluster
13:04 mchangir joined #gluster
13:10 baojg joined #gluster
13:19 pdrakeweb joined #gluster
13:28 ctria joined #gluster
13:29 Pupeno joined #gluster
13:29 hgowtham joined #gluster
13:33 julim joined #gluster
13:36 jiffin joined #gluster
13:43 atalur_ joined #gluster
13:45 shyam joined #gluster
13:45 Bhaskarakiran joined #gluster
13:53 baojg joined #gluster
13:53 BitByteNybble110 joined #gluster
13:56 hagarth joined #gluster
14:04 bowhunter joined #gluster
14:09 archit_ joined #gluster
14:11 baojg joined #gluster
14:11 muneerse joined #gluster
14:13 ppai joined #gluster
14:18 msvbhat joined #gluster
14:20 rwheeler joined #gluster
14:25 BitByteNybble110 joined #gluster
14:27 ira joined #gluster
14:30 luis_silva joined #gluster
14:33 skylar joined #gluster
14:33 nbalacha joined #gluster
14:34 skylar left #gluster
14:37 skylar joined #gluster
14:38 baojg_ joined #gluster
14:38 side_control is it normal to have files in a constant state of 'possibly undergoing heal' ?
14:39 pdrakeweb joined #gluster
14:40 pur joined #gluster
14:42 post-factum side_control: for how long?
14:42 post-factum side_control: does "heal full" help?
14:43 Acinonyx joined #gluster
14:43 side_control post-factum: i think this time its about 14 hours, one file is in split-brain and can't seem to heal
14:43 post-factum side_control: is it in split-brain for real?
14:44 side_control the one file, recovered, but went back into split-brain 5 minutes later
14:44 ndevos side_control: I think it is normal for files that are currently having i/o, the possible split-brain gets detected when transactions are in progress
14:45 ndevos side_control: only when the transaction has finished, it is certain that the file is (not) in split-brain
14:45 post-factum side_control: rephrasing ndevos, is that file big and actively used?
14:45 side_control ndevos: again, its weird though, that some other files on other vols are impacted by this heal
14:45 side_control post-factum: atm, no
14:45 post-factum ndevos: we don't have heal progress indicator, do we?
14:46 ndevos side_control: that definitely sounds weird
14:46 ndevos post-factum: not that I am aware of
14:46 * ndevos normally passes self-heal questions on to itisravi, kd<tab> or pranithk
14:47 side_control im seeing two issues that have been causing me a lot of problems..  gluster mounts will fail after X amount of time, causing the VMs to pause, i digress.. i'll just wait for a reply on the user-list
14:47 side_control i'll wait for this to heal and see what happens
14:48 post-factum side_control: which gluster version do you have?
14:48 side_control 3.8.1
14:51 post-factum side_control: is that replica 2 volume?
14:52 side_control replica 3 /w arbiter
14:52 post-factum side_control: hoho, looks like a bug then
14:52 dnunez joined #gluster
14:52 post-factum side_control: replica 3 arbiter 1 should not have split-brains at all
14:53 side_control hrmmm
14:54 side_control why is that?
14:54 post-factum why is what? why split brain happens?
14:54 side_control post-factum: no, why shouldnt splitbrain happen?
14:55 side_control if all nodes went offline, came back up, split-brain should happen but it should be able to heal itself no?
14:55 post-factum side_control: because volume should go read-only if server-side quorum is not met, and then no split-brains could happen
14:55 post-factum side_control: if it was "hard outage", and some parts were not replicated across nodes, then yes, it could happen
14:56 post-factum side_control: do you observe healing actually happens atm? like glusterfsd process cpu usage, traffic between hosts
14:56 wushudoin joined #gluster
14:57 post-factum glusterfs shd cpu usage as well
14:57 side_control post-factum: the infiniband gear i've upgraded to, has been, well having issues, connectivity was dropping like flies for a while
14:57 wushudoin joined #gluster
15:00 side_control post-factum: yea its definitely doing something
15:00 post-factum side_control: i guess you should wait then
15:04 side_control post-factum: welp, thanks for the info, we'll see how it goes
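
A few commands that may help see what is actually pending or split-brained in a situation like the one above (volume name is a placeholder):

    gluster volume heal myvol info                    # entries still needing heal
    gluster volume heal myvol info split-brain        # entries in real split-brain
    gluster volume heal myvol statistics heal-count   # pending count per brick
    gluster volume heal myvol full                    # trigger a full self-heal crawl
    gluster volume get myvol cluster.server-quorum-type   # quorum settings mentioned above
    gluster volume get myvol cluster.quorum-type
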
15:13 atalur_ joined #gluster
15:13 hagarth joined #gluster
15:42 pdrakewe_ joined #gluster
15:43 hchiramm joined #gluster
16:01 msvbhat joined #gluster
16:19 cloph I wonder what a suitable features.shard-block-size is for replica 3 volume for multiple qemu disk-images in the 50-100GB range. the default of 4MB seems very low - what are the rules of thumb here?
16:20 mahdi joined #gluster
16:22 Gambit15 joined #gluster
16:26 mahdi Hi, GlusterFS with distributed replica using nfs-ganesha, im seeing lots of traffic on "lo" interface, is this normal ?
16:28 mahdi my network setup is two 10G bonded interface (LACP)
16:28 mahdi on the native nfs i do not see such a traffic on the loopback interface
16:32 post-factum cloph: 128–256M might be reasonable choice
16:36 cloph thx
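
For reference, sharding is set per volume (volume name here is a placeholder); the block size only applies to files created after the change, so it is best chosen before the disk images are written:

    gluster volume set myvol features.shard on
    gluster volume set myvol features.shard-block-size 256MB
    gluster volume get myvol features.shard-block-size
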
16:38 minimicro joined #gluster
16:47 Acinonyx joined #gluster
16:50 bagualistico joined #gluster
16:54 Acinonyx joined #gluster
16:56 rafi joined #gluster
16:57 pramsky joined #gluster
17:07 Acinonyx joined #gluster
17:24 aj__ joined #gluster
17:25 pramsky hey everyone, i have a 2 node gluster in replicate mode, I copied a few GB of files to one of the nodes. But I do not see the disk usage of the brick on the second node match the first. The gluster mounts themselves update correctly.
17:26 pramsky the nodes are connected to each other via GB ethernet
17:27 bowhunter joined #gluster
17:32 gnulnx Anyone able to get the glusterfs fuse mount working on freebsd?  I loaded the 'fuse' module, wondering if I need any other modules / ports installed.
17:36 dnunez joined #gluster
17:38 msvbhat joined #gluster
17:42 jiffin joined #gluster
17:51 Pupeno joined #gluster
17:51 Pupeno joined #gluster
17:52 rafi joined #gluster
17:56 JoeJulian pramsky: gluster is not a replication service, it's a clustered filesystem. Do not write directly to the bricks, always mount the volume and write to that mount.
17:56 bwerthmann joined #gluster
17:57 JoeJulian gnulnx: as far as I know, fuse should be sufficient. Are you getting errors?
17:58 gnulnx JoeJulian: I was, yes, but a generic one.  I just got it working by calling mount_glusterfs as opposed to mount -t glusterfs
17:59 poornimag joined #gluster
17:59 JoeJulian +1
17:59 gnulnx It looks like fbsd has a preconfigured list of valid programs to automatically call when doing -t (e.g. /sbin/mount_$t)
18:00 gnulnx So I had to add -o mountprog=/sbin/mount_glusterfs
18:00 dnunez joined #gluster
18:01 jiffin joined #gluster
18:02 JoeJulian @learn freebsd as "Use \"-o mountprog=/sbin/mount_glusterfs\" instead of -t."
18:02 glusterbot JoeJulian: The operation succeeded.
18:02 JoeJulian @freebsd
18:02 glusterbot JoeJulian: Use "-o mountprog=/sbin/mount_glusterfs" instead of -t.
18:02 hchiramm joined #gluster
18:04 gnulnx JoeJulian: Do I have permission to edit that, or is there a restricted list?
18:05 JoeJulian Feel free
18:05 JoeJulian factoids are for everybody
18:05 Pupeno joined #gluster
18:05 julim joined #gluster
18:08 gnulnx @learn freebsd as "Mount gluster client: Use \"mount_glusterfs\" instead of \"mount -t glusterfs\".  gluster client in fstab example: \"192.168.110.1:/ftp /media/ glusterfs rw,mountprog=/sbin/mount_glusterfs,late 0 0\""
18:08 glusterbot gnulnx: The operation succeeded.
18:08 gnulnx @freebsd
18:08 glusterbot gnulnx: (#1) Use "-o mountprog=/sbin/mount_glusterfs" instead of -t., or (#2) Mount gluster client: Use "mount_glusterfs" instead of "mount -t glusterfs".  gluster client in fstab example: "192.168.110.1:/ftp /media/ glusterfs rw,mountprog=/sbin/mount_glusterfs,late 0 0"
18:09 Pupeno_ joined #gluster
18:10 pramsky JoeJulian, thats what I did, I mounted the glusterfs on one node and then wrote to it
18:11 msvbhat joined #gluster
18:12 JoeJulian Cool.
18:12 ahino joined #gluster
18:12 rafi1 joined #gluster
18:13 JoeJulian Which also means you could have theoretically done "mount -o mountprog=/sbin/mount_glusterfs 192.168.110.1:ftp /media"
18:16 side_control JoeJulian: just wanna say thanks for all your informative posts/blogs, its certainly helpful
18:16 JoeJulian You're welcome. Thanks for the feedback.
18:18 Gnomethrower joined #gluster
18:21 jiffin1 joined #gluster
18:23 rafi joined #gluster
18:29 side_control post-factum: i figured out what was happening....
18:29 side_control post-factum: one node, infiniband is dropping, and more files go under healing
18:30 side_control https://paste.fedoraproject.org/404490/
18:30 glusterbot Title: #404490 Fedora Project Pastebin (at paste.fedoraproject.org)
18:49 mrErikss1n joined #gluster
18:49 yosafbridge` joined #gluster
18:50 rossdm_ joined #gluster
18:50 shruti` joined #gluster
18:50 cvstealt1 joined #gluster
18:50 aj__ joined #gluster
18:50 d-fence__ joined #gluster
18:50 sac` joined #gluster
18:50 csaba1 joined #gluster
18:50 Dave__ joined #gluster
18:50 glustin joined #gluster
18:50 Nebraskka joined #gluster
18:50 lh joined #gluster
18:51 Nebraskka joined #gluster
18:53 om joined #gluster
19:13 deniszh joined #gluster
19:17 robb_nl joined #gluster
19:17 social joined #gluster
19:25 rafi joined #gluster
19:25 Pupeno joined #gluster
19:25 Pupeno joined #gluster
19:27 B21956 joined #gluster
19:46 Siavash joined #gluster
19:47 shruti` joined #gluster
19:47 sac` joined #gluster
19:47 csaba1 joined #gluster
19:47 glustin joined #gluster
19:52 gluco joined #gluster
20:16 aj__ joined #gluster
20:35 kpease joined #gluster
20:37 kpease joined #gluster
20:59 Jacob843 joined #gluster
21:09 repnzscasb joined #gluster
21:10 johnmilton joined #gluster
21:14 deniszh joined #gluster
21:20 Acinonyx joined #gluster
21:23 hackman joined #gluster
21:44 deniszh joined #gluster
21:51 wadeholler joined #gluster
21:59 kpease joined #gluster
22:09 deniszh joined #gluster
22:12 deniszh joined #gluster
22:19 julim joined #gluster
22:28 madm1ke I just simulated a completely failed node in a dispersed volume (2+1) with gluster 3.7. Now I cannot detach the old node (same name) because the volume contains a brick and I cannot replace-brick because the new node is not yet part of the cluster. How is that supposed to be handled ?
22:50 madm1ke replacing the new UUID with the old one in /var/lib/glusterd/glusterd.info seemed to have done the trick /o\
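
Spelled out, the trick described above looks roughly like this (a sketch; the UUID placeholder is whatever the surviving nodes still list for the failed hostname, and the service name is glusterfs-server on Debian/Ubuntu):

    # on a surviving node: note the UUID expected for the dead hostname
    gluster peer status

    # on the rebuilt node (same hostname): stop glusterd and reuse that UUID
    service glusterd stop
    sed -i 's/^UUID=.*/UUID=<old-uuid-from-peer-status>/' /var/lib/glusterd/glusterd.info
    service glusterd start

    # it may also be necessary to probe a surviving node so volume configs sync back,
    # then trigger healing of the new empty bricks
    gluster peer probe <surviving-node>
    gluster volume heal <volname> full
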
22:56 bkolden joined #gluster
23:33 pramsky JoeJulian, maybe i missed your response earlier during the net split, but i wrote files to the mounted gluster volume on a replicated 2 node cluster, one node shows the brick's disk usage as the same as the volume, the other does not. I am wondering if this is expected behavior. Basically on node1, gluster volume and brick usage are 100GB, on node 2 the gluster volume is 100GB but the brick itself is 400MB
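
A couple of checks that may narrow down the brick-usage mismatch described above (volume name is a placeholder): whether the second brick is online and whether heals are still pending towards it.

    gluster volume status myvol detail   # per-brick online state and disk usage
    gluster volume heal myvol info       # entries still waiting to be replicated
    gluster peer status                  # confirm both nodes see each other as connected
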
