
IRC log for #gluster, 2017-03-13


All times shown according to UTC.

Time Nick Message
00:16 kramdoss_ joined #gluster
01:40 alvinstarr joined #gluster
02:11 d0nn1e joined #gluster
02:19 shdeng joined #gluster
02:25 derjohn_mob joined #gluster
02:29 atm0s joined #gluster
02:33 gyadav joined #gluster
02:50 atm0s joined #gluster
03:03 Gambit15 joined #gluster
03:07 kramdoss_ joined #gluster
03:26 apandey joined #gluster
03:26 prasanth joined #gluster
03:30 atinm_ joined #gluster
03:40 koma joined #gluster
03:45 nbalacha joined #gluster
03:52 dominicpg joined #gluster
04:02 ashiq joined #gluster
04:12 gyadav joined #gluster
04:22 Shu6h3ndu joined #gluster
04:24 ashiq joined #gluster
04:40 itisravi joined #gluster
04:41 kdhananjay joined #gluster
04:45 jiffin joined #gluster
04:46 apandey joined #gluster
04:50 RameshN joined #gluster
04:54 Wizek_ joined #gluster
04:55 karthik_us joined #gluster
05:00 ankitr joined #gluster
05:02 buvanesh_kumar joined #gluster
05:08 RameshN joined #gluster
05:18 ndarshan joined #gluster
05:19 apandey joined #gluster
05:23 sona joined #gluster
05:24 apandey joined #gluster
05:25 matt_ joined #gluster
05:30 sanoj joined #gluster
05:38 ShwethaHP joined #gluster
05:39 riyas joined #gluster
05:40 rafi joined #gluster
05:42 Prasad joined #gluster
05:54 Saravanakmr joined #gluster
05:55 ankitr joined #gluster
06:03 susant joined #gluster
06:06 rafi1 joined #gluster
06:13 jiffin joined #gluster
06:22 susant joined #gluster
06:27 major boom
06:27 major $ sudo gluster snapshot create testvol0-2017031202 testvol0
06:27 major snapshot create: success: Snap testvol0-2017031202_GMT-2017.03.13-06.26.42 created successfully
06:27 major can create btrfs snapshots ..
06:27 major now just need to be able to bloody delete them
06:30 major /dev/sda1 /run/gluster/snaps/8cffb85cda654800a76e345cff283141/brick3 btrfs rw,relatime,space_cache,subvolid=290,subvol=/@8cffb85cda654800a76e345cff283141_0 0 0
06:31 derjohn_mob joined #gluster
06:35 rafi major: awesome work
06:35 atinmu joined #gluster
06:35 rafi major++
06:35 glusterbot rafi: major's karma is now 2
06:35 major glusterd_btrfs_snapshot_remove() is still borked .. it is based on my prior bad understanding of the whole process
06:36 major hoping to fix it before I pass out at the keyboard
06:37 rafi major: :)
06:38 rafi major: let me know if you need any help to understand the flow or any piece of code
06:39 jiffin major: are you planning to collaborate with sriram's https://review.gluster.org/#/q/status:open+project:glusterfs+branch:master+topic:bug-1377437
06:39 glusterbot Title: Gerrit Code Review (at review.gluster.org)
06:41 major jiffin, my first commit is based on a git branch of his that I found .. but the code was .. sort of mangled
06:41 major I split it out into LVM cleanups and left the ZFS portion on the side as .. at least the code I found .. was incomplete and generally not usable
06:41 major still .. my first commit is based on what I found of theirs
06:43 major jiffin, yah .. I think that URL is closely related to what I found in the mailing lists and on github
06:44 major anyway .. mostly all I was trying to do was cleanly split out the LVM code into nice neat little compartments .. got it down to 4 functions
06:44 major or 5 .. I forget
06:44 major it has been a long weekend
06:48 ashiq joined #gluster
06:51 karthik_us joined #gluster
06:54 mb_ joined #gluster
06:58 major ouch .. no way to find the brick number from the brickinfo?
07:01 dominicpg joined #gluster
07:03 rastar joined #gluster
07:07 karthik_us joined #gluster
07:08 mb_ joined #gluster
07:08 om2 joined #gluster
07:10 jiffin major: what u mean by brick no?
07:10 level7 joined #gluster
07:10 major the brickcount I suppose
07:11 major I had to use it for the subvol name when dealing with bricks for the same volume on the same server
07:11 major I suppose I can parse it out of the mnt_opts for that brick
07:13 jiffin major: it should be part of volinfo I guess
07:13 jkroon joined #gluster
07:14 mhulsman joined #gluster
07:17 major yah .. I dunno if the data I used was consistent .. I think I am likely reinventing a solution to a similar problem that was already solved ...
07:17 major just gonna parse it out of the mnt_opts with a big fat note that it needs to be revisited..
07:24 jtux joined #gluster
07:24 major actually .. I don't need to bother ..
07:25 major all I need is a uniq name ..
07:25 major pretty certain there is function for what I need..
07:26 ankush joined #gluster
07:29 major hurm .. what generates the snapname?
07:35 major hah
07:36 Philambdo joined #gluster
07:37 [diablo] joined #gluster
07:37 p7mo joined #gluster
07:43 [diablo] joined #gluster
07:46 rafi1 joined #gluster
07:53 Philambdo joined #gluster
08:15 ankush joined #gluster
08:16 Humble joined #gluster
08:19 atinm joined #gluster
08:23 mb_ joined #gluster
08:33 ankush joined #gluster
08:35 prasanth joined #gluster
08:35 ankitr joined #gluster
08:37 Seth_Karlo joined #gluster
08:41 aravindavk joined #gluster
08:41 mbukatov joined #gluster
09:00 ankitr joined #gluster
09:04 ankitr joined #gluster
09:12 jwd joined #gluster
09:20 major woot!
09:20 major luser@node0:~$ sudo gluster snapshot create testvol0-2017031205 testvol0
09:20 major snapshot create: success: Snap testvol0-2017031205_GMT-2017.03.13-09.19.40 created successfully
09:20 major luser@node0:~$ grep 'gluster/snap' /proc/mounts
09:20 major /dev/sda1 /run/gluster/snaps/fbb65d51978941ffa1bce9885b4efe02/brick3 btrfs rw,relatime,space_cache,subvolid=294,subvol=/@fbb65d51978941ffa1bce9885b4efe02_0 0 0
09:20 major luser@node0:~$ sudo gluster snapshot delete testvol0-2017031205_GMT-2017.03.13-09.19.40
09:20 major Deleting snap will erase all the information about the snap. Do you still want to continue? (y/n) y
09:20 major snapshot delete: testvol0-2017031205_GMT-2017.03.13-09.19.40: snap removed successfully
09:20 major luser@node0:~$
09:20 major and now I can sleep
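For anyone following along, a rough sketch of the btrfs plumbing a backend like this presumably wraps; the uuid is taken from the session above, while /data and /data/brick3 are illustrative stand-ins for the filesystem's top-level mount and the brick path, not the exact paths glusterd generates:

    # take a snapshot of the brick's subvolume (named after the snapshot's uuid)
    sudo btrfs subvolume snapshot /data/brick3 /data/@fbb65d51978941ffa1bce9885b4efe02_0
    # mount it where glusterd expects the snapshot brick to live
    sudo mkdir -p /run/gluster/snaps/fbb65d51978941ffa1bce9885b4efe02/brick3
    sudo mount -o subvol=/@fbb65d51978941ffa1bce9885b4efe02_0 /dev/sda1 /run/gluster/snaps/fbb65d51978941ffa1bce9885b4efe02/brick3
    # deleting the snapshot later: unmount the snap brick, then drop the subvolume
    sudo umount /run/gluster/snaps/fbb65d51978941ffa1bce9885b4efe02/brick3
    sudo btrfs subvolume delete /data/@fbb65d51978941ffa1bce9885b4efe02_0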
09:27 derjohn_mob joined #gluster
09:35 samppah_ major: nice :O
09:35 Seth_Karlo joined #gluster
09:36 Seth_Karlo joined #gluster
09:37 Seth_Kar_ joined #gluster
09:39 Ashutto joined #gluster
09:43 xiu joined #gluster
09:44 xiu hi, is it possible to do some uid/gid mapping on the client side using mount.glusterfs? I have a user on multiple servers that needs access to the volume but they all have a different uid/gid
09:51 ashiq joined #gluster
09:53 Ashutto Hello, i have a strange issue with my replicated distributed volume. Many files are "duplicated" (e.g. they appear 2 times in the directory: one has non-zero content, the other has a ------T permission and 0 bytes). I have to delete the files from each brick (clearing the hardlinks too) in order to fix this issue. Directories containing such files are undeletable (rm -rf) with the error "Directory not empty", even when trying to delete the directory multiple times.
09:53 Ashutto I've created some nopaste (https://nopaste.me/view/1ee13a63 https://nopaste.me/view/80ac1e13 https://nopaste.me/view/eafb0b44) containing the full getfattr of the directory and its content plus the volume status and an ls -l with client debug log active. Do you have any idea?
09:53 glusterbot Ashutto: ----'s karma is now -5
09:53 glusterbot Title: LS Debug log - Nopaste.me (at nopaste.me)
09:54 Ashutto why -5 ?
09:54 Ashutto @help karma
09:54 glusterbot Ashutto: (karma [<channel>] [<thing> ...]) -- Returns the karma of <thing>. If <thing> is not given, returns the top N karmas, where N is determined by the config variable supybot.plugins.Karma.rankingDisplay. If one <thing> is given, returns the details of its karma; if more than one <thing> is given, returns the total karma of each of the things. <channel> is only necessary if the message
09:54 glusterbot Ashutto: isn't sent on the channel itself.
09:55 ndevos Ashutto: nobody likes ----
09:55 glusterbot ndevos: --'s karma is now -5
09:55 ndevos or -- for that matter
09:55 major Heh
09:55 Ashutto damn... i didn't know that. What karma implies?
09:56 ndevos things that you dont like get negative karma points, if someone (or thing) is helpful, you can give it positive karma
09:57 major Nothing.. more like a parse error in this case.. hard to believe the karma code even looks at those tokens
09:57 ndevos like major++ is working on making snapshots on btrfs work
09:57 glusterbot ndevos: major's karma is now 3
09:58 major <----- arrows are caught in the parser too
09:58 glusterbot major: <---'s karma is now -1
09:58 Ashutto ok ok, i got the point :) thanks
09:58 ndevos not all arros <=====
09:58 Ashutto any idea about my sad issue? :(
09:58 ndevos s/arros/arrows/
09:58 glusterbot What ndevos meant to say was: An error has occurred and has been logged. Please contact this bot's administrator for more information.
09:58 glusterbot ndevos: Error: I couldn't find a message matching that criteria in my history of 1000 messages.
09:58 major But only left ones... right ones are fine..
09:58 ndevos and the bot is a bit broken too
09:59 major Heh
10:01 ndevos Ashutto: you seem to have an issue with link-to files, they are created for hardlinks, on renames (and maybe more)
10:03 Ashutto yes, it is possible, someone else said that exact same thing
10:03 ndevos Ashutto: it is something related to DHT, and I think I've seen this problem before... maybe someone working on DHT can help you out
10:04 Ashutto who could that be ?
10:04 ndevos unfortunately I do not see any of the usual ones online
10:05 Ashutto :'(
10:05 ndevos you're probably better off sending an email to gluster-users@gluster.org, include details about your environment, versions and such
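For anyone hitting the same symptom, a rough way to spot DHT link-to files on a brick (the brick path is a placeholder, and nothing should be deleted before confirming the real data exists on another subvolume):

    # link-to files are 0-byte entries whose only mode bit is the sticky bit (the ------T ones)
    find /bricks/brick1 -type f -perm 1000 -size 0 -print | while read -r f; do
      # each one carries an xattr naming the DHT subvolume that holds the actual data
      getfattr -n trusted.glusterfs.dht.linkto -e text "$f" 2>/dev/null
    done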
10:05 aardbolreiziger joined #gluster
10:06 glusterbot` joined #gluster
10:06 glusterbot` joined #gluster
10:06 glusterbot` joined #gluster
10:06 glusterbot` joined #gluster
10:06 glusterbot` joined #gluster
10:07 glusterbot` joined #gluster
10:07 glusterbot` joined #gluster
10:07 glusterbot` joined #gluster
10:07 ndevos JoeJulian: uhm, glusterbot is a little misbehaving?
10:07 glusterbot` joined #gluster
10:07 glusterbot` joined #gluster
10:07 koma joined #gluster
10:08 glusterbot` joined #gluster
10:08 17SAAJVPY joined #gluster
10:08 Ashutto already done :D
10:08 gl7sterbot joined #gluster
10:08 major Tomorrow start cleaning all this up and figure out how to add a config dict for a brick for the subvol prefix...
10:08 glusterbot` joined #gluster
10:08 glusterbot` joined #gluster
10:08 glusterbot` joined #gluster
10:08 major Then add zfs
10:08 94KAAFCP7 joined #gluster
10:09 glusterbot` joined #gluster
10:09 glusterbot` joined #gluster
10:09 glusterbot joined #gluster
10:11 glusterbot` joined #gluster
10:11 glusterbot joined #gluster
10:12 glusterbot joined #gluster
10:13 glusterbot joined #gluster
10:15 glusterbot joined #gluster
10:17 glusterbot joined #gluster
10:19 glusterbot joined #gluster
10:21 glusterbot joined #gluster
10:23 glusterbot joined #gluster
10:25 glusterbot joined #gluster
10:25 msvbhat joined #gluster
10:27 glusterbot joined #gluster
10:28 level7 joined #gluster
10:28 glusterbot joined #gluster
10:30 glusterbot joined #gluster
10:31 sanoj joined #gluster
10:31 glusterbot joined #gluster
10:33 glusterbot joined #gluster
10:33 gyadav joined #gluster
10:35 aardbolreiziger joined #gluster
10:35 glusterbot joined #gluster
10:37 glusterbot joined #gluster
10:38 glusterbot joined #gluster
10:40 glusterbot joined #gluster
10:41 glusterbot joined #gluster
10:43 glusterbot joined #gluster
10:45 glusterbot joined #gluster
10:47 glusterbot joined #gluster
10:49 glusterbot joined #gluster
10:49 hybrid512 joined #gluster
10:51 glusterbot joined #gluster
10:52 bfoster joined #gluster
10:53 glusterbot joined #gluster
10:55 glusterbot joined #gluster
10:57 glusterbot joined #gluster
10:58 aardbolreiziger joined #gluster
10:58 glusterbot joined #gluster
11:01 glusterbot joined #gluster
11:03 glusterbot joined #gluster
11:04 glusterbot joined #gluster
11:07 glusterbot joined #gluster
11:08 bjoern_ joined #gluster
11:08 glusterbot joined #gluster
11:09 bjoern_ Hi all,
11:09 glusterbot joined #gluster
11:10 bjoern_ I'm considering using glusterfs on my 2 Debian boxes. With a fresh install I'm OK already. However: I'm planning to use 2 machines in 2 different locations. Bandwidth is quite low. So geo replication is the one I guess...
11:11 glusterbot joined #gluster
11:11 bjoern_ Now I can't find geo replication for download.
11:11 bjoern_ glusterfs 3.10 comes without. right?
11:12 glusterbot joined #gluster
11:12 ashiq joined #gluster
11:13 glusterbot joined #gluster
11:14 glusterbot joined #gluster
11:15 glusterbot joined #gluster
11:16 MrAbaddon joined #gluster
11:16 glusterbot joined #gluster
11:18 glusterbot joined #gluster
11:19 jiffin xiu: currently it is not possible
11:19 jiffin need to maintain same uid/gid for clients and servers
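The usual workaround is to pin the uid/gid yourself on every client and server; a minimal sketch, assuming you are free to pick a number (1500 and the name "shareduser" are arbitrary examples):

    # create the account with the same fixed id on each box
    sudo groupadd -g 1500 shareduser
    sudo useradd -u 1500 -g 1500 -m shareduser
    # where the account already exists with another id, remap it and fix local ownership
    sudo usermod -u 1500 shareduser
    sudo chown -R shareduser: /home/shareduser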
11:20 glusterbot joined #gluster
11:20 cloph bjoern_: geo-replication is builtin, it is not extra download or separate install.
11:21 jiffin major++ superb work
11:21 glusterbot` jiffin: major's karma is now 5
11:21 glusterbot joined #gluster
11:23 glusterbot joined #gluster
11:24 bjoern_ cloph: OK. When I try to follow how-to, I shall invoke something like: "gluster-mountbroker setup"
11:24 bjoern_ or: gluster-georep-sshkey generate
11:24 bjoern_ system cannot find those commands.
11:25 glusterbot joined #gluster
11:26 glusterbot joined #gluster
11:26 cloph dpkg -S gluster-georep-sshkey
11:26 cloph glusterfs-common: /usr/sbin/gluster-georep-sshkey
11:26 cloph gluster docs assume super-user/root privs/path setup...
11:27 skoduri joined #gluster
11:27 glusterbot joined #gluster
11:28 bjoern_ I've got a fresh installed debian + gluster install from apt-sources. glusterfs-common is already newest version...
11:28 bjoern_ ls -1 /usr/sbin/gluster*
11:28 bjoern_ /usr/sbin/gluster
11:28 bjoern_ /usr/sbin/glusterd
11:28 bjoern_ /usr/sbin/glusterfs
11:28 glusterbot joined #gluster
11:28 bjoern_ /usr/sbin/glusterfsd
11:28 bjoern_ * apt-sources from gluster.org project page (SRY)
11:29 cloph gluster 3.5, as shipped by debian itself, is outdated/I wouldn't use that for geo-replication. So what version did you have installed then?
11:29 glusterbot joined #gluster
11:30 bfoster joined #gluster
11:30 glusterbot joined #gluster
11:31 Shu6h3ndu joined #gluster
11:32 bjoern_ debian 8 (latest jessie. fresh; a few days old ) + glusterfs-common, -server, -client from  https://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/jessie/ ...
11:32 glusterbot` Title: Index of /pub/gluster/glusterfs/LATEST/Debian/jessie (at download.gluster.org)
11:32 bjoern_ 3.10
11:32 glusterbot joined #gluster
11:33 cloph and what does dpkg -l glusterfs-common read?
11:33 glusterbot joined #gluster
11:33 bjoern_ ii  glusterfs-common                                              3.10.0-1                            amd64                               GlusterFS common libraries and translator modules
11:34 cloph -2 is current, so try that, I think someone complained about some symlinks being missing in the package, and that's likely fixed with the -2 one..
11:34 glusterbot joined #gluster
11:35 glusterbot joined #gluster
11:36 bjoern_ OK Checking...
11:36 glusterbot_ joined #gluster
11:38 glusterbot joined #gluster
11:38 bjoern_ so it just symlinked to a .py right?
11:38 nishanth joined #gluster
11:38 cloph yes
11:39 bjoern_ gluster-georep-sshkey
11:39 bjoern_ Traceback (most recent call last):
11:39 bjoern_ File "/usr/sbin/gluster-georep-sshkey", line 28, in <module>
11:39 glusterbot joined #gluster
11:39 bjoern_ from prettytable import PrettyTable
11:39 bjoern_ ImportError: No module named prettytable
11:39 cloph poor glusterbot...
11:39 bjoern_ here we are!!! Thank you!
11:39 bjoern_ cloph++
11:39 glusterbot` bjoern_: cloph's karma is now 4
11:39 glusterbot bjoern_: cloph's karma is now 5
11:39 cloph you're welcome :-)
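So on a stock jessie box the missing piece was just the python dependency; roughly, with the master/slave volume and host names below being placeholders and passwordless ssh to the slave assumed already set up:

    # gluster-georep-sshkey is a python script that wants prettytable
    sudo apt-get install python-prettytable
    # then the usual geo-replication bootstrap from the admin guide
    sudo gluster-georep-sshkey generate
    sudo gluster volume geo-replication mastervol slave1::slavevol create push-pem
    sudo gluster volume geo-replication mastervol slave1::slavevol start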
11:39 mb_ joined #gluster
11:40 glusterbot joined #gluster
11:41 glusterbot joined #gluster
11:42 cloph (be aware that geo-replication still has some probs with symlinks in some scenarios, like e.g. https://bugzilla.redhat.com/show_bug.cgi?id=1431081 - and also the initial sync of an existing volume with data that changes a lot is quite slow, much better when there's no backlog to process...)
11:42 glusterbot` Bug 1431081: high, unspecified, ---, bugs, NEW , symlinks trigger faulty geo-replication state (rsnapshot usecase)
11:42 glusterbot Bug 1431081: high, unspecified, ---, bugs, NEW , symlinks trigger faulty geo-replication state (rsnapshot usecase)
11:42 glusterbot glusterbot`: -'s karma is now -362
11:42 glusterbot` glusterbot: -'s karma is now -363
11:42 glusterbot joined #gluster
11:44 arpu joined #gluster
11:44 glusterbot joined #gluster
11:45 glusterbot joined #gluster
11:47 glusterbot joined #gluster
11:47 ankitr joined #gluster
11:49 glusterbot joined #gluster
11:51 glusterbot joined #gluster
11:53 glusterbot joined #gluster
11:55 glusterbot joined #gluster
11:57 glusterbot joined #gluster
11:59 glusterbot joined #gluster
11:59 BR_ cloph: ok Thanks. With low bandwidth: any other proposals? can I create a "mirror" with sync in the background in another way? Another (important) thing is: limiting the bandwidth used. Without a limit it saturates the local internet connection...
12:01 glusterbot joined #gluster
12:02 cloph I'd limit bandwidth in your router...
12:03 glusterbot joined #gluster
12:05 glusterbot joined #gluster
12:06 BR_ good point. let me think about it...
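If shaping in the router turns out to be awkward, geo-replication's rsync can usually be throttled directly; a sketch, assuming the rsync-options config key is available in your release (the names and the 5000 KB/s figure are just examples):

    sudo gluster volume geo-replication mastervol slave1::slavevol config rsync-options "--bwlimit=5000"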
12:07 glusterbot joined #gluster
12:09 glusterbot joined #gluster
12:10 glusterbot joined #gluster
12:10 ankush joined #gluster
12:11 glusterbot joined #gluster
12:13 glusterbot joined #gluster
12:15 glusterbot joined #gluster
12:17 glusterbot joined #gluster
12:17 Can joined #gluster
12:19 glusterbot joined #gluster
12:21 glusterbot joined #gluster
12:23 glusterbot joined #gluster
12:25 glusterbot joined #gluster
12:27 glusterbot joined #gluster
12:28 aardbolreiziger joined #gluster
12:28 glusterbot joined #gluster
12:29 glusterbot joined #gluster
12:30 ashiq joined #gluster
12:30 Can Hi all. I have two clusters that replicate each other's folder. While a file is being replicated from one server to the other, how can I see the replication status?
12:30 Ashutto Is there a way to share the nfs-ganesha filehandle in a two-node cluster? I'm following the guide on readthedocs, but i think it is not aligned to gluster 3.10
12:32 pulli joined #gluster
12:32 glusterbot joined #gluster
12:32 anbehl joined #gluster
12:34 glusterbot joined #gluster
12:35 glusterbot joined #gluster
12:36 cloph Can: can you be more specific as to what you want to see? in geo replication status you can set checkpoints, to see when the current state has been reached. You cannot see what file is being synced at the very same moment though.
12:37 kpease joined #gluster
12:37 glusterbot joined #gluster
12:37 glusterbot joined #gluster
12:38 kpease_ joined #gluster
12:39 glusterbot joined #gluster
12:39 pulli joined #gluster
12:41 glusterbot joined #gluster
12:43 glusterbot joined #gluster
12:45 glusterbot joined #gluster
12:45 Can cloph: I want to see the status of the gluster volume, whether any file synchronization or replication is happening currently. Not geo-replication. I'm a newbie with gluster
12:45 cloph then your terminology is wrong.
12:45 cloph You cannot use replicate between two different clusters.
12:47 cloph a cluster is a set of peers (or servers) that can participate in quorum and volumes, but peers from Cluster "A" won't talk to peers from Cluster "B"
12:47 glusterbot joined #gluster
12:47 cloph and for replica volumes, the writes are done on all nodes simultaneously.
12:48 cloph In case a brick is down, the healing would take place once it is back.
12:48 glusterbot joined #gluster
12:49 glusterbot joined #gluster
12:50 Can It's clearer now, got it, thanks cloph
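For completeness, the commands that show what a replica volume is doing (the volume name "myvol" is a placeholder):

    # entries still waiting to be healed, e.g. after a brick was down
    sudo gluster volume heal myvol info
    # per-brick counts instead of the full list
    sudo gluster volume heal myvol statistics heal-count
    # confirm every brick and self-heal daemon is online
    sudo gluster volume status myvol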
12:51 unclemarc joined #gluster
12:51 glusterbot joined #gluster
12:53 glusterbot joined #gluster
12:56 glusterbot joined #gluster
12:56 pulli joined #gluster
12:57 glusterbot joined #gluster
13:00 glusterbot joined #gluster
13:00 jtux joined #gluster
13:01 jiffin Ashutto: you are right, the nfs-ganesha doc is outdated
13:02 glusterbot joined #gluster
13:02 Ashutto Hi jiffin
13:02 Ashutto is there some place where i can get some updated scratch notes ?
13:02 Ashutto I need to set up a ganesha HA server to proxy our glusterfs
13:03 glusterbot joined #gluster
13:03 Ashutto and i'm a little in trouble given that there are no valid docs :(
13:05 xiu jiffin: ok thanks
13:05 glusterbot joined #gluster
13:06 glusterbot joined #gluster
13:07 aravindavk joined #gluster
13:08 glusterbot joined #gluster
13:09 jiffin Ashutto: is it okay for u if I update doc by end of this week?
13:09 glusterbot joined #gluster
13:09 jiffin Ashutto: i will try to do it ASAP
13:10 Ashutto absolutely. thanks. Can you just give me some pointers to begin ? the ganesha-ha.sh is not working ...
13:11 Ashutto (if you want, i can proofread it :D )
13:11 glusterbot joined #gluster
13:12 jiffin Ashutto: how to do u want to set up ganesha with gluster, with HA or not?
13:12 glusterbot joined #gluster
13:12 Ashutto I need it to be HA
13:13 Ashutto specifically, I have 2 ganesha nfs servers that are NOT part of my main gluster cluster and are supposed to be the proxy for several gluster clusters
13:13 plarsen joined #gluster
13:13 glusterbot joined #gluster
13:14 jiffin Ashutto: do u want to keep whole ganesha cluster out of gluster?
13:14 Ashutto jiffin, that's the idea (as it should participate in multiple clusters)
13:15 Ashutto but i'm pretty open to any option
13:15 glusterbot joined #gluster
13:15 jiffin is it okay for the ganesha nodes to be part of the Trusted storage pool (without a brick process running on them)?
13:15 Ashutto and i'm not afraid of changing the architecture should it be faulty by segind :D
13:15 Ashutto *by design
13:16 jiffin Ashutto: right, the HA integration is tightly coupled with glusterd (not with the bricks)
13:16 glusterbot joined #gluster
13:16 Ashutto jiffin, sounds good to me
13:17 jiffin we are planning to move it entirely out of glusterd in 3.11 (the storhaug project)
13:17 jiffin Ashutto: okay
13:17 glusterbot joined #gluster
13:17 jiffin how many nodes do u have?
13:18 Ashutto 5 nodes in one cluster, 3 in another one
13:18 Ashutto (glusters)
13:18 glusterbot joined #gluster
13:18 Ashutto while I have 2 nodes for ganesha
13:18 jiffin then u have two nodes for ganesha
13:19 jiffin I guess u already have 5node cluster and 3 node cluster running , right?
13:19 Ashutto yes
13:19 glusterbot joined #gluster
13:19 Ashutto glusters are already operating
13:19 jiffin Ashutto: then lets talk about 2 node ganesha
13:19 Ashutto yup
13:20 jiffin install all required packages
13:20 jiffin including ganesha and gluster
13:20 jiffin on those two nodes
13:20 Ashutto done
13:20 glusterbot joined #gluster
13:20 Ashutto same version as the higher of the two clusters
13:21 Ashutto (3.10)
13:21 jiffin follow the prequistes in http://gluster.readthedocs.io/en/latest/Administrator%20Guide/NFS-Ganesha%20GlusterFS%20Integration/
13:21 glusterbot` Title: Configuring NFS-Ganesha server - Gluster Docs (at gluster.readthedocs.io)
13:21 jiffin what abt ganesha version?
13:21 glusterbot Title: Configuring NFS-Ganesha server - Gluster Docs (at gluster.readthedocs.io)
13:21 jiffin 2.4 or 2.3?
13:21 Ashutto jiffin, done.  nfs-ganesha-2.4.3-1.el7.x86_64
13:21 jiffin great
13:21 glusterbot_ joined #gluster
13:21 jiffin Ashutto: did u enable shared storage?
13:22 jiffin it is up and running , right
13:22 Ashutto yes (but i'd like to replace the bricks)
13:22 jiffin shared storage?
13:22 Ashutto yes
13:22 Ashutto actually the shared storage currently sits on local storage
13:22 glusterbot joined #gluster
13:23 Ashutto i have 2 more disks that i'd like to assign as bricks for shared_storage volume
13:23 jiffin Ashutto: okay
13:23 Ashutto may i replace them ?
13:23 jiffin but shared storage works in such a way that bricks are in /var/run/gluster
13:24 glusterbot joined #gluster
13:24 jiffin don't know whether it will impact anything
13:24 jiffin it won't consume a lot of data
13:24 Ashutto ok, let's keep them that way
13:24 jiffin it is mounted on both nodes , right?
13:24 Ashutto (we are dealing with about 4k fops)
13:25 Ashutto yes, correct
13:25 cloph joined #gluster
13:25 glusterbot joined #gluster
13:25 jiffin Ashutto: shared storage is used to store the ganesha configuration
13:26 jiffin (kinda of management part)
13:26 glusterbot joined #gluster
13:26 jiffin it won't affect the actual i/o
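For reference, the shared-storage volume itself is normally switched on with a single, documented option, run once from any node of the trusted pool (nothing custom here):

    sudo gluster volume set all cluster.enable-shared-storage enable
    # it then gets mounted on every node of the pool at /run/gluster/shared_storage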
13:26 Ashutto ok
13:26 rafi1 joined #gluster
13:26 jiffin Ashutto: now create nfs-ganesha directory on shared storage
13:27 jiffin copy ganesha.conf and ganesha-ha.conf to that directory
13:27 glusterbot joined #gluster
13:28 skoduri joined #gluster
13:28 Ashutto the script zeroed my ganesha.conf. i'm recreating that. wait a sec pls
13:28 glusterbot joined #gluster
13:29 shyam joined #gluster
13:29 Ashutto (thanks God we have Puppet...)
13:29 Ashutto created
13:29 glusterbot joined #gluster
13:30 jiffin if possible can also list the packages u have installed?
13:31 jiffin u have created ganesha-ha.conf in "nfs-ganesha" , right?
13:31 Ashutto https://nopaste.me/view/f8732762
13:31 glusterbot joined #gluster
13:31 glusterbot` Title: Packages - Nopaste.me (at nopaste.me)
13:31 jiffin if possible can u share me that file as well
13:31 Ashutto sure
13:31 Ashutto wait
13:31 jiffin looks good
13:32 Ashutto https://nopaste.me/view/8f6fab13
13:32 glusterbot` Title: ganesha.conf - Nopaste.me (at nopaste.me)
13:32 glusterbot joined #gluster
13:32 glusterbot Title: ganesha.conf - Nopaste.me (at nopaste.me)
13:33 jiffin Ashutto: i asked about ganesh-ha.conf
13:33 Ashutto sorry, my bad
13:33 jiffin *ganesha-ha.conf
13:33 glusterbot joined #gluster
13:33 jiffin that's fine
13:34 jiffin IMO it is better to remove export blocks from ganesha.conf for the time being
13:34 jiffin we will come to that later
13:35 Ashutto https://nopaste.me/view/68718365
13:35 glusterbot` Title: ganesha-ha.conf - Nopaste.me (at nopaste.me)
13:35 glusterbot joined #gluster
13:35 glusterbot Title: ganesha-ha.conf - Nopaste.me (at nopaste.me)
13:35 Ashutto all exports has been removed
13:35 jiffin Ashutto: it is a bit confusing
13:36 jiffin HA_CLUSTER_NODES="ganesha01.internal.lan,ganesha02.internal.lan"
13:36 jiffin #
13:36 jiffin # Virtual IPs for each of the nodes specified above.
13:36 jiffin VIP_ganesha01.farm-rcs.it="10.18.2.66"
13:36 jiffin VIP_ganesha02.farm-rcs.it="10.18.2.67"
13:36 glusterbot joined #gluster
13:36 Ashutto ops... i should have censored that
13:36 jiffin both should be same
13:36 jiffin okay
13:36 Ashutto too bad... the names are what you see in the vip
13:37 Ashutto i censored just 2 occurrences...
13:37 glusterbot joined #gluster
13:37 Ashutto they are the same on the file
13:37 jiffin Ashutto: is it fresh set up?
13:37 jiffin did u try to bring up a ganesha cluster here before?
13:37 Ashutto yes
13:37 Ashutto 2 times
13:38 Ashutto we can fresh start if you ask
13:38 jiffin so it is better to perform a cleanup before setting up
13:38 glusterbot joined #gluster
13:38 Ashutto I have no working setup
13:38 jiffin it's okay
13:38 Ashutto ok, how?
13:38 jiffin run the following in both nodes
13:39 TBlaar I'm having serious issues with the gluster repo changing all the time....
13:39 TBlaar yesterday this worked:
13:39 glusterbot joined #gluster
13:39 squizzi joined #gluster
13:39 jiffin Ashutto: /usr/libexec/ganesha/ganesha-ha.sh --teardown <path to shared storage>/nfs-ganesha
13:40 sona joined #gluster
13:40 glusterbot joined #gluster
13:41 TBlaar so from yesterday to today, getting the key changed from http to forced https
13:41 Ashutto no such option
13:41 jiffin can u also check value for nfs-ganesha in  /var/lib/glusterd/options ?
13:41 Ashutto jiffin, there is no teardown option
13:41 skylar joined #gluster
13:41 glusterbot joined #gluster
13:41 jiffin Ashutto: seriously
13:42 jiffin can u grep for taerdown in that script
13:42 jiffin ?
13:42 jiffin teardown
13:42 Ashutto sorry... my bad
13:42 rafi joined #gluster
13:42 Ashutto done on both nodes.
13:42 glusterbot joined #gluster
13:42 jiffin Ashutto: can u also check value for nfs-ganesha in  /var/lib/glusterd/options
13:42 Ashutto nfs-ganesha=enable in the options file
13:43 jiffin make it to nfs-ganesha=disable and restart glusterd
13:43 jiffin on both node
13:43 jiffin s
13:44 baber joined #gluster
13:44 glusterbot joined #gluster
13:44 jiffin so have enabled firewalld in ur setup?
13:45 unclemarc joined #gluster
13:45 jiffin Ashutto: have u enabled firewalld in ur setup?
13:45 glusterbot joined #gluster
13:45 Ashutto no firewall
13:45 jiffin Ashutto: okay
13:45 jiffin did glusterd restart worked?
13:45 Ashutto glusterd restarted
13:46 Ashutto up and running
13:46 jiffin Ashutto: cool
13:46 jiffin run gluster nfs-ganesha enable
13:46 Ashutto (nodes are not yet part of the trusted pool)
13:46 glusterbot joined #gluster
13:46 jiffin u mean these nodes
13:46 Ashutto do i need to attach to the trusted pool they will be proxying from ?
13:46 jiffin ?
13:46 buvanesh_kumar joined #gluster
13:46 jiffin no I am thinnking of another way
13:47 Ashutto nodes are in "their" trusted pool (they know each other)
13:47 jiffin did u create TSP with these two nodes
13:47 jiffin ?
13:47 Ashutto yes
13:47 glusterbot joined #gluster
13:47 jiffin okay great
13:48 jiffin then run gluster nfs-ganesha enable
13:48 glusterbot joined #gluster
13:48 jiffin i will be back in 2mins
13:49 MikeLupe joined #gluster
13:49 Ashutto fail
13:49 Ashutto waiting :)
13:50 glusterbot joined #gluster
13:50 Shu6h3ndu joined #gluster
13:52 glusterbot joined #gluster
13:53 jiffin Ashutto: failed?
13:53 glusterbot joined #gluster
13:53 Seth_Karlo joined #gluster
13:53 Ashutto correct
13:53 Ashutto reenabling now
13:53 jiffin Ashutto: no
13:53 jiffin wai
13:53 jiffin t
13:53 Ashutto ok
13:53 jiffin lets check the errors
13:54 jiffin can u check /var/log/messages and grep for "pcs cluster"
13:54 glusterbot joined #gluster
13:54 Shu6h3ndu_ joined #gluster
13:55 Ashutto there is not much information...
13:56 glusterbot joined #gluster
13:56 nbalacha joined #gluster
13:58 glusterbot joined #gluster
13:58 Ashutto thanks jiffin, running it manually resulted in "success"
13:58 jiffin Ashutto: hmm that's a bit weird
13:59 Ashutto totally agree
13:59 jiffin via the script it failed, manually it worked
13:59 glusterbot joined #gluster
13:59 jiffin can u run the teardown command again
13:59 Ashutto sure
13:59 jiffin on both nodes
14:00 jiffin then try following
14:00 jiffin pcs cluster auth <server1> <server2>
14:00 Ashutto done
14:00 glusterbot joined #gluster
14:01 jiffin pcs cluster setup  --name ganesha-ha --transport udpu <server1> <server2>
14:01 jiffin --name <ha name>
14:01 glusterbot joined #gluster
14:02 Ashutto success
14:02 jiffin okay then perform teardown on both nodes
14:02 glusterbot joined #gluster
14:03 Ashutto done
14:03 jiffin then lets retry with the cli
14:03 jiffin gluster nfs-ganesha enable
14:04 Ashutto it is working... it hung on "please wait.."
14:04 Ashutto :D
14:04 glusterbot joined #gluster
14:04 sanoj joined #gluster
14:04 jiffin okay let it complete
14:04 Ashutto sur
14:04 Ashutto surr
14:04 Ashutto sure
14:04 Ashutto success
14:04 jiffin \o/
14:04 Ashutto jiffin, next coffe is on me :D
14:04 jiffin check pcs status and make sure cluster is running fine
14:04 Ashutto it seems so
14:05 jiffin Ashutto: sure :D
14:05 glusterbot joined #gluster
14:05 Ashutto ok, now i'm a little confused.
14:05 jiffin now the second part is a bit tricky since ur ganesha cluster is away from the volume cluster
14:05 Ashutto 1) why 1 vip for each node?
14:06 jiffin Ashutto: do u mean assigning multiple vips to each node?
14:06 glusterbot joined #gluster
14:07 Humble joined #gluster
14:07 Ashutto we have 2 nodes, each of them has an ip plus a vip (so we have 2 vip-s for this service)
14:07 Ashutto so... what should i use on clients?
14:07 jiffin Ashutto: okay
14:07 jiffin u should use vips
14:07 jtux joined #gluster
14:07 jiffin only then will failover work
14:07 Ashutto both?
14:07 glusterbot joined #gluster
14:07 jiffin for client
14:08 jiffin can u mount on the client with any of the vips
14:08 jiffin ?
14:08 Ashutto i mean... do i have to hand out the node ips to clients as well?
14:08 jiffin no, the ips are hidden from the clients
14:08 glusterbot joined #gluster
14:08 jiffin we only need to give out the vips
14:08 Ashutto there are no policy restriction, but we haven't exported a volume
14:09 jiffin Ashutto: there are no policy restrictions
14:09 Ashutto ok, i mean... every client should use a vip. there are 2 vip, so i should distribute them evenly...right?
14:09 Ashutto like, vip1 to even clients, vip2 to odds clients
14:09 glusterbot joined #gluster
14:09 jiffin Ashutto: yes that's a better way
14:10 Ashutto ok, nice
14:10 Ashutto let's continue :D
14:10 jiffin but there is no restrictions from our side
14:10 Ashutto yes
14:10 Ashutto i was referring to my network part :D
14:10 jiffin Ashutto: cool
14:10 jiffin now we need to export a volume
14:10 glusterbot joined #gluster
14:10 Ashutto yes
14:11 jiffin which might be trickier in ur case
14:11 jiffin we have a ganesha cluster and another two clusters with volumes running on it
14:11 Ashutto correct
14:12 jiffin first of all u need to turn on features.cache-invalidation for volumes which u want to export
14:12 glusterbot joined #gluster
14:12 Ashutto already enabled on one cluster
14:12 jiffin it's a volume specific option
14:12 Ashutto yes
14:12 jiffin so u need to enable it on volumes which u want to export
14:13 jiffin okay
14:13 Ashutto i mean.. one cluster has this enabled on all volumes
14:13 jiffin okay
14:13 glusterbot joined #gluster
14:13 Ashutto second one is in production, it will not be easy to enable it now
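The option being discussed is a one-liner per volume, run against the storage cluster that hosts it ("myvol" is a placeholder):

    sudo gluster volume set myvol features.cache-invalidation on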
14:13 jiffin nfs-ganesha service is running on both nodes
14:13 jiffin right?
14:13 jiffin Ashutto: okay np
14:14 Ashutto correct
14:14 Ashutto up & running
14:14 Ashutto on both nodes
14:14 glusterbot joined #gluster
14:14 jiffin run the following
14:15 jiffin Ashutto: /usr/libexec/ganesha/create-export-ganesha.sh <path to shared storage>/nfs-ganesha on <volume name>
14:15 glusterbot joined #gluster
14:16 jiffin now a file will be created like <path to shared storage>/nfs-ganesha/export/export.<volume name>.conf
14:16 Ashutto what format does "volume name" have ?
14:16 jiffin the actual name of the volume
14:16 Ashutto ok
14:16 jiffin that's it
14:16 glusterbot joined #gluster
14:17 jiffin perform it only in one node
14:17 Ashutto no new files exists
14:17 glusterbot joined #gluster
14:17 jiffin sorry
14:18 jiffin <path to shared storage>/nfs-ganesha/exports/export.<volume name>.conf
14:18 Ashutto command exited with 0
14:18 jiffin oh
14:18 Ashutto find /run/gluster/shared_storage/nfs-ganesha/ -type f
14:18 glusterbot joined #gluster
14:19 jiffin is selinux enabled on ur setup?
14:19 Ashutto negative
14:20 glusterbot joined #gluster
14:20 Ashutto Sorry
14:20 Ashutto i forgot the "on"
14:20 jiffin Ashutto: no issues
14:20 Ashutto now a file exists
14:20 jiffin now edit following in that
14:21 jiffin check for hostname="localhost";
14:21 Ashutto ok. i have to change only "hostname" on the fsal section
14:21 Ashutto perfect
14:21 glusterbot joined #gluster
14:21 jiffin Yup
14:21 Ashutto done
14:22 jiffin Ashutto: now perform following in both nodes
14:22 Ashutto /usr/libexec/ganesha/ganesha-ha.sh --refresh-config ?
14:22 glusterbot joined #gluster
14:22 jiffin Ashutto: not yet
14:22 Ashutto ok
14:22 jiffin Ashutto: /usr/libexec/ganesha/dbus-send.sh <path to shared storage>/nfs-ganesha on <volume name>
14:23 jiffin on both nodes
14:23 Ashutto done
14:23 glusterbot joined #gluster
14:23 kramdoss_ joined #gluster
14:23 jiffin do showmount -e ?
14:24 jiffin whether it lists the volume?
14:24 Ashutto no :(
14:24 jiffin okay
14:25 jiffin Ashutto: can u check /var/log/ganesha.log and /var/log/ganesha-gfapi.log
14:25 glusterbot joined #gluster
14:25 Ashutto last info on ganesha.log is the glusterfs_create_export :FSAL :EVENT :Volume
14:26 Ashutto ok, unable to connect to the gluster server where i'm supposed to proxy from
14:26 Ashutto checking my side
14:26 glusterbot joined #gluster
14:26 jiffin is native fuse mount works?
14:26 jiffin on this node?
14:27 Ashutto yes, i'm serving content from that at the moment
14:27 jiffin with same hostname
14:27 Ashutto it seems a network problem
14:27 jiffin okay
14:27 Ashutto just a second
14:27 glusterbot joined #gluster
14:28 * jiffin needs to leave in 15mins
14:28 glusterbot_ joined #gluster
14:29 jiffin Ashutto++ for trying ganesha in 3.10(even without proper documentation :))
14:29 glusterbot jiffin: Ashutto's karma is now 1
14:29 glusterbot` jiffin: Ashutto's karma is now 2
14:29 Ashutto :D
14:29 glusterbot joined #gluster
14:29 skylar joined #gluster
14:30 Ashutto connection is enabled now
14:30 jiffin okay
14:30 glusterbot joined #gluster
14:30 jiffin tryagain?
14:30 jiffin same command on both nodes
14:31 Ashutto yeah!
14:31 Ashutto it exports it!
14:31 jiffin Ashutto: there u go
14:31 MikeLupe2 joined #gluster
14:31 glusterbot joined #gluster
14:32 jiffin now one important thing to be noted
14:32 * Ashutto is listening carefully
14:32 jiffin since the ganesha cluster is away from the original clusters
14:33 jiffin u need to unexport the volume before volume stop
14:33 glusterbot joined #gluster
14:33 jiffin i mean before stopping it
14:33 Ashutto i think there will be NO volume stop in a veeeeeery long time
14:33 Ashutto like forever...
14:33 jiffin Ashutto:okay then. ur showmount -e lists volume
14:33 jiffin right?
14:34 Ashutto correct
14:34 jiffin Ashutto: best of luck with ur cluster
14:34 MikeLupe joined #gluster
14:34 glusterbot joined #gluster
14:34 Ashutto Thanks :)
14:34 Ashutto i'm going to test it now
14:34 jiffin Ashutto: bye
14:34 Ashutto i hope it will function well :)
14:34 Ashutto thanks for your kindness
14:34 Ashutto best wishes
14:35 susant left #gluster
14:35 glusterbot joined #gluster
14:36 squizzi joined #gluster
14:36 glusterbot joined #gluster
14:38 glusterbot joined #gluster
14:39 glusterbot joined #gluster
14:39 Ashutto jiffin++ for helping with "live" documentation
14:39 glusterbot` Ashutto: jiffin's karma is now 5
14:39 glusterbot Ashutto: jiffin's karma is now 6
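Pulling the whole exchange together, the sequence that worked, in one place. Paths and script invocations come straight from the session above; the sed/systemctl lines are an assumption about how to apply the "disable and restart" step on el7, "myvol" stands for the exported volume, and gluster01.example.com stands in for a reachable storage node:

    SHARED=/run/gluster/shared_storage        # shared-storage mount point
    VOL=myvol                                 # volume to export (placeholder)
    # on both ganesha nodes: tear down any half-finished HA setup
    sudo /usr/libexec/ganesha/ganesha-ha.sh --teardown $SHARED/nfs-ganesha
    # reset the flag glusterd remembers, then restart glusterd (both nodes)
    sudo sed -i 's/^nfs-ganesha=enable/nfs-ganesha=disable/' /var/lib/glusterd/options
    sudo systemctl restart glusterd
    # with ganesha.conf and ganesha-ha.conf copied into $SHARED/nfs-ganesha, bring up the cluster
    sudo gluster nfs-ganesha enable
    sudo pcs status                           # sanity-check pacemaker afterwards
    # export a volume: generate the export block on ONE node...
    sudo /usr/libexec/ganesha/create-export-ganesha.sh $SHARED/nfs-ganesha on $VOL
    # ...point its FSAL at a reachable gluster server instead of localhost...
    sudo sed -i 's/hostname="localhost"/hostname="gluster01.example.com"/' $SHARED/nfs-ganesha/exports/export.$VOL.conf
    # ...then tell the running ganesha about it on BOTH nodes and verify
    sudo /usr/libexec/ganesha/dbus-send.sh $SHARED/nfs-ganesha on $VOL
    showmount -e localhost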
14:42 Seth_Kar_ joined #gluster
14:42 glusterbot joined #gluster
14:44 kramdoss_ joined #gluster
14:44 glusterbot joined #gluster
14:44 farhorizon joined #gluster
14:46 glusterbot joined #gluster
14:48 baber joined #gluster
14:48 glusterbot joined #gluster
14:49 buvanesh_kumar joined #gluster
14:49 glusterbot joined #gluster
14:52 glusterbot_ joined #gluster
14:53 shyam joined #gluster
14:54 glusterbot joined #gluster
14:56 glusterbot joined #gluster
14:57 level7 joined #gluster
14:57 vbellur joined #gluster
14:57 glusterbot joined #gluster
14:59 Seth_Karlo joined #gluster
14:59 glusterbot joined #gluster
14:59 ira joined #gluster
15:00 Seth_Kar_ joined #gluster
15:01 glusterbot joined #gluster
15:04 glusterbot joined #gluster
15:06 gyadav joined #gluster
15:06 glusterbot joined #gluster
15:07 skylar joined #gluster
15:08 glusterbot joined #gluster
15:09 Ashutto Hello again :D what would you monitor in a ganesha proxy ?
15:09 glusterbot joined #gluster
15:10 glusterbot joined #gluster
15:10 wushudoin joined #gluster
15:12 glusterbot joined #gluster
15:14 glusterbot joined #gluster
15:16 glusterbot joined #gluster
15:18 glusterbot joined #gluster
15:20 glusterbot joined #gluster
15:21 glusterbot joined #gluster
15:24 skylar joined #gluster
15:24 glusterbot joined #gluster
15:26 glusterbot joined #gluster
15:28 gyadav joined #gluster
15:28 glusterbot joined #gluster
15:29 glusterbot joined #gluster
15:30 glusterbot joined #gluster
15:31 sage_ joined #gluster
15:31 glusterbot joined #gluster
15:32 Jacob843 joined #gluster
15:34 glusterbot joined #gluster
15:36 glusterbot joined #gluster
15:38 glusterbot joined #gluster
15:39 ankitr joined #gluster
15:40 glusterbot joined #gluster
15:42 glusterbot_ joined #gluster
15:44 Seth_Karlo joined #gluster
15:44 glusterbot joined #gluster
15:44 shyam joined #gluster
15:46 glusterbot joined #gluster
15:46 baber joined #gluster
15:47 glusterbot joined #gluster
15:49 glusterbot joined #gluster
15:50 glusterbot joined #gluster
15:52 glusterbot joined #gluster
15:53 Seth_Karlo joined #gluster
15:54 mb_ joined #gluster
15:54 glusterbot joined #gluster
15:54 Seth_Kar_ joined #gluster
15:55 MrAbaddon joined #gluster
15:55 glusterbot joined #gluster
15:55 Seth_Kar_ joined #gluster
15:56 glusterbot joined #gluster
15:57 nimda_ joined #gluster
15:57 d0nn1e joined #gluster
15:57 glusterbot joined #gluster
16:00 nimda_ Hi, I have an odd problem with gluster. I have a 1x3 replica volume. This command doesn't work: gluster volume heal images info heal-failed. It prints this message: Gathering list of heal failed entries on volume images has been unsuccessful on bricks that are down. Please check if all brick processes are running.
16:00 glusterbot joined #gluster
16:00 nimda_ All the nodes and bricks are up
16:01 glusterbot joined #gluster
16:04 glusterbot joined #gluster
16:06 glusterbot joined #gluster
16:08 glusterbot joined #gluster
16:10 glusterbot joined #gluster
16:12 mk-fg Hm, says same thing for my setup here, even though all bricks are up as well ¯\_(ツ)_/¯
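If every brick really is up, the heal-failed query may simply be one of the sub-commands that later releases deprecated; these related queries tend to answer the underlying question instead (volume name taken from the message above, and this is a suggestion rather than a root-cause fix):

    sudo gluster volume heal images info split-brain
    sudo gluster volume heal images statistics
    # check that glusterd really sees the self-heal daemon as online on every node
    sudo gluster volume status images shd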
16:12 glusterbot joined #gluster
16:14 glusterbot joined #gluster
16:15 aravindavk joined #gluster
16:15 glusterbot joined #gluster
16:16 shyam joined #gluster
16:16 glusterbot joined #gluster
16:18 glusterbot joined #gluster
16:18 Tanner__ joined #gluster
16:20 baber joined #gluster
16:20 glusterbot joined #gluster
16:20 MrAbaddon joined #gluster
16:22 Tanner__ question: I have ~5TB of data on AWS volumes in ca-central-1a. I'm running a 4 node gluster cluster managed by heketi, 4TB on each node, in ca-central-1b. I need to load that ~5 TB onto a gluster volume. So far I just mounted the volumes on an instance and mounted the gluster volume, but it's only transferred ~120GB in 6-7 hours, which would give me a rough ETA of ~9.5 days till this is done
16:22 glusterbot joined #gluster
16:22 Tanner__ Is there a better way I could be doing this? There are a lot of small files on the drives (10-100KB)
16:23 Ashutto welcome to my nightmare, Tanner__ :(
16:23 glusterbot joined #gluster
16:23 Gambit15 joined #gluster
16:23 Ashutto what are you using to transfer your files?
16:24 glusterbot joined #gluster
16:26 glusterbot joined #gluster
16:28 glusterbot joined #gluster
16:30 Tanner__ Ashutto, right now just rsync
16:30 glusterbot joined #gluster
16:30 Ashutto ok, that's your limit
16:30 Tanner__ I was thinking of possibly doing a parallel rsync
16:30 Ashutto exactly
16:31 Ashutto you should divide and conquer
16:31 Tanner__ one thing I was wondering about is if it would be possible to mount the volume once for each node
16:31 Ashutto not useful
16:31 Tanner__ alright
16:31 Ashutto your limit is the tool (rsync), which is not suited to your file sizes
16:31 glusterbot joined #gluster
16:31 Ashutto how many subdir levels?
16:32 Tanner__ ...lots
16:32 Ashutto how many files per dir?
16:32 Tanner__ not so many
16:33 Gambit15 Tanner__, if you've got lots of easily compressible files, you might benefit from gzipping the remote side with the --rsyncable flag
16:33 glusterbot joined #gluster
16:33 Gambit15 rsync would then have fewer blocks to analyse
16:33 gyadav joined #gluster
16:34 glusterbot joined #gluster
16:34 Tanner__ interesting, I'd never seen that flag before
16:34 Ashutto Tanner: find $SRC -type d | parallel --retries 4 -j10 --progress "rsync --update --delete -ad --no-r --protect-args $SRC/{} $DST/{}"
16:35 Gambit15 Might be worth monitoring the interface to see whether rsync's taking ages to send the files, or whether it's transferring slowly
16:35 Ashutto where SRC and DST are your source and destination dir and "-j10" should be adapted to your parallel level (10 is used in a 1vcpu environment, for instance)
16:36 glusterbot joined #gluster
16:37 glusterbot joined #gluster
16:38 Tanner__ I'll try this out, thanks
16:39 level7_ joined #gluster
16:39 glusterbot joined #gluster
16:39 msvbhat joined #gluster
16:40 Tanner__ bandwidth over eth0 seems to hover between 2-6MB/s
16:40 glusterbot joined #gluster
16:41 Gambit15 Done an iperf between the servers?
16:41 Gambit15 And check load
16:42 Tanner__ load is very low, 1.0
16:42 glusterbot joined #gluster
16:42 Tanner__ haven't checked iperf but the machine with the data is on 10Gb ethernet and the gluster nodes are on 1Gb
16:43 Tanner__ dd writing 4k blocksize was getting ~50MB/s between them
16:43 glusterbot joined #gluster
16:43 Gambit15 Just to make certain there's no bottleneck in the route to AWS
16:44 glusterbot joined #gluster
16:45 Tanner__ I've never used iperf before, which test should I run?
16:46 Gambit15 IIRC, it's just "iperf -s" to run the server on one side, and "iperf -c $IP" to run the client on the other side
16:46 glusterbot joined #gluster
16:46 Gambit15 Super simple
16:48 jiffin joined #gluster
16:48 glusterbot joined #gluster
16:49 msvbhat joined #gluster
16:49 glusterbot joined #gluster
16:49 Tanner__ ok, trying that, just need to open the security group
16:51 Tanner__ 88 MB/s
16:51 glusterbot joined #gluster
16:54 glusterbot joined #gluster
16:55 shyam joined #gluster
16:56 glusterbot joined #gluster
16:58 glusterbot joined #gluster
16:59 ankitr joined #gluster
17:00 glusterbot joined #gluster
17:02 glusterbot joined #gluster
17:04 glusterbot_ joined #gluster
17:07 glusterbot joined #gluster
17:07 sona joined #gluster
17:08 glusterbot joined #gluster
17:10 glusterbot joined #gluster
17:12 psy__ joined #gluster
17:12 psy__ Hi guys
17:12 glusterbot joined #gluster
17:12 psy__ I have one question, which bothers me these days
17:13 psy__ I have Centos5 servers running gluster 3.0.4 all with DHT
17:13 glusterbot joined #gluster
17:13 psy__ what is the recommended upgrade path which I should take ?
17:14 psy__ It looks like if I install some newer version of gluster and create the volumes again, the files are visible, but I am testing only on a small set of test files and not the entire cluster
17:14 psy__ what is the recommended way to upgrade to latest ?
17:14 glusterbot joined #gluster
17:16 baber joined #gluster
17:16 glusterbot joined #gluster
17:18 glusterbot joined #gluster
17:20 glusterbot joined #gluster
17:21 anbehl joined #gluster
17:22 glusterbot joined #gluster
17:25 glusterbot joined #gluster
17:27 glusterbot joined #gluster
17:29 glusterbot joined #gluster
17:30 major okay .. soo .. sleep was good ..
17:30 kramdoss_ joined #gluster
17:30 glusterbot joined #gluster
17:31 major time to cleanup my junk files and see about de-duping my code
17:33 glusterbot joined #gluster
17:33 major also .. de-duping the crossover code between glusterd_btrfs_snapshot_remove() and glusterd_lvm_snapshot_remove() .. which are 90% copy/paste drivel at this point
17:33 major weeeeee
17:33 major and .. more coffee
17:35 glusterbot joined #gluster
17:35 major hurm .. I wonder..
17:35 major JoeJulian, so you saw that btrfs is working? :)
17:36 samikshan joined #gluster
17:36 glusterbot_ joined #gluster
17:36 level7 joined #gluster
17:38 glusterbot joined #gluster
17:38 major I just have some artifacts in the filesystem I need to cleanup and some basic code stuff .. I suppose I should send an email to the mailing list at some point .. but sort of want to cleanup the parts that I totally know are a mess before asking people to add to my existing list...
17:40 rafi joined #gluster
17:40 glusterbot joined #gluster
17:43 glusterbot joined #gluster
17:45 glusterbot joined #gluster
17:47 glusterbot joined #gluster
17:48 rastar joined #gluster
17:48 glusterbot joined #gluster
17:51 glusterbot joined #gluster
17:52 glusterbot joined #gluster
17:53 glusterbot joined #gluster
17:55 Tanner__ joined #gluster
17:55 glusterbot joined #gluster
17:57 glusterbot joined #gluster
17:59 glusterbot joined #gluster
18:01 glusterbot joined #gluster
18:03 glusterbot joined #gluster
18:03 skylar joined #gluster
18:05 glusterbot joined #gluster
18:05 Seth_Karlo joined #gluster
18:07 glusterbot joined #gluster
18:09 glusterbot joined #gluster
18:10 glusterbot_ joined #gluster
18:12 glusterbot joined #gluster
18:13 glusterbot joined #gluster
18:13 ahino joined #gluster
18:13 skylar joined #gluster
18:14 glusterbot joined #gluster
18:15 rastar joined #gluster
18:15 glusterbot joined #gluster
18:15 Seth_Karlo joined #gluster
18:16 Seth_Karlo joined #gluster
18:16 glusterbot joined #gluster
18:18 glusterbot joined #gluster
18:18 rafi joined #gluster
18:19 glusterbot joined #gluster
18:21 glusterbot joined #gluster
18:23 glusterbot joined #gluster
18:23 JoeJulian major: That's awesome!
18:25 glusterbot joined #gluster
18:25 JoeJulian wtf, glusterbot?
18:30 plarsen joined #gluster
18:34 ahino joined #gluster
18:39 glusterbot joined #gluster
18:40 rastar joined #gluster
18:42 ahino joined #gluster
18:55 raghu joined #gluster
18:58 farhorizon joined #gluster
19:22 pcammara joined #gluster
19:27 DV__ joined #gluster
19:30 pcammarata I have some questions about gluster with nfs-ganesha for HA http://lists.gluster.org/pipermail/gluster-users/2017-March/030277.html any help is much appreciated
19:30 glusterbot Title: [Gluster-users] NFS-Ganesha HA reboot (at lists.gluster.org)
19:31 cloph ask the actual question...otherwise noone can tell...
19:32 Jacob843 joined #gluster
19:39 pcammarata You would need to read the mailing list link. It's somewhat long.
19:45 kkeithley cammarata: what you experienced is not normal. When your nodes reboot everything _should_ just come back.
19:47 sysanthrope joined #gluster
19:47 kkeithley If pacemaker and the resource agents are all running, and one node doesn't come back, the VIP for that node is supposed to move to the other, remaining node; you should not need to manually add it to the NIC.
19:49 mhulsman joined #gluster
19:50 anbehl__ joined #gluster
19:52 kkeithley pcammarata: ^^^
19:59 pcammarata When we rebooted we didn't stop any gluster services, just did shutdown -r now. Would that be a problem?
20:04 baber joined #gluster
20:06 kkeithley I think that should work.
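A quick way to see where the failover got stuck after such a reboot, assuming the standard pacemaker/pcs stack that the HA scripts set up:

    # cluster membership, the VIPs and the nfs-ganesha resources should all show as Started
    sudo pcs status
    # clear any failed resource state and let pacemaker retry placement
    sudo pcs resource cleanup
    # the cluster services themselves need to come up at boot for unattended recovery
    sudo systemctl enable pacemaker corosync pcsd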
20:15 mhulsman joined #gluster
20:21 jkroon joined #gluster
20:29 derjohn_mob joined #gluster
20:31 raghu joined #gluster
20:35 ahino joined #gluster
20:41 skylar joined #gluster
20:42 level7 joined #gluster
20:47 baber joined #gluster
20:47 Seth_Kar_ joined #gluster
20:49 rastar joined #gluster
21:09 MrAbaddon joined #gluster
21:11 vbellur1 joined #gluster
21:11 vbellur1 joined #gluster
21:12 vbellur1 joined #gluster
21:13 vbellur1 joined #gluster
21:13 vbellur1 joined #gluster
21:14 vbellur1 joined #gluster
21:23 shyam joined #gluster
21:53 bwerthmann joined #gluster
21:54 bwerthmann joined #gluster
22:04 pioto joined #gluster
22:50 vbellur joined #gluster
23:14 farhorizon joined #gluster
23:14 buvanesh_kumar joined #gluster
