IRC log for #gluster, 2017-04-11

All times shown according to UTC.

Time Nick Message
00:16 major joined #gluster
00:25 serg_k joined #gluster
00:28 kramdoss_ joined #gluster
01:16 baber joined #gluster
01:18 shdeng joined #gluster
01:30 percevalbot joined #gluster
01:36 major seriously .. can we have a way to configure gluster such that the prefixes for stuff are somewhere reasonable? like gluster_statedir="/gluster" or something?
01:36 major as opposed to /var/run/gluster/
01:36 major would kinda like /gluster/{brick,snap,...}
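
A rough sketch of how those prefixes are usually moved at build time with the standard autoconf switches; glusterfs derives most of its runtime and state directories from localstatedir, and /gluster below is just the hypothetical target major asks for:

    ./configure --prefix=/usr --localstatedir=/gluster/var
    make && make install
    # runtime state would then land under /gluster/var/run/gluster
    # and glusterd's working directory under /gluster/var/lib/glusterd
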
01:37 kramdoss_ joined #gluster
01:46 ankitr joined #gluster
01:48 ilbot3 joined #gluster
01:48 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:54 susant joined #gluster
01:57 farhorizon joined #gluster
02:01 major anyone know if there is any way to set/get option keys per-brick ?
02:02 major like .. there is a C interface for doing it .. but I don't see a CLI interface
02:08 * major sighs.
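
For context, the CLI that does exist works at volume scope only; per-brick keys stay behind the C/gfapi interface major mentions. A quick sketch with a hypothetical volume and option:

    gluster volume get myvol all                        # list effective volume-scope options
    gluster volume set myvol performance.cache-size 256MB
    gluster volume reset myvol performance.cache-size   # back to the default
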
02:29 farhorizon joined #gluster
02:33 gyadav joined #gluster
03:03 prasanth joined #gluster
03:04 Gambit15 joined #gluster
03:10 Tanner__ joined #gluster
03:17 riyas joined #gluster
03:28 susant left #gluster
03:34 itisravi joined #gluster
03:40 major is it just me or did the entire discussion regarding the snapshot implementation details sort of just grind to a halt?
03:45 atinm joined #gluster
03:55 skumar joined #gluster
03:59 shdeng joined #gluster
04:03 shdeng joined #gluster
04:10 shdeng joined #gluster
04:11 ashiq joined #gluster
04:20 sona joined #gluster
04:23 gyadav joined #gluster
04:36 susant joined #gluster
04:38 rafi joined #gluster
04:41 kramdoss_ joined #gluster
04:43 susant left #gluster
04:46 buvanesh_kumar joined #gluster
04:47 Shu6h3ndu joined #gluster
04:52 kotreshhr joined #gluster
04:54 jiffin joined #gluster
05:01 kramdoss_ joined #gluster
05:05 ankitr joined #gluster
05:05 Prasad joined #gluster
05:05 skumar_ joined #gluster
05:07 kdhananjay joined #gluster
05:08 kdhananjay joined #gluster
05:13 skoduri joined #gluster
05:13 buvanesh_kumar joined #gluster
05:16 jiffin joined #gluster
05:20 skumar joined #gluster
05:27 amarts joined #gluster
05:27 apandey joined #gluster
05:28 jiffin joined #gluster
05:30 susant joined #gluster
05:30 Philambdo joined #gluster
05:33 apandey_ joined #gluster
05:36 jiffin joined #gluster
05:39 susant left #gluster
05:44 aravindavk joined #gluster
05:44 hgowtham joined #gluster
05:44 msvbhat joined #gluster
05:53 sanoj joined #gluster
05:54 nishanth joined #gluster
05:55 rastar joined #gluster
05:58 sona joined #gluster
05:59 msvbhat joined #gluster
06:01 kdhananjay joined #gluster
06:06 prasanth joined #gluster
06:07 sanoj joined #gluster
06:08 atrius joined #gluster
06:09 mbukatov joined #gluster
06:14 jiffin joined #gluster
06:15 kdhananjay joined #gluster
06:18 derjohn_mob joined #gluster
06:20 kdhananjay left #gluster
06:23 sbulage joined #gluster
06:24 jtux joined #gluster
06:30 Utoxin joined #gluster
06:41 shdeng joined #gluster
06:42 delhage joined #gluster
06:42 kdhananjay joined #gluster
06:42 ayaz joined #gluster
06:49 ankitr joined #gluster
07:03 jkroon joined #gluster
07:08 gyadav joined #gluster
07:15 Karan joined #gluster
07:15 hybrid512 joined #gluster
07:25 fsimonce joined #gluster
07:32 mb_ joined #gluster
07:33 msvbhat joined #gluster
07:36 skumar_ joined #gluster
07:39 devyani7_ joined #gluster
07:47 skumar joined #gluster
07:52 derjohn_mob joined #gluster
07:52 apandey__ joined #gluster
08:00 flying joined #gluster
08:13 jtux joined #gluster
08:13 Asako joined #gluster
08:33 MrAbaddon joined #gluster
08:57 R0ok_ joined #gluster
09:03 ankitr joined #gluster
09:06 ankitr joined #gluster
09:20 jtux left #gluster
09:25 kdhananjay joined #gluster
09:25 ankitr joined #gluster
09:29 Wizek_ joined #gluster
09:43 ankitr joined #gluster
09:44 [diablo] joined #gluster
09:46 [diablo] Morning #gluster
09:54 kotreshhr left #gluster
09:57 ayaz joined #gluster
10:01 kdhananjay joined #gluster
10:07 jtux joined #gluster
10:12 skumar joined #gluster
10:12 ankitr joined #gluster
10:18 morse joined #gluster
10:18 kraynor5b__ joined #gluster
10:21 ankitr joined #gluster
10:22 jtux left #gluster
10:40 ppai joined #gluster
10:43 ashiq joined #gluster
10:43 jwd joined #gluster
11:03 kdhananjay joined #gluster
11:05 ashiq joined #gluster
11:09 nh2 joined #gluster
11:23 kdhananjay joined #gluster
11:25 jiffin joined #gluster
11:33 itisravi joined #gluster
11:35 jiffin joined #gluster
11:35 rafi joined #gluster
11:38 apandey_ joined #gluster
11:43 arpu joined #gluster
12:02 skoduri joined #gluster
12:05 askz joined #gluster
12:08 askz hi, I have two webservers (@scaleway, C2) with nginx+php-fpm and would like to share the data volume between them without external storage servers. 1) is it advised? 2) is it a problem for future scaling-out possibilities?
12:09 askz (with glusterfs ofc)
12:33 gyadav joined #gluster
12:39 baber joined #gluster
12:43 Philambdo joined #gluster
12:43 hgowtham joined #gluster
12:44 rafi joined #gluster
12:44 mb_ joined #gluster
12:45 msvbhat joined #gluster
12:53 kramdoss_ joined #gluster
12:54 unclemarc joined #gluster
13:01 kotreshhr joined #gluster
13:10 msvbhat joined #gluster
13:12 ira joined #gluster
13:16 atinm joined #gluster
13:20 nbalacha joined #gluster
13:20 Prasad joined #gluster
13:27 shyam joined #gluster
13:30 Philambdo joined #gluster
13:31 gyadav_ joined #gluster
13:33 skylar joined #gluster
13:35 rwheeler joined #gluster
13:37 plarsen joined #gluster
13:41 oajs joined #gluster
13:43 buvanesh_kumar joined #gluster
13:44 riyas joined #gluster
13:54 ankitr joined #gluster
13:55 derjohn_mob joined #gluster
13:59 nbalacha joined #gluster
14:02 squizzi joined #gluster
14:10 gyadav_ joined #gluster
14:24 askz hi, I have two webservers (@scaleway, C2) with nginx+php-fpm and would like to share the data volume between them without external storage servers. 1) is it advised? 2) is it a problem for future scaling-out possibilities?
14:24 timotheus1_ joined #gluster
14:24 askz I mean having two replicas on those webservers and mounting the data on the same servers
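
A minimal sketch of the layout askz describes, with web1 and web2 as hypothetical hosts acting as both gluster servers and clients; note that a plain replica 2 volume is prone to split-brain, so an arbiter brick or a third replica is the usual recommendation:

    # on web1, after installing glusterfs-server on both nodes
    gluster peer probe web2
    gluster volume create webdata replica 2 web1:/data/brick/webdata web2:/data/brick/webdata
    # (append "force" if the bricks sit on the root filesystem)
    gluster volume start webdata
    # on each webserver, mount the volume locally for nginx/php-fpm to serve
    mount -t glusterfs localhost:/webdata /var/www/shared
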
14:25 baber joined #gluster
14:41 nathwill joined #gluster
14:44 farhorizon joined #gluster
14:57 bapxiv joined #gluster
14:58 bapxiv In Ubuntu 16.04, I have a gluster server that refuses to start gluster/nfs even after reboot.  Are there instructions on how to manually clean-up the gluster service(s) so it can start normally?
14:59 atinm joined #gluster
15:00 nirokato joined #gluster
15:07 oajs_ joined #gluster
15:08 wushudoin joined #gluster
15:12 msvbhat joined #gluster
15:12 kkeithley bapxiv: what version of gluster? Starting with 3.9 you have to explicitly enable gnfs on a volume
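
For reference, the 3.9+ behaviour kkeithley describes comes down to a per-volume option (volume name hypothetical):

    gluster volume set myvol nfs.disable off    # re-enable gluster/nfs (gnfs) on this volume
    gluster volume status myvol                 # the "NFS Server on ..." rows should report online
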
15:13 bapxiv kkeithley, 3.7
15:13 bapxiv kkeithley, I got it running by restarting rpcbind.
15:15 amarts joined #gluster
15:17 kkeithley you do know that 3.7 is no longer maintained, right?
15:22 askz damn. thanks for pointing that out. debian jessie is still on 3.5 =.=
15:25 farhorizon joined #gluster
15:29 baber joined #gluster
15:29 mallorn We have a 3.10 cluster over 10GigE that's used to host VM images.  Three of our distributed-disperse sets are now in healing mode (11 to 45 files each) and have been hovering there for about a month now.  We have 64 threads on the self-heal daemon.  Any ideas?
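
A few hedged commands for inspecting a heal that sits like that on 3.10 (volume name hypothetical; for distributed-disperse sets the disperse-side option is the one that matters):

    gluster volume heal vmstore info                       # files/gfids still pending heal
    gluster volume get vmstore cluster.shd-max-threads     # the 64-thread setting mentioned above (AFR side)
    gluster volume get vmstore disperse.shd-max-threads    # counterpart for disperse volumes
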
15:36 askz is there any special advice for upgrading from 3.5 to 3.10?
15:41 gyadav joined #gluster
15:43 skoduri joined #gluster
15:43 mallorn The healing process seems to be much slower ever since we upgraded to 3.10 from 3.7 and added multiple threads for the self-heal daemon.
16:00 farhorizon joined #gluster
16:07 vbellur joined #gluster
16:08 Tanner__ joined #gluster
16:09 ankitr joined #gluster
16:10 susant joined #gluster
16:11 susant left #gluster
16:13 buvanesh_kumar joined #gluster
16:14 FreezeS joined #gluster
16:15 timotheus1 joined #gluster
16:15 FreezeS hi guys
16:16 FreezeS I'm trying to set up geo-replication and the sshkey script fails
16:16 FreezeS Commit failed on selimbar. Error: Unable to end. Error : Success\n', 'gluster system:: execute georep-sshkey.py node-generate no-prefix
16:16 FreezeS host "selimbar" is directly accessible through ssh, without password
16:18 FreezeS if I try to ignore that and just create the geo-replication, I get this:
16:18 FreezeS gluster> volume geo-replication videos selimbar::videos_repl create push-pem
16:18 FreezeS selimbar not reachable.
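
For comparison, the documented push-pem flow that the failing georep-sshkey.py step belongs to looks roughly like this, using the same volume names FreezeS is working with:

    # on one master node: generate and distribute the geo-replication ssh keys
    gluster system:: execute gsec_create
    # then create the session and push the pem keys to the slave
    gluster volume geo-replication videos selimbar::videos_repl create push-pem
    gluster volume geo-replication videos selimbar::videos_repl status
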
16:21 squizzi joined #gluster
16:23 cloph and selimbar resolves to a valid IP?
16:25 vbellur joined #gluster
16:26 vbellur joined #gluster
16:26 FreezeS cloph: yes, it's in hosts
16:26 vbellur joined #gluster
16:27 FreezeS root@cen-zm1:/var/log/glusterfs# ping selimbar
16:27 FreezeS PING selimbar (192.168.100.122) 56(84) bytes of data.
16:27 FreezeS 64 bytes from selimbar (192.168.100.122): icmp_seq=1 ttl=62 time=57.1 ms
16:27 vbellur joined #gluster
16:27 vbellur joined #gluster
16:28 vbellur joined #gluster
16:29 FreezeS both hosts are in a VPN, no firewall between them yet
16:33 bapxiv Okay, so gluster/nfs is running but now I can't mount the gluster as it just complains about not finding the volume file.  "failed to get the 'volume file' from server"
16:37 bapxiv My mount command is mount -t glusterfs server:/vol-name /mnt/vol-name
16:40 rafi joined #gluster
16:41 rastar joined #gluster
16:44 prasanth joined #gluster
16:45 rafi joined #gluster
16:50 kotreshhr left #gluster
16:52 Tanner__ FreezeS, any luck? I just hopped on to ask about the exact same thing
16:53 FreezeS Tanner_: no, I've been struggling with this for 2 days
16:53 FreezeS I tried port forwards initially but that's a mess
16:53 Tanner__ :S the documentation is extremely lacking
16:53 FreezeS now trying on VPN to eliminate any connectivity issues
16:54 FreezeS it would be useful to at least give troubleshooting instructions
16:57 gyadav joined #gluster
16:57 Tanner__ FreezeS, I will be working on this today, if I get anywhere I will let you know. These links look helpful: https://www.jamescoyle.net/how-to/1037-synchronise-a-glusterfs-volume-to-a-remote-site-using-geo-replication https://www.digitalocean.com/community/questions/glusterfs-geo-replication
16:57 glusterbot Title: Synchronise a GlusterFS volume to a remote site using geo replication – JamesCoyle.net (at www.jamescoyle.net)
16:58 kotreshhr1 joined #gluster
16:58 kotreshhr1 left #gluster
17:00 FreezeS Tanner_: I have the secret.pem.pub in the authorized_hosts file but it's exactly the same
17:04 bapxiv I'm trying to mount a remote glusterfs drive with "mount -t glusterfs server_name:vol-name /mnt/vol-name" and getting the following error: "mgmt: failed to fetch volume file (key:vol-name)".  This worked until I upgraded the OS to Ubuntu 16.04.  I'm using GlusterFS 3.10
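
A couple of hedged checks for that "failed to fetch volume file" error, reusing the server and volume names from bapxiv's command:

    gluster volume list              # on the server: is vol-name actually defined?
    gluster volume info vol-name
    # retry the mount with client-side logging to see what glusterd answers
    mount -t glusterfs -o log-level=DEBUG,log-file=/var/log/glusterfs/mnt-debug.log \
          server_name:/vol-name /mnt/vol-name
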
17:05 FreezeS Tanner_: now I put the secret.pub in .ssh/config and ssh is using it to connect, however glusterfs throws the same error
17:08 FreezeS Tanner_: progress!! I am using a non standard ssh port which is set up in the /etc/ssh/ssh_config and sshd_config, but it seems it needs to be specified in glusterfs
17:08 FreezeS however, now I have a different error
17:08 FreezeS The hook-script (/var/lib/glusterd/hooks/1/gsync-create/post/S56glusterd-geo-rep-create-post.sh) required for push-pem is not present. Please install the hook-script and retry
17:09 FreezeS could be because I deleted /var/lib/glusterd at a certain point in frustration :)
17:09 FreezeS any idea how to regenerate it ?
17:12 FreezeS copied it from github :)
17:13 farhoriz_ joined #gluster
17:13 pioto joined #gluster
17:14 FreezeS damn, this non-standard port is really messing me up
17:15 FreezeS Popen: ssh> ssh: connect to host selimbar port 22: Connection refused
17:16 FreezeS shouldn't the port specified when creating the geo-replication be kept somewhere? This is from the logs: Monitor: starting gsyncd worker(/bricks/video1/video2). Slave node: ssh://root@selimbar:gluster://localhost:videos_repl
17:18 Tanner__ FreezeS, check /var/lib/glusterd/geo-replication/gsyncd_template.conf
17:18 Tanner__ looks like you can set ssh command
17:18 Tanner__ try adding port to that
17:19 FreezeS I added an additional port to ssh (22) and now I get another nice error
17:20 FreezeS Popen: ssh> bash: /nonexistent/gsyncd: No such file or directory
17:20 baber joined #gluster
17:20 Tanner__ /nonexistent is sometimes set as the shell for users
17:21 Shu6h3ndu joined #gluster
17:22 FreezeS root has /bin/bash in passwd
17:22 Tanner__ oh
17:22 Tanner__ look in that conf file
17:23 Tanner__ line 28
17:24 FreezeS changed it on the master
17:24 FreezeS let's try to restart it
17:24 FreezeS same error
17:26 FreezeS damn, how do I reload the setting?
17:26 Tanner__ https://www.jamescoyle.net/how-to/1037-synchronise-a-glusterfs-volume-to-a-remote-site-using-geo-replication
17:26 glusterbot Title: Synchronise a GlusterFS volume to a remote site using geo replication – JamesCoyle.net (at www.jamescoyle.net)
17:27 Tanner__ ctrl f Sometimes on the remote machine, gsyncd
17:32 pioto joined #gluster
17:32 FreezeS had to specify config remote-gsyncd after the geo-rep was created
17:32 FreezeS but now it's finally working
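
The two config tweaks FreezeS ends up needing look roughly like this; 2222 stands in for the real ssh port, and the gsyncd path is distro-dependent, so /usr/libexec/glusterfs/gsyncd is an assumption (Debian-based systems install it elsewhere):

    # reach the slave on a non-standard ssh port
    gluster volume geo-replication videos selimbar::videos_repl config ssh-command \
        'ssh -p 2222 -oPasswordAuthentication=no -i /var/lib/glusterd/geo-replication/secret.pem'
    # point the session at the slave-side gsyncd instead of /nonexistent/gsyncd
    gluster volume geo-replication videos selimbar::videos_repl config remote-gsyncd /usr/libexec/glusterfs/gsyncd
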
17:34 mallorn I have a 3GB file that I'm trying to heal.  All clients accessing that file have stopped.  The file is on a 5 x (2 + 1) distributed-disperse set, and if I look at the bricks I can see the file growing on one of the storage nodes.  However, once it reaches the end it starts over at about 1.5GB and tries again.
17:35 mallorn I've watched it reset about six times so far.  I tried changing cluster.data-self-heal-algorithm to full to see if that would help, but it didn't.
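
For reference, the knob mallorn is toggling (volume name hypothetical; a reset returns it to the default behaviour):

    gluster volume set vmstore cluster.data-self-heal-algorithm full
    gluster volume reset vmstore cluster.data-self-heal-algorithm
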
17:37 bapxiv I'm trying to mount via glusterfs from a server and gluster is telling me it can't find the volume file.  I have regenerated the volume files.  Running Gluster 3.10
17:39 Tanner__ bapxiv, can you mount the volume on the server it is hosted on?
17:39 bapxiv Tanner__ let me try that real quick
17:40 Tanner__ you can just use localhost:volume
17:41 bapxiv Tanner__ no, same error
17:41 bapxiv mount -t glusterfs localhost:vol-name /mnt/vol-name
17:42 Tanner__ /var/log/glusterfs/mnt-<vol> may say something?
17:42 bapxiv Tanner__ Says it can't find the volume file
17:43 Tanner__ have you tried creating a new volume and mounting it?
17:43 bapxiv No
17:43 bapxiv I really only have this one drive
17:44 susant joined #gluster
17:45 bapxiv Tanner__ Can I make a volume from a folder temporarily?
17:46 major not enough hours in the day to do everything..
17:46 Tanner__ you could make a block device from a loopback device
17:46 Tanner__ and treat it as a drive
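
A minimal sketch of Tanner__'s loopback idea for a throwaway test volume (paths and names hypothetical):

    truncate -s 10G /srv/gluster-test.img
    LOOPDEV=$(losetup -f --show /srv/gluster-test.img)
    mkfs.xfs "$LOOPDEV"
    mkdir -p /bricks/test && mount "$LOOPDEV" /bricks/test
    gluster volume create testvol $(hostname):/bricks/test/brick force
    gluster volume start testvol
    mount -t glusterfs localhost:/testvol /mnt/testvol
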
17:47 bapxiv Wait ... So "gluster volume status" shows "Status of volume: cleo-share" ... but "gluster volume heal cleo-share info" shows: "Volume cleo-share does not exist"
17:47 bapxiv Tanner__ ^^
17:48 Tanner__ I really don't know bapxiv, I'm new to gluster myself
17:48 bapxiv lol
17:48 malevolent joined #gluster
17:48 xavih joined #gluster
17:48 bapxiv Okay, thanks for help Tanner__
17:49 Tanner__ FreezeS, did you have to setup ssh between the master nodes for the root account?
17:49 Tanner__ I have 4 nodes in 1 cluster, then a separate cluster with my slave
17:50 FreezeS Tanner_: there is only one master and one slave. I have a surveillance system and I need to keep a replication off-site
17:50 Tanner__ ah, that's kinda what I thought
17:50 Tanner__ I'm running in to problems because I have 3 peers to my master
17:51 FreezeS the replication is working, but the speed is very bad. This could be because the different ISPs are bad at communicating with each other, or a limitation of ssh
17:52 major JoeJulian, just found out there is a PNW CephFS group
17:53 squizzi_ joined #gluster
17:54 major though .. curiously .. they don't look to be limited to ceph; they have had presentations on pretty much every subject, including gluster
17:55 sona joined #gluster
18:05 kpease joined #gluster
18:06 bapxiv Tanner__ Forgot to stop the glusterfs service before regenerating the volume files
18:07 bapxiv Tanner__ Turns out, following the procedure laid out in the documentation will actually yield the results you want ... who knew? lol
18:12 msvbhat joined #gluster
18:17 jkroon joined #gluster
18:18 Philambdo joined #gluster
18:23 major I really need 10G ethernet for my desktop...
18:23 * major sighs.
18:23 major yes NewEgg .. you can send me more crap..
18:24 MrAbaddon joined #gluster
18:27 nathwill joined #gluster
18:37 bpxiv joined #gluster
18:40 msvbhat joined #gluster
18:48 farhorizon joined #gluster
18:48 Tanner__ is gluster geo-replication possible without using the root account, instead using sudo?
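
It is, via the mountbroker mechanism; a rough sketch of the documented 3.9+ flow, with geoaccount, geogroup, slavehost and the volume names all hypothetical and the helper-script path distro-dependent:

    # on the slave nodes: unprivileged user plus mountbroker setup
    groupadd geogroup && useradd -m -G geogroup geoaccount
    gluster-mountbroker setup /var/mountbroker-root geogroup
    gluster-mountbroker add slavevol geoaccount
    # on a master node: create the session against the unprivileged user
    gluster system:: execute gsec_create
    gluster volume geo-replication mastervol geoaccount@slavehost::slavevol create push-pem
    # on one slave node, as root, propagate the pem keys for that user
    /usr/libexec/glusterfs/set_geo_rep_pem_keys.sh geoaccount mastervol slavevol
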
18:55 sona joined #gluster
18:55 rafi joined #gluster
19:00 susant left #gluster
19:08 melliott joined #gluster
19:11 wushudoin joined #gluster
19:25 farhoriz_ joined #gluster
19:30 toredl joined #gluster
19:47 Philambdo joined #gluster
19:51 arpu joined #gluster
20:13 squizzi_ joined #gluster
20:16 derjohn_mob joined #gluster
20:25 major okay .. I gave up and created a Vagrantfile specifically for testing just the snapshots w/ support for btrfs, lvm-xfs, and zfs
20:25 major now I can just 'vagrant up' and have vagrant run tests specifically tooled for testing snapshots
20:26 major and if it fails then I can go check it out
20:29 mb_ joined #gluster
20:34 mallorn Our volume heals are *really* slow and seem to be capped at 1GigE speeds (despite being on a 10GigE network).  We're running 3.10.0 with 64 self-heal threads.  Any ideas what we could look at?  iperf3 confirms 10GigE traffic works.
20:46 Asako left #gluster
21:07 squizzi_ joined #gluster
21:11 major soo .. who knew that it would be easier to provision the testing environment by simply removing ansible ;)
21:13 Tanner__ joined #gluster
21:34 mallorn Looks like our volume heal stuff *might* be tied to locking.  Setting features.locks-revocation-clear-all and then setting a value for features.locks-revocation-max-blocked speeds some things up.
21:35 mallorn zpool iostat dpool 1 is showing a consistent increase of over 100% in data writes on the healing server after that change.
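
The two options mallorn is adjusting, for reference; the volume name and the max-blocked value are hypothetical, since the log does not say which value was set:

    gluster volume set vmstore features.locks-revocation-clear-all on
    gluster volume set vmstore features.locks-revocation-max-blocked 100   # hypothetical value
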
21:36 major hmm
21:45 mallorn We're getting 2.45Gb/s into the box now instead of 1Gb/s as well.
21:46 major nice
21:48 mallorn Some of these volumes have been healing for almost a month now, so it's nice to see progress finally.  Just have to figure out what's going on with the locking.
22:00 farhorizon joined #gluster
22:03 major When waiting for healing isn't working, there is always whiskey
22:13 arpu joined #gluster
22:29 jdossey joined #gluster
23:16 plarsen joined #gluster
