
IRC log for #gluster, 2017-04-28


All times shown according to UTC.

Time Nick Message
00:16 armyriad joined #gluster
00:44 plarsen joined #gluster
00:52 riyas joined #gluster
01:03 kenansulayman joined #gluster
01:04 saali joined #gluster
01:06 shdeng joined #gluster
01:13 baber joined #gluster
01:47 derjohn_mob joined #gluster
01:49 ilbot3 joined #gluster
01:49 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:16 om2 joined #gluster
02:23 om2 joined #gluster
02:28 kramdoss_ joined #gluster
03:03 skoduri joined #gluster
03:07 moneylotion joined #gluster
03:14 msvbhat joined #gluster
03:26 msvbhat joined #gluster
03:31 magrawal joined #gluster
03:33 Shu6h3ndu joined #gluster
03:37 amarts joined #gluster
03:45 Guest96675 joined #gluster
03:47 itisravi joined #gluster
03:51 riyas joined #gluster
03:58 gyadav__ joined #gluster
04:01 atinm joined #gluster
04:14 prasanth joined #gluster
04:30 buvanesh_kumar joined #gluster
04:31 Guest96675 joined #gluster
04:31 saali joined #gluster
04:32 poornima joined #gluster
05:01 kramdoss_ joined #gluster
05:11 nbalacha joined #gluster
05:12 ndarshan joined #gluster
05:13 skumar joined #gluster
05:17 ppai joined #gluster
05:18 sanoj joined #gluster
05:24 karthik_us joined #gluster
05:28 jiffin joined #gluster
05:57 hgowtham joined #gluster
05:58 ankitr joined #gluster
06:00 tjelinek joined #gluster
06:01 kramdoss_ joined #gluster
06:03 kdhananjay joined #gluster
06:04 amarts joined #gluster
06:05 ankitr joined #gluster
06:06 Karan joined #gluster
06:08 Saravanakmr joined #gluster
06:15 ashiq joined #gluster
06:18 mb_ joined #gluster
06:19 poornima joined #gluster
06:40 purpleidea joined #gluster
06:50 sanoj joined #gluster
06:50 kblin joined #gluster
06:51 rafi joined #gluster
07:14 skoduri joined #gluster
07:18 poornima_ joined #gluster
07:19 sona joined #gluster
07:24 amarts joined #gluster
07:33 fsimonce joined #gluster
07:36 mbukatov joined #gluster
07:36 aravindavk joined #gluster
07:38 jkroon joined #gluster
07:44 msvbhat joined #gluster
08:01 rastar joined #gluster
08:11 samppah joined #gluster
08:15 amarts joined #gluster
08:35 derjohn_mob joined #gluster
08:45 askz JoeJulian: the self-heal has completed but there is still a 3GB difference, and du --apparent shows a tiny difference too, but it's really better than yesterday night
09:05 flying joined #gluster
09:07 flyingX joined #gluster
09:08 Saravanakmr joined #gluster
09:11 gyadav_ joined #gluster
09:13 ankitr joined #gluster
09:22 ahino joined #gluster
09:29 askz I have some difference between two bricks, the size is different, my cluster is two nodes, running debian jessie, and du --apparent is showing different sizes on the two bricks; here's some info http://termbin.com/oqee
09:30 askz the problem is web3 is showing a different size between the mount point and the brick, and is apparently missing some files (the du --apparent totals don't add up). any ideas folks?
09:30 askz ahw sorry for the termbin looking like shit I'm reposting it
09:30 skumar_ joined #gluster
09:32 askz here it is : https://gist.github.com/askz/47908f3a74e014b5502de0b6cbb7144a
09:32 glusterbot Title: gluster on web3 · GitHub (at gist.github.com)
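A quick way to sanity-check a brick-size mismatch like askz's (volume name and brick path below are placeholders, not taken from the gist) is to exclude the .glusterfs metadata tree, which legitimately differs between bricks, and to confirm nothing is still queued for heal:

```shell
# Placeholder volume/brick names -- substitute your own.
# 1. Anything still pending self-heal?
gluster volume heal myvol info

# 2. Apparent size per brick, ignoring gluster's own metadata tree:
du -s --apparent-size --exclude='.glusterfs' /data/brick1

# 3. File counts are often more telling than byte totals:
find /data/brick1 -path '*/.glusterfs' -prune -o -type f -print | wc -l
```

Run the same du/find pair on each node and diff the results; differences inside .glusterfs alone are normal.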
09:33 gyadav__ joined #gluster
09:37 MrAbaddon joined #gluster
09:54 buvanesh_kumar joined #gluster
09:58 [diablo] joined #gluster
10:01 MrAbaddon joined #gluster
10:06 skoduri joined #gluster
10:07 skumar__ joined #gluster
10:28 tallmocha joined #gluster
10:44 amarts joined #gluster
10:54 tallmocha joined #gluster
10:54 skoduri joined #gluster
10:57 msvbhat joined #gluster
11:08 amarts joined #gluster
11:09 msvbhat joined #gluster
11:14 nbalacha joined #gluster
11:22 toredl joined #gluster
11:46 ahino joined #gluster
11:49 BitByteNybble110 joined #gluster
11:54 ankitr joined #gluster
11:54 bartden joined #gluster
12:04 plarsen joined #gluster
12:09 ingard__ i cant seem to find the documentation for the performance translators on readthedocs
12:10 ingard__ anyone know where they are?
12:12 ppai joined #gluster
12:24 bartden Hi, I can create files with the same name in the same folder on a gluster share. The setup is distributed (4 bricks). What's wrong?
12:28 Saravanakmr joined #gluster
12:32 kpease joined #gluster
12:43 baber joined #gluster
12:45 plarsen joined #gluster
12:45 rafi joined #gluster
12:48 bartden One of the files contains the T permission at the end
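That trailing T mode is the signature of a DHT "linkto" file: a zero-byte marker that the distribute translator leaves on the brick a filename hashes to, pointing at the brick that actually holds the data (common after a rename or rebalance). Clients resolve it transparently, so it is not a true duplicate. A hedged local sketch of how to spot such entries on a brick (simulated here with a made-up filename, since the on-brick signature is just sticky-bit plus empty file):

```shell
# Simulate a DHT linkfile locally: zero bytes, mode ---------T.
tmp=$(mktemp -d)
touch "$tmp/report.txt"
chmod 1000 "$tmp/report.txt"        # sticky bit only, like a linkfile
# Find non-directory, empty, sticky-bit entries -- linkfile candidates:
find "$tmp" -perm -1000 ! -type d -empty -print
rm -r "$tmp"
```

On a real brick such a file also carries the trusted.glusterfs.dht.linkto extended attribute (visible via `getfattr -d -m . -e hex <file>`); these entries should not be deleted by hand.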
12:48 Wizek_ joined #gluster
12:53 plarsen joined #gluster
12:54 tallmocha joined #gluster
12:55 amarts joined #gluster
12:55 jkroon_ joined #gluster
13:00 Saravanakmr joined #gluster
13:11 msvbhat joined #gluster
13:14 amarts joined #gluster
13:21 skylar joined #gluster
13:27 MrAbaddon joined #gluster
13:28 shyam joined #gluster
13:33 riyas joined #gluster
13:33 squizzi joined #gluster
13:42 ccha joined #gluster
13:49 ccha hello, I have a problem setting ganesha-ha.conf. hostnames are test-t1, test-t2 in the ganesha-ha.conf I try to set VIP_test-t1=10.2.1.1
13:49 ccha pcs status displays exitreason='IP address (the ip parameter) is mandatory'
13:51 ayaz joined #gluster
13:53 farhorizon joined #gluster
13:58 msvbhat joined #gluster
14:07 baber joined #gluster
14:08 msvbhat joined #gluster
14:12 flying joined #gluster
14:19 nbalacha joined #gluster
14:20 flying joined #gluster
14:21 kramdoss_ joined #gluster
14:22 kkeithley ccha: use VIP_TEST-t1="10.2.1.1"   (quotes are required)
14:25 kkeithley VIP_test-t1="10.2.1.1"
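For context, a minimal ganesha-ha.conf along the lines kkeithley describes might look like this (node names are from ccha's setup; the cluster name and second VIP are assumptions):

```
HA_NAME="ganesha-ha-cluster"
HA_CLUSTER_NODES="test-t1,test-t2"
VIP_test-t1="10.2.1.1"
VIP_test-t2="10.2.1.2"
```

Note the quotes around the addresses, as kkeithley points out, and that the node name after VIP_ must match the hostname exactly, including case.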
14:25 farhorizon joined #gluster
14:36 riyas joined #gluster
14:49 Humble joined #gluster
14:59 Humble joined #gluster
15:02 aravindavk joined #gluster
15:06 sanoj joined #gluster
15:07 wushudoin joined #gluster
15:07 wushudoin joined #gluster
15:12 vbellur joined #gluster
15:12 vbellur joined #gluster
15:13 vbellur joined #gluster
15:13 vbellur joined #gluster
15:14 vbellur joined #gluster
15:14 vbellur joined #gluster
15:16 flyingX joined #gluster
15:36 ccha kkeithley: no error
15:36 ccha but is it normal that there is only a symlink for ganesha.conf and not for ganesha-ha.conf ?
15:36 ankitr joined #gluster
15:39 riyas joined #gluster
15:40 skoduri joined #gluster
15:57 bmurt joined #gluster
15:57 MrAbaddon joined #gluster
15:57 bmurt hey ya'll, i'm trying to do research on gluster's encryption in transit and at rest
15:58 bmurt I've come across https://gluster.readthedocs.io/en/latest/Administrator%20Guide/SSL/ - are there any other options?
15:58 glusterbot Title: SSL - Gluster Docs (at gluster.readthedocs.io)
16:10 mallorn I'm using GlusterFS as part of an OpenStack cluster (Kilo) that we're slowly upgrading.  I just read that Newton deprecates the gluster driver, and Ocata removes it entirely.  Does anyone here know if it's now a third-party plugin, or is just not an option any more?
16:13 JoeJulian bmurt: nope, only ssl.
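Per that SSL page, in-transit encryption is enabled per volume with TLS options (at-rest encryption is not covered there). A hedged sketch of the in-transit setup, where the volume name is a placeholder and the certificate paths are the defaults the guide describes:

```shell
# TLS material expected on every node (default paths from the guide):
#   /etc/ssl/glusterfs.pem  /etc/ssl/glusterfs.key  /etc/ssl/glusterfs.ca

# Encrypt the glusterd management path:
touch /var/lib/glusterd/secure-access

# Encrypt the I/O path for one volume ("myvol" is a placeholder):
gluster volume set myvol client.ssl on
gluster volume set myvol server.ssl on

# Optionally restrict mounting to specific certificate common names:
gluster volume set myvol auth.ssl-allow 'client1,client2'
```

Both SSL options must be enabled while the volume's clients are able to reconnect with valid certificates, or mounts will start failing.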
16:15 JoeJulian mallorn: Not sure where that came from. https://docs.openstack.org/ocata/config-reference/tables/conf-changes/cinder.html shows no such deprecation and https://docs.openstack.org/ocata/config-reference/block-storage/drivers/glusterfs-driver.html shows current usage.
16:15 glusterbot Title: OpenStack Docs: New, updated, and deprecated options in Ocata for Block Storage (at docs.openstack.org)
16:17 bmurt ok @JoeJulian ty
16:18 mallorn I was looking at https://docs.openstack.org/admin-guide/blockstorage-glusterfs-backend.html : "The GlusterFS volume driver, which was deprecated in the Newton release, has been removed in the Ocata release."
16:18 glusterbot Title: OpenStack Docs: Configure a GlusterFS back end (at docs.openstack.org)
16:18 JoeJulian Oh, the blockdevice translator
16:18 mallorn Yes, sorry.  I wasn't specific enough.
16:19 mallorn I'm still in freakout mode after just reading that.  Brain needs to slow down.  :D
16:19 JoeJulian I hate openstack documentation sometimes.
16:20 JoeJulian It's the same as that second link I showed. They just use weird terminology on the link you provided.
16:23 mallorn OK.  Thanks!  :D
16:23 JoeJulian Don't get too excited
16:23 JoeJulian I'm perusing the source
16:26 JoeJulian wtf? seriously? Eric Harney of Red Hat did that?
16:28 misc did what ?
16:29 JoeJulian removed glusterfs support from openstack cinder
16:31 misc mhh, I think it was unmaintained
16:31 misc and it didn't get much traction from customers
16:31 JoeJulian right... which customers? I know a bunch of people that were using it, including myself.
16:32 misc RH ones
16:32 misc but that's unfortunate if you indeed were using it :/
16:32 misc (but maybe I am wrong too)
16:32 JoeJulian So fucking Red Hat has no business removing it from openstack just because there were no Red Hat customers using it.
16:32 JoeJulian Maybe ask the fucking gluster mailing list.
16:32 misc I think the main reason was more "not maintained"
16:33 misc but maybe there was something like putting it out of the core, or anything
16:33 misc I can ask to people
16:34 JoeJulian @kick joejulian language
16:34 glusterbot JoeJulian: Error: You don't have the #gluster,op capability. If you think that you should have this capability, be sure that you are identified before trying again. The 'whoami' command can tell you if you're identified.
16:34 JoeJulian @kick joejulian language
16:34 JoeJulian was kicked by glusterbot: language
16:34 JoeJulian joined #gluster
16:35 JoeJulian I'm very angry and shall be afk for a bit doing $dayjob.
16:35 mallorn I'm sorry.
16:38 gyadav__ joined #gluster
16:39 mallorn Looking forward to the future iSCSI connectivity to gluster.
16:40 misc seems the solution is using nfs ganesha: https://ask.openstack.org/en/question/103495/ocata-upgrade-centos7-rdo-glusterfs38-no-module-named-glusterfs-solved/
16:40 glusterbot Title: ocata upgrade - CentOS7 / RDO / GlusterFS3.8 No module named glusterfs (solved) [closed] - Ask OpenStack: Q&A Site for OpenStack Users and Developers (at ask.openstack.org)
16:41 mallorn Unfortunately, the NFS driver for cinder doesn't support volume snapshotting.
16:42 mallorn That may have changed in ocata.  Checking.
16:43 * misc is still searching for why it was removed
16:53 misc so the support matrix say "no"
16:53 misc but there is a spec for mitaka: https://specs.openstack.org/openstack/cinder-specs/specs/mitaka/nfs-snapshots.html
16:53 glusterbot Title: NFS Snapshots — cinder-specs 0.0.1.dev373 documentation (at specs.openstack.org)
16:54 misc and it was merged
16:54 misc https://review.openstack.org/#/c/133074/
16:54 glusterbot Title: Gerrit Code Review (at review.openstack.org)
17:01 mallorn I haven't played with NFS HA, so my only other concerns would be about traffic and mountpoints.  We like the distributed network traffic and failover that the FUSE clients give us, but if we can accomplish that with NFS then there's no reason not to switch.
17:03 rastar joined #gluster
17:07 mallorn We have 25 storage nodes with a 5 x (4 + 1) distributed-disperse set with about 1.4 petabytes available, backended into a ZFS raidz2 with about a 1.85x compression ratio.  We don't want to route all of that through a single mountpoint.  If we can do pNFS that would be awesome.
17:10 misc yeah, i got no answer
17:10 misc have you tried asking on #openstack-cinder ? (I was also unable to find much rationale for the removal)
17:11 oajs joined #gluster
17:12 mallorn This was the first place that I asked, but I'll poke around some more.  I was adding some new storage nodes today and noticed it in the OpenStack docs about an hour ago.  I appreciate you looking into it!
17:15 kkeithley ccha: the symlink for ganesha-ha.conf will be created later
17:16 jkroon joined #gluster
17:24 rastar joined #gluster
17:25 kkeithley ccha: gluster will create it.
17:42 oajs joined #gluster
17:50 shyam joined #gluster
17:50 bmurt joined #gluster
17:51 MrAbaddon joined #gluster
17:56 Vapez_ joined #gluster
17:58 Asako I'm getting an error on a server I just upgraded to gluster 3.10
17:59 Asako [2017-04-28 17:57:27.241694] E [rpc-transport.c:283:rpc_transport_load] 0-rpc-transport: /usr/lib64/glusterfs/3.10.1/rpc-transport/rdma.so: cannot open shared object file: No such file or directory
17:59 Asako glusterd won't start
18:00 Asako volume 'rdma.management': transport-type 'rdma' is not valid or not found on this machine
18:00 Asako all my other nodes are fine though
18:02 Asako I've also noticed that yum updates appear to be renaming my vol files
18:02 Asako there's a bunch of .rpmsave files
18:10 Asako [xlator.c:503:xlator_init] 0-management: Initialization of volume 'management' failed, review your volfile again
18:10 Asako also this
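The missing rdma.so after an upgrade usually means the RDMA transport subpackage was not pulled in with the new glusterfs version. A hedged sketch of the two usual ways out (package name is the CentOS/RHEL-style one; adjust for your distro):

```shell
# Option 1: install the transport the volfile still references
# (package name assumed; it is glusterfs-rdma on CentOS/RHEL):
yum install glusterfs-rdma

# Option 2: if RDMA was never actually in use, stop glusterd from
# loading it -- in /etc/glusterfs/glusterd.vol change the transport
# line to:
#     option transport-type socket
# then restart the daemon:
systemctl restart glusterd
```

The .rpmsave files Asako mentions are the package manager preserving locally modified vol files; comparing them against the freshly installed ones shows what the upgrade changed.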
18:20 baber joined #gluster
18:20 shyam joined #gluster
18:38 Asako ok, I got gluster to start.  Now geo-replication is broken.
18:38 Asako [monitor(monitor):357:monitor] Monitor: worker(/var/mnt/gluster/brick2) died in startup phase
18:49 Asako I see geo-replication goes active for a second and then it's immediately faulty
18:49 farhorizon joined #gluster
18:55 baber joined #gluster
19:00 derjohn_mob joined #gluster
19:13 MrAbaddon joined #gluster
19:22 rastar joined #gluster
19:37 dataio joined #gluster
19:42 PatNarciso joined #gluster
19:49 kraynor5b_ joined #gluster
19:57 level7 joined #gluster
19:59 shyam joined #gluster
20:25 baber joined #gluster
20:26 Asako is there a way to force geo-replication to resync?
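On forcing geo-replication to resync: one commonly used sequence (master volume and slave names below are placeholders) is to delete the session with reset-sync-time, which clears the stored sync timestamp so the recreated session crawls the whole volume again:

```shell
# Placeholder names throughout -- substitute your own session.
gluster volume geo-replication mastervol slavehost::slavevol stop
gluster volume geo-replication mastervol slavehost::slavevol delete reset-sync-time
gluster volume geo-replication mastervol slavehost::slavevol create push-pem force
gluster volume geo-replication mastervol slavehost::slavevol start
```

This re-walks the entire master volume, so on large volumes expect the initial hybrid crawl to take a long time; it does not delete data already present on the slave.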
20:30 om2 joined #gluster
21:02 farhorizon joined #gluster
21:26 squizzi joined #gluster
21:58 plarsen joined #gluster
22:00 Klas joined #gluster
22:04 gyadav__ joined #gluster
22:16 vbellur joined #gluster
22:19 vbellur joined #gluster
22:19 vbellur joined #gluster
22:20 vbellur joined #gluster
22:21 vbellur1 joined #gluster
22:22 vbellur joined #gluster
22:31 level7_ joined #gluster
22:36 nirokato joined #gluster
22:55 Gambit15 joined #gluster
