
IRC log for #gluster, 2017-11-15


All times shown according to UTC.

Time Nick Message
00:00 map1541 joined #gluster
00:12 lkoranda joined #gluster
00:26 MrAbaddon joined #gluster
01:04 gbox protoporpoise: nice job!  What do you mean by "tiered caching"?
01:08 protoporpoise just popping out to lunch gbox - will respond in 45 or so!
01:15 wushudoin joined #gluster
01:32 baber joined #gluster
01:32 map1541 joined #gluster
01:45 protoporpoise gbox: most of the slide context was spoken tbh, I don't know if there's a recording available yet, anyway - tiered caching - what we're doing is providing more cache to, say, production volumes than we do to staging / non-prod or less important volumes
02:02 msvbhat joined #gluster
02:18 gbox protoporpoise: thanks, just wondering if you used something like memcached or actually used gluster tiering.  Nobody seems to use gluster tiering!
02:33 skoduri joined #gluster
02:56 ilbot3 joined #gluster
02:56 Topic for #gluster is now Gluster Community - https://www.gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - https://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
03:02 gyadav joined #gluster
03:03 protoporpoise We used to use memcached
03:03 protoporpoise it was ok for the old way of hosting etc...
03:03 protoporpoise but it doesn't cluster
03:03 protoporpoise and the projects that crowbar it into a clustered world aren't very good tbh
03:03 protoporpoise we use Nginx as our front end load balancers and for caching, and Cloudflare as a CDN where possible
03:19 nbalacha joined #gluster
03:30 nishanth joined #gluster
03:31 MrAbaddon joined #gluster
03:37 gbox protoporpoise: yeah all good.  I'm going to try a replica 3 SSD tier and see how it goes.  The files going in/out are big (MB) so it could work well.  Still seems a bit experimental.  I'd like to see more options than sqlite3 and SELinux support, which will probably happen sometime.
03:46 sanoj joined #gluster
03:51 susant joined #gluster
03:51 DV joined #gluster
03:52 protoporpoise Ah yes, we run all our gluster nodes with SELinux enforcing
03:52 protoporpoise haven't found any problems TBH
03:52 protoporpoise our SANs are 100% SSD however our gluster nodes are VMs running off those SANs
03:54 protoporpoise out of date / old but fyi - https://smcleod.net/tech/ssd-storage-cluster-diagram/
03:54 glusterbot Title: smcleod.net | SSD Storage Cluster - Update and Diagram (at smcleod.net)
03:54 protoporpoise anyhow, I have to run, have a great week all :)
03:56 msvbhat joined #gluster
03:57 vbellur joined #gluster
03:57 rwheeler joined #gluster
03:59 itisravi joined #gluster
04:04 psony joined #gluster
04:06 kramdoss_ joined #gluster
04:08 gbox hey thanks, that's very useful info.  The documentation is really confusing about SELinux support.  I'd been disabling it.  Now I'll enforce!
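
For anyone following along: attaching (and later detaching) a hot tier like the replica 3 SSD tier gbox mentions was done from the gluster CLI in the 3.7-3.12 series. A minimal sketch with hypothetical volume and brick names; check the exact syntax against the docs for your release:

    # attach a replica-3 SSD hot tier to an existing volume (names are examples)
    gluster volume tier myvol attach replica 3 \
        node1:/bricks/ssd1/hot node2:/bricks/ssd1/hot node3:/bricks/ssd1/hot
    # watch promotion/demotion activity
    gluster volume tier myvol status
    # remove the tier again: migrate data off, then commit
    gluster volume tier myvol detach start
    gluster volume tier myvol detach commit
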
04:14 skumar joined #gluster
04:31 jiffin joined #gluster
04:34 jiffin joined #gluster
04:39 squeakyneb joined #gluster
04:44 hgowtham joined #gluster
05:00 Prasad joined #gluster
05:14 ppai joined #gluster
05:18 atinm joined #gluster
05:23 ronrib joined #gluster
05:27 int-0x21 Hmm I tried replacing some bricks (moving from ZFS to a single disk for testing on the arbiter) but the new bricks aren't starting
05:27 int-0x21 Brick ghost3:/bricks/hahs/brick1            N/A       N/A        N       N/A
05:32 prasanth joined #gluster
05:33 apandey joined #gluster
05:35 nishanth joined #gluster
05:44 prasanth joined #gluster
05:44 karthik_us joined #gluster
05:45 kdhananjay joined #gluster
05:48 atinm joined #gluster
06:09 int-0x21 Hmm seems gluster started before rdma
06:14 int-0x21 Now all bricks are online in "gluster volume status" but "gluster volume heal info" still says Status: Transport endpoint is not connected on the replaced bricks
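
For reference, the usual brick-replacement flow being described above looks roughly like this; the old brick path here is a placeholder, not int-0x21's actual layout:

    # replace the old brick with the new one in a single step
    gluster volume replace-brick myvol \
        ghost3:/old/zfs/brick1 ghost3:/bricks/hahs/brick1 commit force
    # confirm the new brick process came up and has a port assigned
    gluster volume status myvol
    # self-heal should now start copying data onto the new brick
    gluster volume heal myvol info
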
06:15 poornima joined #gluster
06:19 kramdoss_ joined #gluster
06:23 xavih joined #gluster
06:30 jtux joined #gluster
06:47 ilbot3 joined #gluster
06:47 Topic for #gluster is now Gluster Community - https://www.gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - https://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
06:53 Saravanakmr joined #gluster
06:59 Humble joined #gluster
07:04 apandey joined #gluster
07:09 ronrib joined #gluster
07:24 karthik_us joined #gluster
07:46 ivan_rossi left #gluster
07:48 kramdoss_ joined #gluster
07:48 poornima joined #gluster
07:49 jkroon_ joined #gluster
07:50 int_0x21 Little question: does gluster work on FreeBSD?
07:52 int-0x21 joined #gluster
07:55 karthik_us joined #gluster
07:56 BlackoutWNCT joined #gluster
07:58 ThHirsch joined #gluster
08:03 BitByteNybble110 joined #gluster
08:06 skoduri joined #gluster
08:07 shortdudey123 joined #gluster
08:09 eryc joined #gluster
08:09 eryc joined #gluster
08:12 itisravi joined #gluster
08:33 Prasad_ joined #gluster
08:34 atinm joined #gluster
08:34 tru_tru joined #gluster
08:37 aravindavk joined #gluster
08:39 Prasad__ joined #gluster
08:39 tru_tru joined #gluster
08:40 psony joined #gluster
08:40 Klas https://wiki.freebsd.org/GlusterFS
08:40 glusterbot Title: GlusterFS - FreeBSD Wiki (at wiki.freebsd.org)
08:41 Klas first hit on google for glusterfs freebsd =P
08:54 nobody482 joined #gluster
08:56 ahino joined #gluster
08:57 Klas int-0x21: ^
09:00 kramdoss_ joined #gluster
09:01 gospod2 joined #gluster
09:03 [diablo] joined #gluster
09:17 sanoj joined #gluster
09:19 atinm joined #gluster
09:21 _KaszpiR_ joined #gluster
09:39 ws2k3 joined #gluster
09:48 kramdoss_ joined #gluster
09:54 jarbod_ joined #gluster
10:01 misc joined #gluster
10:07 misc joined #gluster
10:11 sanoj joined #gluster
10:23 MrAbaddon joined #gluster
10:28 sanoj joined #gluster
10:35 Humble joined #gluster
11:24 atinm joined #gluster
11:31 gyadav joined #gluster
11:59 msvbhat joined #gluster
11:59 int_0x21 Thank you :) yes I saw some information but not a lot, so I didn't know if it was relevant :)
12:10 susant joined #gluster
12:18 nbalacha joined #gluster
12:27 DV joined #gluster
12:36 boutcheee520 joined #gluster
12:55 atinm joined #gluster
12:59 skumar joined #gluster
13:04 phlogistonjohn joined #gluster
13:05 DV joined #gluster
13:13 vishnuk joined #gluster
13:25 msvbhat joined #gluster
13:30 gyadav joined #gluster
13:43 nobody482 joined #gluster
13:49 jkroon__ joined #gluster
14:00 dominicpg joined #gluster
14:01 shyam joined #gluster
14:14 boutcheee520 joined #gluster
14:17 plarsen joined #gluster
14:30 plarsen joined #gluster
14:37 bowhunter joined #gluster
14:39 aravindavk joined #gluster
14:41 pladd joined #gluster
14:41 psony joined #gluster
14:42 phlogistonjohn joined #gluster
14:42 Saravanakmr joined #gluster
14:43 hmamtora joined #gluster
14:43 hmamtora_ joined #gluster
14:49 farhorizon joined #gluster
14:52 _dist joined #gluster
14:52 _dist joined #gluster
14:58 gyadav joined #gluster
15:01 skylar1 joined #gluster
15:04 rwheeler joined #gluster
15:05 _KaszpiR_ joined #gluster
15:21 tacoboy joined #gluster
15:25 _KaszpiR_ joined #gluster
15:28 javi404 joined #gluster
15:40 pmden joined #gluster
15:40 skylar1 joined #gluster
15:48 timotheus1_ joined #gluster
16:03 me joined #gluster
16:03 wushudoin joined #gluster
16:05 DV joined #gluster
16:13 Guest25783 I'm looking to set up a gluster cluster of 3 nodes, 2 in the same location and the 3rd at a remote site
16:14 Guest25783 replication between all 3 nodes, but the 3rd node replicates file1 to node 2 and then file2 to node 1
16:15 Guest25783 node1 replicates fileA to node 2
16:16 Guest25783 is that possible?
16:16 skumar joined #gluster
16:18 skylar1 joined #gluster
16:20 om2 joined #gluster
16:26 kpease joined #gluster
16:27 kpease_ joined #gluster
16:34 vbellur joined #gluster
16:37 int-0x21 joined #gluster
16:40 int-0x21 joined #gluster
16:42 buvanesh_kumar joined #gluster
16:43 int-0x21 joined #gluster
16:57 gyadav joined #gluster
17:07 Humble joined #gluster
17:07 int-0x21 joined #gluster
17:10 int-0x21 joined #gluster
17:13 JoeJulian Guest25783: So glusterfs is a clustered filesystem. A replicated volume is one in which the client writes to all the replicas synchronously. There is an async unidirectional process to sync data to a remote site (georeplication) but that doesn't sound like what you're looking for.
17:16 Guest25783 @JoeJulian no it's not. I would want A to replicate to B, but C to be able to replicate to A or B
17:18 JoeJulian Sounds like a job for rsync.
17:19 int-0x21 joined #gluster
17:22 int-0x21 joined #gluster
17:24 Guest25783 so if A replicates to B and the cluster size is 300TB, and then I later add node C which is 300TB, I want the cluster to be A+B (300TB) + C (300TB), a total of 600TB. So hardware close to C writes to C, but that data can be replicated (sync'd) to A or B, so only 2 copies exist instead of 3 copies.
17:25 JoeJulian That sounds more like swift.
17:25 int-0x21 joined #gluster
17:25 JoeJulian posix filesystems are a lot more complicated
17:25 Guest25783 @JoeJulian swift with Gluster
17:25 JoeJulian No, just plain swift.
17:26 JoeJulian Then you can set all kinds of rules for per-object replication.
17:26 JoeJulian and it's async.
17:27 Guest25783 openstack
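
As an aside on the geo-replication JoeJulian mentioned above: the async, unidirectional sync to a remote site is driven from the gluster CLI. A rough sketch with made-up volume and host names; it assumes the slave volume already exists and passwordless ssh has been prepared:

    # create and start a geo-replication session from the local volume
    # to a volume on the remote site (names are examples only)
    gluster volume geo-replication mastervol remotehost::slavevol create push-pem
    gluster volume geo-replication mastervol remotehost::slavevol start
    gluster volume geo-replication mastervol remotehost::slavevol status
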
17:40 int-0x21 joined #gluster
17:41 cholcombe what is the unlink folder in .glusterfs?
17:42 cholcombe i assume stuff is going in there because i had a problem where the bricks don't agree on unlink?
17:44 JoeJulian I believe that's used if an unlink happens while a brick is down so it can play it back correctly.
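
If you want to look at it yourself, that directory sits under the brick root; a quick check on a hypothetical brick path:

    # entries here are typically gfid-named and disappear once the unlink is fully processed
    ls -l /bricks/brick1/.glusterfs/unlink
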
17:52 farhorizon joined #gluster
18:00 skylar1 joined #gluster
18:06 DV joined #gluster
18:07 int-0x21 joined #gluster
18:10 int-0x21 joined #gluster
18:10 rastar joined #gluster
18:13 int-0x21 joined #gluster
18:16 int-0x21 joined #gluster
18:23 cholcombe JoeJulian: cool ok
18:24 cholcombe JoeJulian: have you seen on recent releases of gluster that the json perf dump seems to be appending now instead of overwriting the files?  It blew up my disk yesterday and I had to turn it off
18:24 cholcombe i'm probably going to try and hunt down which release broke it but i don't have time right now
18:26 JoeJulian Ugh, no. I haven't been using that so I didn't know (and nobody else has complained yet).
18:37 cholcombe that's surprising.  I downloaded 3.12, turned it on and it filled up my disk in minutes
18:47 cholcombe JoeJulian: i found it.  someone changed the stats file fopen from w -> w+
18:49 JoeJulian It kind-of makes sense to do it that way, I guess. Better yet would be some circular queue.
18:49 JoeJulian I assume you're going to file a bug?
18:49 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
18:53 phlogistonjohn joined #gluster
18:54 int-0x21 Does anyone use rdma in their setup ?
18:55 int-0x21 Also sharding, shouldn't that increase performance a bit?
18:56 cholcombe JoeJulian: this broke it: https://github.com/gluster/glusterfs/commit/fc73ae5f81ef5926e3dc2311db116250d0f2a321
18:56 glusterbot Title: debug/io-stats: Append stats for each interval in the same file · gluster/glusterfs@fc73ae5 · GitHub (at github.com)
18:56 cholcombe yeah i need to file a bug
18:56 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
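
Until a fix lands, the periodic JSON dump cholcombe describes can be switched off from the CLI. A sketch only; the option names below come from the io-stats translator and should be double-checked with `gluster volume get <vol> all` on your version:

    # stop the periodic io-stats dumps that were filling the disk
    gluster volume set myvol diagnostics.stats-dump-interval 0
    # optionally turn off the extra counters as well
    gluster volume set myvol diagnostics.count-fop-hits off
    gluster volume set myvol diagnostics.latency-measurement off
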
18:57 JoeJulian int-0x21: There are people that use rdma, yes, I'm unfortunately not one of them - but I've seen several success stories over the years I've hung out in here.
18:58 int-0x21 getting a lot of these: cma event RDMA_CM_EVENT_ADDR_ERROR, error -2
18:58 int-0x21 In glustershd.log
18:59 int-0x21 I have functionality but I can't really find what that means
18:59 JoeJulian Sharding could increase performance if your application can take advantage of that. It would need to predictively request the data from the next shard, so not really sure how that's any more performant.
18:59 JoeJulian It could help spread the load more evenly.
19:00 JoeJulian It's way more efficient for large files with erasure-coding.
19:00 int-0x21 can you do erasure coding and replication?
19:01 int-0x21 I thought erasure coding required quite a lot of hosts
19:01 int-0x21 I might have misunderstood something, a lot of information and not as much time, so I might have scrolled past that chapter too quickly :)
19:03 MrAbaddon joined #gluster
19:04 JoeJulian erasure coding can be enabled to monitor for bit-rot.
19:05 int-0x21 I'll read up on it; the way it's set up now is 4 bricks on each host, 3 hosts where one of them is only an arbiter
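
For context on the "quite a lot of hosts" point: erasure coding in gluster is a dispersed volume, created with disperse/redundancy counts. A sketch with hypothetical hosts and brick paths (a 4+2 layout needs six bricks, ideally on six hosts):

    # 4+2 dispersed volume: any 2 bricks can be lost without losing data
    gluster volume create ecvol disperse 6 redundancy 2 \
        host{1..6}:/bricks/b1
    gluster volume start ecvol
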
19:09 _KaszpiR_ joined #gluster
19:12 cholcombe JoeJulian: https://bugzilla.redhat.com/show_bug.cgi?id=1513692 :)
19:12 glusterbot Bug 1513692: urgent, unspecified, ---, bugs, NEW , io-stats appends now instead of overwriting which floods filesystem with logs
19:17 ahino joined #gluster
19:18 baber joined #gluster
19:21 gyadav joined #gluster
19:26 skylar1 joined #gluster
19:45 int-0x21 With erasure coding I need more than two hosts to avoid split brain, or am I misunderstanding?
19:49 int-0x21 Well ofc I need that anyway ;) but there isn't an arbiter "trick" to be used with it
19:58 JoeJulian No trick. If the checksum fails, it's a damaged file.
20:14 int-0x21 yeah I meant as a way of avoiding split brain without an extra set of disks
20:16 int-0x21 If I do EC with 2 hosts: the first host goes down, the second still operates, then the second goes down and a little later the first comes up, I'll be in a world of pain
20:16 int-0x21 or can I do EC replica 3 arbiter 1?
20:16 int-0x21 Sorry my brain is tired tonight so I might be stupid here
20:20 gospod2 joined #gluster
20:24 JoeJulian int-0x21: yes, you can.
20:25 int-0x21 ohh so in effect I can set up a replicated RAID 5 + arbiter
20:25 int-0x21 Cool, I did not know that
20:25 int-0x21 That sounds a lot better than my sharding
20:26 JoeJulian well, no
20:26 JoeJulian EC != RAID.
20:26 JoeJulian EC calculations are _used_ by raid
20:26 int-0x21 I lacked a better way to convey the thought
20:27 int-0x21 host1 : 4+1 host2: 4+1 host3: single disk for arbiter
20:28 JoeJulian Sorry, that's got me even more confused.
20:28 int-0x21 Hehe :)
20:29 JoeJulian The EC checksum is stored in the extended attributes. It's only used for determining if bitrot has occurred (afaik).
20:29 JoeJulian You _can_ have a volume that's raid-like but no arbiter.
20:30 int-0x21 Ahh i have gotten confused from some text
20:30 JoeJulian Unless I'm wrong. It happened once before. It's rare though. I don't usually make mistrakes.
20:31 int-0x21 I don't see how I can trade away the arbiter without putting myself at serious risk of split brain
20:31 JoeJulian Only by splitting your storage among all three machines - I suppose.
20:35 int-0x21 Yeah, no NVMe slots in the third host :)
20:35 int-0x21 I'll probably expand the storage later on but not atm
20:35 int-0x21 Want to keep the costs down a bit until it's proven in production
20:37 JoeJulian oh, doesn't nvme do parity internally? Shouldn't even need EC. And so yeah, a replica 3 arbiter 1 volume is what you'll do.
20:38 int-0x21 yeah I thought I could use EC to get rid of the sharding
20:38 int-0x21 sharding for some reason takes down performance by 10% or so for me, but without it big VM files will be an issue
20:39 JoeJulian No, sharding's something you would even more want if you're using EC. The checksum calculations on large files is time-consuming. The smaller shards get changed less frequently and require less recalcs.
20:39 int-0x21 ahh ok :)
20:40 JoeJulian Would be interesting to know why you're getting reduced performance. You shouldn't.
20:40 JoeJulian Sharded vm images will generally only have a very few shards that are actively changed.
20:41 int-0x21 Yeah, might be something with the fio workload that does it, I dunno, I will test more with a proper workload
20:42 int-0x21 but so far XFS bricks are performing a lot better than ZFS
20:42 int-0x21 RDMA increased the performance quite a bit too
20:42 int-0x21 so now I need to get NFS up again (before, when I tested NFS-Ganesha, I did it on Fedora which had nice scripts for it; CentOS doesn't)
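
Pulling the thread above together, a replica 3 arbiter 1 volume with sharding enabled for VM images would be created roughly like this; hostnames, brick paths and the shard size are examples, not a recommendation:

    # two data bricks plus one arbiter brick per replica set (the third brick is the arbiter)
    gluster volume create vmvol replica 3 arbiter 1 \
        host1:/bricks/nvme1/brick host2:/bricks/nvme1/brick host3:/bricks/arb1/brick
    # shard large VM images so heals and checksums operate on small pieces
    gluster volume set vmvol features.shard on
    gluster volume set vmvol features.shard-block-size 64MB
    gluster volume start vmvol
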
20:47 wushudoin joined #gluster
20:57 xavih joined #gluster
20:58 int-0x21 I just love errors like "1 validation error in block" ..... great
21:00 tacoboy joined #gluster
21:11 int-0x21 Yeah, I think that's it for tonight, too many errors that I don't understand
21:11 int-0x21 Status: Transport endpoint is not connected, when the brick is up and running
21:11 int-0x21 on heal info
21:12 int-0x21 NFS not starting, I'm probably just doing something stupid atm
21:12 int-0x21 Nighty nighty
21:16 PatNarciso_ joined #gluster
21:16 * PatNarciso_ does not like when 'ps' returns io error.
21:26 marc_888 joined #gluster
21:29 marc_888 Hi, is there anyone here who can help me? I am running a replicated GlusterFS cluster on Ubuntu 14 (OpenStack). I had 3.7.20 until now and I thought that maybe it was a bug, but today I updated to 3.10.7 and it is the same issue :(
21:30 marc_888 My log is full with ```[2017-11-15 21:10:30.166724] I [rpc-clnt.c:2000:rpc_clnt_reconfig] 0-gluster_volume-client-4: changing port to 49153 (from 0)
21:30 marc_888 [2017-11-15 21:10:30.166935] E [socket.c:3230:socket_connect] 0-gluster_volume-client-4: connection attempt on 127.0.1.1:24007 failed, (Invalid argument)```
21:31 marc_888 The cluster worked/works fine for over a year but (I assume) the healing is not working...
21:33 gospod2 joined #gluster
21:34 marc_888 I am trying to migrate half the cluster to another datacenter, and after I added a new brick, the brick is not syncing and the heal command is not working...
21:34 marc_888 # gluster volume heal gluster_volume full
21:34 marc_888 Launching heal operation to perform full self heal on volume gluster_volume has been unsuccessful on bricks that are down. Please check if all brick processes are running.
21:45 marc_888_mobile joined #gluster
21:50 marc_888 joined #gluster
21:56 marc_888 joined #gluster
22:00 side_control joined #gluster
22:07 JoeJulian marc_888: Are you sure glusterd is running on all the servers?
22:07 marc_888 For over a year
22:08 marc_888 I knew about the log errors ... but now when I wanted to do the migration I realised that the healing is not working
22:08 JoeJulian netstat -tlnp | grep 24007 (on brick 5)
22:09 JoeJulian hmm
22:09 marc_888 tcp        0      0 0.0.0.0:24007           0.0.0.0:*               LISTEN      939/glusterd
22:10 marc_888 I did the deployment with the Ansible module... when I created the replicated cluster it worked fine, replication works. Now if I want to add a new brick/server and then remove the old one, the new brick does not get replicated.
22:11 JoeJulian Sure
22:11 marc_888 Adding and removing manually not with Ansible.
22:11 JoeJulian How about what's gluster volume status show?
22:12 map1541 joined #gluster
22:14 marc_888 Everything fine
22:14 marc_888 # gluster volume status
22:14 marc_888 Status of volume: gluster_volume
22:14 marc_888 Gluster process                             TCP Port  RDMA Port  Online  Pid
22:14 marc_888 ------------------------------------------------------------------------------
22:14 marc_888 Brick 2-gls-dus10-ci-efood-real-de.openstac
22:14 glusterbot marc_888: ----------------------------------------------------------------------------'s karma is now -24
22:14 marc_888 k.local:/export_vdb                         49153     0          Y       1526
22:14 marc_888 Brick 1-gls-dus10-ci-efood-real-de.openstac
22:14 marc_888 k.local:/export_vdb                         49153     0          Y       17047
22:14 marc_888 Brick 1-gls-dus21-ci-efood-real-de.openstac
22:14 marc_888 klocal:/export_vdb                          49152     0          Y       1393
22:14 marc_888 Brick 2-gls-dus21-ci-efood-real-de.openstac
22:14 marc_888 klocal:/export_vdb                          49152     0          Y       2923
22:14 marc_888 Brick 3-gls-dus10-ci-efood-real-de.openstac
22:14 marc_888 k.local:/export_vdb                         49153     0          Y       27850
22:14 marc_888 Self-heal Daemon on localhost               N/A       N/A        Y       1735
22:14 marc_888 Self-heal Daemon on 2-gls-dus10-ci-efood-re
22:14 marc_888 al-de.openstack.local                       N/A       N/A        Y       5643
22:14 marc_888 Self-heal Daemon on 1-gls-dus21-ci-efood-re
22:14 marc_888 al-de.openstacklocal                        N/A       N/A        Y       6811
22:14 marc_888 Self-heal Daemon on 2-gls-dus21-ci-efood-re
22:14 JoeJulian Please use a ,,(paste) service when sharing more than 3 lines to irc channels.
22:14 glusterbot For a simple way to paste output, install netcat (if it's not already) and pipe your output like: | nc termbin.com 9999
22:14 marc_888 al-de.openstacklocal                        N/A       N/A        Y       7411
22:14 marc_888 Self-heal Daemon on 1-gls-dus10-ci-efood-re
22:14 marc_888 al-de.openstack.local                       N/A       N/A        Y       26505
22:14 marc_888 Task Status of Volume gluster_volume
22:14 marc_888 ------------------------------------------------------------------------------
22:14 glusterbot marc_888: ----------------------------------------------------------------------------'s karma is now -25
22:14 marc_888 There are no active volume tasks
22:17 marc_888 Sorry, i will use that.
22:18 marc_888 Could it be because I have floating IPs in /etc/hosts, since I have the cluster across datacenters?
22:19 JoeJulian Only if those ips floated. The self-heal daemon (and everything else) needs to reach all the bricks by hostname.
22:21 marc_888 Yes i am populating the /etc/hosts like this:  https://dpaste.de/vN4O
22:21 glusterbot Title: dpaste (at dpaste.de)
22:23 JoeJulian As a hosts file, it looks ok.
22:23 JoeJulian You said the ips float?
22:24 marc_888 The 10.x.y.z are OpenStack floating IPs which are only reachable between datacenters, but the server itself does not know about them. I mean they do not appear in ifconfig for example; there you only have the 192.x.y.z
22:26 JoeJulian Oh, ok, so the servers are VMs and the IPs follow those VMs.
22:26 marc_888 Yes
22:26 JoeJulian I hate that they used that term. Floating IPs are something else. :)
22:27 marc_888 Yes, it's not that intuitive...
22:27 JoeJulian Did you add the new vm to the correct security group?
22:27 JoeJulian Oh! speaking of security groups... the ip address range changed.
22:27 JoeJulian @ports
22:28 glusterbot JoeJulian: glusterd's management port is 24007/tcp (also 24008/tcp if you use rdma). Bricks (glusterfsd) use 49152 & up. All ports must be reachable by both servers and clients. Additionally it will listen on 38465-38468/tcp for NFS. NFS also depends on rpcbind/portmap ports 111 and 2049.
22:28 JoeJulian s/ip address/port/
22:28 marc_888 I just opened ports from 1-50000 for both tcp and udp and nothing. I also used gluster volume set gluster_volume auth.allow '*'
22:28 glusterbot JoeJulian: Error: I couldn't find a message matching that criteria in my history of 1000 messages.
22:28 JoeJulian ok
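
For completeness, opening just the ports glusterbot listed (rather than 1-50000) would look something like this with firewalld on the gluster nodes; the brick port range is an example and should cover however many bricks you actually run. On OpenStack the same rules also have to exist in the instances' security group:

    firewall-cmd --permanent --add-port=24007-24008/tcp        # glusterd management (+rdma)
    firewall-cmd --permanent --add-port=49152-49251/tcp        # brick ports, one per brick
    firewall-cmd --permanent --add-port=111/tcp --add-port=111/udp   # rpcbind for gNFS
    firewall-cmd --permanent --add-port=38465-38468/tcp --add-port=2049/tcp   # gluster NFS
    firewall-cmd --reload
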
22:29 JoeJulian Well, you're going to need to look in the logs. You could check the api log on the machine which you ran the command.
22:29 marc_888 I read the GlusterFS documentation thoroughly
22:29 JoeJulian I don't expect that'll be very helpful. But the glusterd log on that machine might be more helpful.
22:30 JoeJulian You're looking for which host it says is down.
22:31 tacoboy joined #gluster
22:31 rastar joined #gluster
22:34 marc_888 It always says that the localhost is down or not connected. For example a "heal info" will always say that "Status: Transport endpoint is not connected" and "Number of entries: -"
22:34 baber joined #gluster
22:34 marc_888 Only for the machine where I am running the command, is this normal?
22:34 nobody481 joined #gluster
22:35 JoeJulian I don't think so... Checking my own logs.
22:38 JoeJulian nope
22:38 JoeJulian it specifically says "localhost"?
22:40 marc_888 In the heal command output it shows the hostname. I am looking now for the glusterd log in the PP environment, since I tried so many things in this test env which would also appear in the log
22:46 marc_888 PP is on 3.7.20 since today I did the upgrade, and I see no glusterd.log. For the test environment I also see this error:
22:46 marc_888 E [MSGID: 106243] [glusterd.c:1796:init] 0-management: creation of 1 listeners failed, continuing with succeeded transport
22:47 marc_888 And in the "graph" from the log it also specifies: "option transport-type rdma"
22:47 JoeJulian 3.7 is probably /var/log/glusterfs/mnt-glusterfs-glusterd.vol.log
22:48 marc_888 I am using tcp, which is what is displayed from "gluster volume info"
22:49 JoeJulian for glusterd that's configured in /etc/glusterfs/glusterd.vol
22:49 marc_888 For 3.7 the error is a little different if it matters, it looks for 24007 not 24008
22:50 marc_888 https://dpaste.de/0t8h
22:50 JoeJulian 24008 is rdma
22:50 glusterbot Title: dpaste (at dpaste.de)
22:50 JoeJulian 24007 is tcp
22:51 marc_888 10.96.209.230 is the "floating ip" of the respective host
22:55 marc_888 So in the "/etc/glusterfs/glusterd.vol" there is "option transport-type socket,rdma" ...
22:56 skylar1 joined #gluster
22:57 marc_888 From what i see on the glusterdocs: "If the transport type is not specified, tcp is used as the default."
23:02 ThHirsch joined #gluster
23:03 marc_888 ll
23:09 ThHirsch joined #gluster
23:17 farhorizon joined #gluster
23:21 pladd joined #gluster
23:27 JoeJulian marc_888: Sorry, was afk for a bit. yes, that's correct. So as it shows there it just tries to do both. Failing for rdma is fine.
23:30 masber joined #gluster
23:35 marc_888 Also for the 3.10 version I don't see a similar error in glusterd.log. Now in 3.7 the mnt-gluster.log is flooded every second
23:35 marc_888 It is very strange that the replication works...
23:36 marc_888 Do you know for adding a new server into a replicated cluster, are these the correct steps?
23:37 marc_888 sudo gluster peer probe serverName
23:37 marc_888 gluster volume add-brick gluster_volume replica 5 serverName:/export_vdb force
23:38 marc_888 gluster volume heal gluster_volume full
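
Before running the full heal after steps like those, it is usual to confirm the new peer and brick are actually seen as connected, e.g.:

    gluster peer status                       # new node should show 'Peer in Cluster (Connected)'
    gluster volume status gluster_volume      # every brick and self-heal daemon should be online
    gluster volume heal gluster_volume info   # per-brick list of entries still needing heal
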
23:39 JoeJulian "It is very strange that the replication works.." replication is done at the client, so it may not be that strange.
23:50 farhorizon joined #gluster
23:55 nobody481 left #gluster
