
IRC log for #gluster, 2017-08-07


All times shown according to UTC.

Time Nick Message
00:08 shyam joined #gluster
00:10 anthony25 joined #gluster
00:20 anthony25 joined #gluster
00:31 anthony25 joined #gluster
00:47 anthony25 joined #gluster
01:07 anthony25 joined #gluster
01:09 h4rry joined #gluster
01:14 squeakyneb joined #gluster
01:18 anthony25 joined #gluster
01:27 anthony25 joined #gluster
01:37 Jacob843 joined #gluster
01:37 anthony25 joined #gluster
01:38 atinm joined #gluster
01:51 ilbot3 joined #gluster
01:51 Topic for #gluster is now Gluster Community - https://www.gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - https://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:53 prasanth joined #gluster
01:58 anthony25 joined #gluster
02:08 anthony25 joined #gluster
02:19 anthony25 joined #gluster
02:29 anthony25 joined #gluster
02:41 anthony25 joined #gluster
02:44 AshishS joined #gluster
03:00 anthony25 joined #gluster
03:11 anthony25 joined #gluster
03:26 anthony25 joined #gluster
03:26 nbalacha joined #gluster
03:42 riyas joined #gluster
03:47 anthony25 joined #gluster
04:00 itisravi joined #gluster
04:06 susant joined #gluster
04:08 anthony25 joined #gluster
04:15 ppai joined #gluster
04:18 dominicpg joined #gluster
04:21 anthony25 joined #gluster
04:32 jiffin joined #gluster
04:37 PsionTheory joined #gluster
04:45 buvanesh_kumar joined #gluster
04:48 anthony25 joined #gluster
04:52 ndarshan joined #gluster
04:53 atinm joined #gluster
04:55 skumar joined #gluster
04:56 AshishS joined #gluster
05:01 karthik_us joined #gluster
05:07 anthony25 joined #gluster
05:11 h4rry joined #gluster
05:19 kdhananjay joined #gluster
05:21 anthony25 joined #gluster
05:21 rafi joined #gluster
05:34 msvbhat joined #gluster
05:36 rastar joined #gluster
05:37 rastar joined #gluster
05:42 skumar_ joined #gluster
05:44 apandey joined #gluster
05:44 poornima_ joined #gluster
05:50 ankitr joined #gluster
05:52 anthony25 joined #gluster
05:53 ankitr joined #gluster
05:56 sanoj joined #gluster
06:04 skoduri joined #gluster
06:04 ashiq joined #gluster
06:04 aravindavk joined #gluster
06:06 skumar__ joined #gluster
06:08 vpapnoi joined #gluster
06:14 skumar_ joined #gluster
06:15 sanoj joined #gluster
06:17 jyasveer joined #gluster
06:21 jyasveer joined #gluster
06:22 anthony25 joined #gluster
06:26 anthony25_ joined #gluster
06:37 jtux joined #gluster
06:41 skumar__ joined #gluster
06:41 ppai joined #gluster
06:47 rafi2 joined #gluster
06:50 [diablo] joined #gluster
06:51 _KaszpiR_ joined #gluster
06:59 jtux joined #gluster
07:03 susant joined #gluster
07:07 ogelpre joined #gluster
07:10 victori_ joined #gluster
07:13 skumar_ joined #gluster
07:20 mbukatov joined #gluster
07:21 the-me joined #gluster
07:23 rastar joined #gluster
07:38 atinm joined #gluster
07:47 gyadav joined #gluster
08:01 gyadav joined #gluster
08:07 gyadav_ joined #gluster
08:08 ThHirsch joined #gluster
08:08 bartden joined #gluster
08:09 bartden Hi, is it possible to use TLS mutual auth from a client to multiple glusterfs clusters? Because these clusters have certificates signed by different CA's
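(One possible approach, not confirmed in the channel: on the Gluster releases of this era the client reads its trust anchors from a single PEM bundle, /etc/ssl/glusterfs.ca, so a client can trust clusters signed by different CAs by concatenating the CA certificates into that file; each cluster's servers would still need to trust whichever CA signed the client's own certificate. The per-cluster CA file names below are placeholders.)

    # client-side trust bundle covering both clusters' CAs (sketch)
    cat cluster1-ca.pem cluster2-ca.pem > /etc/ssl/glusterfs.ca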
08:13 ankitr joined #gluster
08:26 ankitr joined #gluster
08:32 ankitr joined #gluster
08:43 skoduri joined #gluster
08:47 _KaszpiR_ joined #gluster
08:53 sanoj joined #gluster
08:55 mahendratech joined #gluster
08:55 skumar_ joined #gluster
09:04 msvbhat joined #gluster
09:05 _KaszpiR_ joined #gluster
09:05 kotreshhr joined #gluster
09:13 h4rry joined #gluster
09:19 skumar joined #gluster
09:23 BitByteNybble110 joined #gluster
09:24 om2 joined #gluster
09:56 rafi joined #gluster
10:14 kotreshhr joined #gluster
10:20 msvbhat joined #gluster
10:28 shyam joined #gluster
10:37 ThHirsch joined #gluster
10:42 ankitr joined #gluster
10:46 saltsa joined #gluster
10:48 skumar joined #gluster
10:52 ppai joined #gluster
10:55 ahino joined #gluster
11:09 jkroon joined #gluster
11:12 baber joined #gluster
11:23 WebertRLZ joined #gluster
11:53 baojg joined #gluster
12:01 shyam joined #gluster
12:17 kotreshhr left #gluster
12:37 baber joined #gluster
12:40 susant joined #gluster
12:46 kramdoss_ joined #gluster
12:51 federicoaguirre joined #gluster
12:51 federicoaguirre Hi there.!
12:51 federicoaguirre I have a 2-brick replication volume...
12:52 federicoaguirre and I'm dealing with High CPU
12:53 federicoaguirre client.event-threads: 4
12:53 federicoaguirre server.event-threads: 4
12:53 federicoaguirre could it be related to:
12:53 federicoaguirre client and server event-thread?
12:56 vaboston joined #gluster
12:58 vaboston Hi, I have a question: if I add a new node to a replicated-type volume, will the existing data in the volume be replicated to the new node or not?
13:07 pdrakeweb joined #gluster
13:20 bartden Hi, i keep getting “E [socket.c:4310:socket_init] 0-tcp.s_cluster_2-server: could not load private key” when trying to start a gluster volume with server and client SSL on. I generated a private key with openssl genrsa -out /etc/ssl/glusterfs.key 2048 (permissions of key file root:root 644)
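(For context, a minimal sketch of the TLS file layout and volume options Gluster expects, following the upstream SSL guide; the error above often means the key/cert pair under /etc/ssl/ is missing or unreadable by the brick process. "myvol" is a placeholder volume name.)

    # on every server (and on clients too, if client.ssl is on): key, cert, CA bundle
    openssl genrsa -out /etc/ssl/glusterfs.key 2048
    openssl req -new -x509 -key /etc/ssl/glusterfs.key \
        -subj "/CN=$(hostname -f)" -days 365 -out /etc/ssl/glusterfs.pem
    cp ca.pem /etc/ssl/glusterfs.ca        # CA that signed the peers' certificates
    # enable TLS on the volume
    gluster volume set myvol server.ssl on
    gluster volume set myvol client.ssl on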
13:23 Ulrar Any idea how gfids are generated when sharding is enabled? Is it some hash of the filename, or completely random?
13:24 kdhananjay Ulrar: completely random
13:24 Ulrar Great
13:25 Ulrar Really hoping they'll be able to recover the deleted shards from yesterday
13:27 ankitr joined #gluster
13:29 buvanesh_kumar joined #gluster
13:35 shyam joined #gluster
13:43 skylar joined #gluster
13:51 riyas joined #gluster
13:52 plarsen joined #gluster
13:57 vbellur joined #gluster
13:57 baber joined #gluster
14:00 vbellur joined #gluster
14:01 vbellur joined #gluster
14:05 shyam joined #gluster
14:06 kramdoss_ joined #gluster
14:13 poornimag joined #gluster
14:21 vbellur joined #gluster
14:21 susant joined #gluster
14:23 shyam joined #gluster
14:30 msvbhat joined #gluster
14:41 federicoaguirre anyone know how I can get the file path from the gfid? getattr is not working
14:43 federicoaguirre I have <gfid:8516b9ef-136b-407a-b5fd-d4b9606d21b5> on both nodes - it is in split-brain
14:43 federicoaguirre but I dont have the file path
14:46 buvanesh_kumar joined #gluster
14:46 poornimag federicoaguirre, https://gluster.readthedocs.io/en/latest/Troubleshooting/gfid-to-path/
14:46 glusterbot Title: gfid to path - Gluster Docs (at gluster.readthedocs.io)
14:47 poornimag federicoaguirre, if none of those work the other crude way is to check on the bricks
14:47 federicoaguirre thanks poornimag
14:47 federicoaguirre I've read this...
14:47 federicoaguirre and didn't work...
14:48 federicoaguirre Operation not supported or No such attribute
14:48 poornimag federicoaguirre, let's say you have 2 bricks: host1:/brick1 host2:/brick2, then log in to host1 and try to list the path /brick1/.glusterfs/85/16/85*
14:48 federicoaguirre got it.!
14:49 poornimag federicoaguirre, if you don't find it on host1, try to do the same on host2
14:49 federicoaguirre that's what I did
14:49 farhorizon joined #gluster
14:49 federicoaguirre the gfid is on both
14:49 poornimag federicoaguirre, i hope you don't have 100s of nodes,
14:49 federicoaguirre same md5 on both
14:49 federicoaguirre ¿?
14:50 poornimag federicoaguirre, if the gfid is a symlink, then it's a directory; ll resolves the symlink and shows the original file
14:50 ackjewt joined #gluster
14:51 federicoaguirre no it is a regular file, not a symlink
14:51 federicoaguirre those are just files
14:52 poornimag federicoaguirre, if the gfid file is a hardlink, then you will have to find its inode by using ls -i  /brick/.glusterfs/85/16/8516b9ef-136b-407a-b5fd-d4b9606d21b5
14:53 poornimag federicoaguirre, and use find command to find other files with the same inode, this list will contain the original file
14:53 poornimag federicoaguirre, find -samefile  /brick/.glusterfs/85/16/8516b9ef-136b-407a-b5fd-d4b9606d21b5
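(A compact version of the lookup poornimag describes, assuming bash and GNU find; the brick root /var/data/fileserver is taken from the paste further down and the gfid is the one quoted above.)

    GFID=8516b9ef-136b-407a-b5fd-d4b9606d21b5
    BRICK=/var/data/fileserver
    # the gfid file lives under .glusterfs/<first two hex chars>/<next two>/
    ls -li "$BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID"
    # for a regular file (link count 2) the other hardlink is the real path
    find "$BRICK" -path "$BRICK/.glusterfs" -prune -o \
        -samefile "$BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID" -print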
14:54 vbellur federicoaguirre: if it is a metadata split-brain, please check all attributes and extended attributes related to the file..
14:54 fabianvf joined #gluster
14:55 poornimag federicoaguirre, vbellur I am not sure if there are any self-heal commands that actually list the files in split-brain?
14:55 federicoaguirre yep..
14:55 federicoaguirre gluster volume heal storage info
14:56 federicoaguirre I've got the inode for the gfid...
14:56 vbellur poornimag: the log files provide information about the nature of split-brain too
14:56 poornimag federicoaguirre, gluster volume heal info split-brain didn't this list the file name?
14:56 federicoaguirre I understood the next step..
14:56 federicoaguirre nop, just the gfid
14:57 vbellur poornimag: self-heal info does an inode_grep/inode_path operation to locate the path.. if there's no entry in the inode table, we only get the gfid in the output..
14:58 poornimag vbellur, ahh ok
14:59 ackjewt Hello. We're using GlusterFS 3.10.3 replicated (2 nodes + 1 arbiter) and we can't modify the ACLs on the volume (or subfolders) with setfacl. We're using XFS as backend storage, which have ACL support enabled by default. Getting "Operation not supported" and i don't see anything in the logs
14:59 federicoaguirre just the gfid got this inode
14:59 skoduri joined #gluster
15:00 poornimag federicoaguirre, can you paste the output of /brick/.glusterfs/85/16/8516b9ef-136b-407a-b5fd-d4b9606d21b5 from all the nodes where this file is present?
15:00 federicoaguirre sure
15:00 poornimag ackjewt, mount command should have -o acl option
15:00 federicoaguirre ls ? cat ?
15:01 poornimag ls
15:01 poornimag i mean ls -l
15:01 federicoaguirre ok
15:01 poornimag to check the link count
15:01 federicoaguirre brick1: -rwxrwxrwx 2 root root 30759 Mar 12  2015 /var/data/fileserver/.glusterfs/aa/b7/aab7d7a6-8f1c-49ac-9ced-fa3d6c1227a3
15:01 wushudoin joined #gluster
15:02 federicoaguirre brick2: -rwxrwxrwx 2 root root 30759 Mar 12  2015 /var/data/fileserver/.glusterfs/aa/b7/aab7d7a6-8f1c-49ac-9ced-fa3d6c1227a3
15:02 federicoaguirre this is one of "In Split Brain"
15:03 poornimag federicoaguirre, so the link count is 2, there should be another file with the same inode number in /var/data/fileserver/
15:03 ackjewt poornimag: for the XFS mount or the glusterfs mount which i've mounted locally (-t glusterfs)?
15:04 poornimag ackjewt for the gluster mount
15:04 federicoaguirre finding
15:08 PsionTheory joined #gluster
15:10 riyas joined #gluster
15:10 ackjewt poornimag: thanks, seems like it helped. :) Strange that the docs are saying that you should "Remount the backend file system with "-o acl" option." and nothing about having -o acl on the gluster mount itself.
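(A minimal example of the mount change that fixed it, with placeholder host, volume and mount-point names; the acl option has to be on the GlusterFS FUSE mount, not only on the XFS brick mount.)

    # one-off fuse mount with POSIX ACLs enabled
    mount -t glusterfs -o acl server1:/myvol /mnt/myvol
    # or persistently, in /etc/fstab:
    # server1:/myvol  /mnt/myvol  glusterfs  acl,_netdev  0 0
    setfacl -m u:someuser:rwx /mnt/myvol/shared   # should now succeed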
15:10 federicoaguirre it could take long time...
15:11 federicoaguirre poornimag: CPU load increases to 100% every 10 minutes... constantly. Could it be that the heal process runs every 10 minutes?
15:15 h4rry joined #gluster
15:17 ankitr joined #gluster
15:19 kramdoss_ joined #gluster
15:26 jiffin joined #gluster
15:33 sahina joined #gluster
15:45 ThHirsch joined #gluster
15:50 vbellur joined #gluster
15:50 kpease joined #gluster
15:52 riyas joined #gluster
15:54 JoeJulian federicoaguirre: it does, yes.
15:57 sahina joined #gluster
16:01 riyas joined #gluster
16:06 sahina joined #gluster
16:09 msvbhat joined #gluster
16:13 kramdoss_ joined #gluster
16:18 baber joined #gluster
16:25 federicoaguirre JoeJulian: thanks.!
16:37 federicoaguirre if the heal process has no files to heal... is it normal for it to consume high CPU?
16:38 JoeJulian I can't think of any reason why it should.
16:38 JoeJulian Is it really, or is it all wait time?
16:40 federicoaguirre what is the URL to share an image?
16:40 JoeJulian there's no favorite
16:40 JoeJulian I'm open to suggestions.
16:43 federicoaguirre http://picpaste.com/2-Q82gcIiQ.png
16:45 JoeJulian The only "normal" event that should cause that is something trying to heal and that brick server is calculating hashes.
16:46 federicoaguirre Brick prd-zetech-glusterfs-01:/var/data/fileserver
16:46 federicoaguirre Status: Connected
16:46 federicoaguirre Number of entries: 0
16:46 federicoaguirre Brick prd-zetech-glusterfs-02:/var/data/fileserver
16:47 federicoaguirre Status: Connected
16:47 federicoaguirre Number of entries: 0
16:47 JoeJulian I assume you've checked `gluster volume heal $vol statistics`. The next things to check would be the brick log for that brick and, perhaps, glustershd.log
16:47 federicoaguirre there are no entries to heal
16:48 JoeJulian Heals can come from clients, too, and would be in the client log. That /should/ show up in the heal info though.
16:49 JoeJulian You can disable client-side data heals by setting cluster.data-self-heal off
16:50 JoeJulian You could also see that without anything needing healed if you've triggered a "heal $vol full"
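(The commands JoeJulian is referring to, collected in one place; "storage" is the volume name used earlier in the channel.)

    # per-brick crawl statistics from the self-heal daemon
    gluster volume heal storage statistics
    # disable client-side data self-heals so only glustershd performs them
    gluster volume set storage cluster.data-self-heal off
    # a full heal crawls everything, even when heal info shows no entries
    gluster volume heal storage full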
16:50 vbellur federicoaguirre: you could also use something like perf or strace to see where the busy brick is spending time
16:58 ankitr joined #gluster
17:05 baber joined #gluster
17:08 federicoaguirre Type of crawl: INDEX
17:08 federicoaguirre No. of entries healed: 1
17:08 federicoaguirre No. of entries in split-brain: 96
17:08 federicoaguirre No. of heal failed entries: 1
17:08 federicoaguirre how can I see those 96 files?
17:11 baojg joined #gluster
17:15 JoeJulian `heal $vol info split-brain`? (not sure if that's still valid... that interface keeps changing)
17:23 rwheeler joined #gluster
17:28 farhorizon joined #gluster
17:33 farhorizon joined #gluster
17:39 _KaszpiR_ joined #gluster
17:45 jkroon joined #gluster
18:10 baber joined #gluster
18:10 farhorizon joined #gluster
18:12 baojg joined #gluster
18:45 atinm joined #gluster
19:00 rafi joined #gluster
19:13 baojg joined #gluster
19:17 h4rry joined #gluster
19:30 federicoaguirre hi there.. this variable.. server.event-threads
19:30 federicoaguirre is set on 4
19:30 federicoaguirre I've a 2 cores server..
19:31 federicoaguirre what is the proper configuration?
19:31 federicoaguirre also, these are the configs I have...
19:31 federicoaguirre Options Reconfigured:
19:31 federicoaguirre server.event-threads: 4
19:31 federicoaguirre client.event-threads: 4
19:31 federicoaguirre cluster.lookup-optimize: on
19:31 federicoaguirre performance.write-behind-window-size: 1MB
19:31 federicoaguirre performance.client-io-threads: on
19:31 federicoaguirre performance.io-thread-count: 32
19:31 federicoaguirre performance.read-ahead: disable
19:31 federicoaguirre nfs.disable: on
19:32 federicoaguirre performance.readdir-ahead: enable
19:32 federicoaguirre cluster.readdir-optimize: on
19:32 federicoaguirre performance.quick-read: on
19:32 federicoaguirre performance.io-cache: on
19:32 federicoaguirre performance.write-behind: on
19:32 federicoaguirre storage.build-pgfid: off
19:32 federicoaguirre think any of this can affect my poor performance?
19:34 federicoaguirre I have many files copied from FTP
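(A hedged tuning sketch: common guidance is to keep event-threads at or below the CPU core count, so 4 threads on a 2-core server is likely oversized; "storage" is assumed to be the volume name.)

    # inspect the current values
    gluster volume get storage server.event-threads
    gluster volume get storage client.event-threads
    # scale them down to match a 2-core server
    gluster volume set storage server.event-threads 2
    gluster volume set storage client.event-threads 2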
19:50 baber joined #gluster
19:54 tg2 joined #gluster
20:06 jkroon joined #gluster
20:09 MadPsy joined #gluster
20:09 MadPsy joined #gluster
20:14 baojg joined #gluster
20:31 federicoaguirre I have many files copied from FTP
20:34 farhorizon joined #gluster
20:39 farhorizon joined #gluster
20:56 farhorizon joined #gluster
21:13 Peppard joined #gluster
21:14 level7 joined #gluster
21:15 baojg joined #gluster
21:16 shyam joined #gluster
21:27 plarsen joined #gluster
21:32 vbellur1 joined #gluster
21:33 vbellur2 joined #gluster
21:34 vbellur1 joined #gluster
21:35 vbellur joined #gluster
21:36 vbellur1 joined #gluster
21:37 vbellur joined #gluster
21:47 level7_ joined #gluster
22:00 vbellur joined #gluster
22:02 Klas joined #gluster
22:07 vbellur joined #gluster
22:12 ThHirsch joined #gluster
22:16 plarsen So I have a bit of a conundrum ... one of my bricks (nodes) in a replication setup had a "melt down" and its boot disk got hosed. Reinstalling and remounting the data, I now have a gluster node with no volumes defined, but all the data/mount points are there. I'm trying to re-establish the peer setup, but this is where things are odd
22:16 plarsen On the node that USED to work, I was dumb enough to restart glusterd. Doing so resulted in a startup failure because there's no quorum.  I cannot add/peer the "new" node since glusterd isn't running on the other node.
22:17 plarsen Before I restarted glusterd, when I tried to peer it from THAT node I was told that since the volume was out of quorum it refused to do anything with the peers
22:18 plarsen So how do I proceed? I have all the data, all the state is there. On one node I can start glusterd but the volume definitions haven't been synced over yet. The other glusterd refuses to start.
22:18 plarsen I presume the password/auth I see in the peer was lost when / was recreated on a new hdd? I didn't have a backup of /var/lib/glusterd, only the mount point of the data
22:19 JoeJulian You'll need to recreate /var/lib/glusterd/glusterd.info for the replaced server.
22:20 JoeJulian You can find the UUID from one of the existing peers (in /var/lib/glusterd/peers).
22:21 JoeJulian grep for the hostname. The filename is the uuid you need.
22:22 JoeJulian With the correct uuid, the other glusterd can recognize it as the missing peer and the volume info should sync over.
22:23 farhorizon joined #gluster
22:23 plarsen Thank JoeJulian - so I overwrite the UUID that's in that file right now?
22:23 plarsen (on the new host)
22:23 JoeJulian yes
22:24 JoeJulian stop glusterd first, change it, start glusterd.
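(A sketch of the recovery steps JoeJulian outlines; the IP address and UUID are placeholders, and the operating-version value should be copied from a surviving peer's glusterd.info.)

    # on a surviving peer: the filename of the matching peer file is the UUID you need
    grep -l '192.0.2.12' /var/lib/glusterd/peers/*
    # on the rebuilt node: stop glusterd, restore the UUID, start glusterd
    systemctl stop glusterd
    cat > /var/lib/glusterd/glusterd.info <<'EOF'
    UUID=00000000-1111-2222-3333-444444444444
    operating-version=31003
    EOF
    systemctl start glusterd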
22:26 plarsen JoeJulian, hmmm still complaining about no quorum so it refuses to start: Server quorum not met. Rejecting operation.
22:26 JoeJulian glusterd should not stop for server quorum.
22:27 JoeJulian glusterd --debug and ,,(paste) the results.
22:27 glusterbot For a simple way to paste output, install netcat (if it's not already) and pipe your output like: | nc termbin.com 9999
22:27 JoeJulian otherwise you would never be able to start a stopped cluster.
22:28 plarsen JoeJulian, https://paste.fedoraproject.org/paste/3KxCO4rwfLnzGu9mg4CdxQ
22:28 glusterbot Title: Untitled - Modern Paste (at paste.fedoraproject.org)
22:29 plarsen hmmmm "unable to find friend" I guess is the problem?
22:29 plarsen DNS is still the same.
22:31 JoeJulian Were your bricks all created by hostname, or did you use an ip address?
22:31 plarsen JoeJulian, that's for a volume I'm not using/needing right now.
22:31 plarsen The failed one was registered with the IP when it was created a long time ago.
22:31 plarsen So all the metadata just has the IP
22:32 JoeJulian It appears that inability to resolve is what's causing glusterd to exit.
22:32 plarsen THIS one I am not sure what glusternode1 is ... no such name on the network
22:36 JoeJulian hmm, that's pretty old and I think I remember some bug around this... looking.
22:40 plarsen JoeJulian, Somehow (this is really strange) the resolv.conf had lost the domain that hostname was in. Even stranger, fixing that didn't help initially. Systemd failed to start it with the same error, then I ran the debug and it WORKED. Then systemd would start it :) By then I also had the nodename added to /etc/hosts to be sure
22:40 JoeJulian I'm not having any luck. See if you can find which volume has that glusternode1 hostname - since you don't recognize it.
22:40 JoeJulian Ah, ok.
22:41 plarsen I think it may have been a hostname way back when gluster was first installed, but I've rebuilt/redone the install several times since then, just not that brick
22:41 victori joined #gluster
22:42 plarsen YES!!!!!  Thank you SO much JoeJulian - I woke up to a strange sound this morning and a crashed/melted disk (long story). I've been fighting this thing all day :(
22:43 plarsen I'll DEFINITELY remember the --debug option .. very handy
22:48 JoeJulian You're quite welcome. Glad you got it sussed.
23:01 caitnop joined #gluster
23:18 h4rry joined #gluster
23:19 baojg joined #gluster
23:19 cliluw joined #gluster
