
IRC log for #gluster, 2017-08-28


All times shown according to UTC.

Time Nick Message
00:19 javi404 joined #gluster
00:31 zcourts joined #gluster
00:42 shyam joined #gluster
01:01 shyam joined #gluster
01:31 gyadav joined #gluster
01:40 susant joined #gluster
01:50 bEsTiAn joined #gluster
01:51 ilbot3 joined #gluster
01:51 Topic for #gluster is now Gluster Community - https://www.gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - https://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:01 omie888777 joined #gluster
02:02 gospod2 joined #gluster
02:06 Guest9038 joined #gluster
02:06 luizcpg joined #gluster
02:13 jiffin joined #gluster
02:28 X-ian joined #gluster
02:45 bEsTiAn joined #gluster
02:53 susant joined #gluster
03:02 omie888777 joined #gluster
03:19 Guest9038 joined #gluster
03:25 shyu joined #gluster
03:28 jiffin joined #gluster
03:38 gyadav joined #gluster
03:43 dominicpg joined #gluster
03:45 bEsTiAn joined #gluster
03:49 itisravi joined #gluster
03:54 nbalacha joined #gluster
03:57 luizcpg joined #gluster
04:06 Guest9038 joined #gluster
04:15 riyas joined #gluster
04:24 ppai joined #gluster
04:31 MrAbaddon joined #gluster
04:41 kdhananjay joined #gluster
05:11 sanoj joined #gluster
05:21 skumar joined #gluster
05:25 hgowtham joined #gluster
05:33 gyadav_ joined #gluster
05:33 msvbhat joined #gluster
05:37 ndarshan joined #gluster
05:40 CmndrSp0ck joined #gluster
05:42 apandey joined #gluster
05:44 bEsTiAn joined #gluster
05:47 sona joined #gluster
05:53 karthik_us joined #gluster
05:58 gyadav__ joined #gluster
06:01 CmndrSp0ck joined #gluster
06:03 neilos joined #gluster
06:04 prasanth joined #gluster
06:06 neilos hey guys, we have a spewing logfile on brick1 of a 40-brick, 20-server cluster, replica 2. This is part of the brick log that repeats many, many times: https://paste.ee/p/Dqdde
06:06 glusterbot Title: Paste.ee - gluster-brick-log (at paste.ee)
06:07 neilos we have restarted the server and performed xfs_repair on the brick filesystem. Wondering why there are so many ../ links. Running GlusterFS 3.6.3, planning an upgrade to 3.10.5 soon.
06:10 rafi1 joined #gluster
06:15 marlinc joined #gluster
06:15 ankitr joined #gluster
06:17 fsimonce joined #gluster
06:19 jiffin joined #gluster
06:23 _KaszpiR_ joined #gluster
06:24 mbukatov joined #gluster
06:25 sanoj joined #gluster
06:33 poornima joined #gluster
06:41 jtux joined #gluster
06:42 skoduri joined #gluster
06:45 bEsTiAn joined #gluster
06:59 omie88877777 joined #gluster
07:17 jkroon joined #gluster
07:23 ivan_rossi joined #gluster
07:39 bEsTiAn joined #gluster
07:45 msvbhat joined #gluster
08:07 MikeLupe joined #gluster
08:13 prasanth joined #gluster
08:16 _KaszpiR_ joined #gluster
08:17 apandey joined #gluster
08:28 zcourts joined #gluster
08:34 _KaszpiR_ joined #gluster
08:49 rafi joined #gluster
08:53 shyu joined #gluster
08:53 msvbhat joined #gluster
09:01 karthik_ joined #gluster
09:01 buvanesh_kumar joined #gluster
09:06 zcourts joined #gluster
09:19 koolfy hello
09:19 glusterbot koolfy: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an
09:19 koolfy oh not again :(
09:20 koolfy so, in what case could the glusterfs client return ENODATA for a file on a gluster mount point
09:20 koolfy while the file in the local volume is actually fije?
09:20 koolfy fine*
09:20 koolfy for example:
09:20 koolfy -rw-r--r-- 2 1000 ftp 23K Nov  8  2016 /mnt/jbod1/gluster/ftp/.glusterfs/01/67/0167ee87-3be0-4c54-abee-c148a13aab9f
09:20 glusterbot koolfy: -rw-r--r's karma is now -26
09:21 koolfy this file is fine and can be read
09:21 koolfy it's hardlinked outside .glusterfs/ too
09:21 koolfy but its representation on the gluster mountpoint is not
09:21 koolfy and it has these attributes:
09:21 koolfy ---------T 1 root root 0 Jan 12  2017
09:22 glusterbot koolfy: -------'s karma is now -13
09:22 koolfy size zero, permissions pretty much zero too
09:22 koolfy it looks like gluster has just a generic null representation of the file, while both replica bricks actually contain the proper file, at the proper location, with the proper gfid
09:23 koolfy (I'm using gluster 3.6.9)
09:27 ThHirsch joined #gluster
09:27 shyu joined #gluster
09:36 jkroon_ joined #gluster
09:39 misc mhh, 3.6.9 is quite old :/
09:45 zcourts_ joined #gluster
09:48 jiffin koolfy: did u find any similar file with same permission at the backend?
09:52 apandey joined #gluster
09:58 koolfy jiffin: no the mirror file in the brick is fine, has data and proper permission
09:58 koolfy as far as I can tell
10:00 jiffin koolfy: can u mount it on another machine and see what happens
10:00 jiffin ?
10:01 jiffin koolfy: you are saying there is no other file with the same name and that permission existing on the backend?
10:02 koolfy several machines have the volume mounted and have the same error on the file
10:03 koolfy jiffin: correct, the file with the same name in the brick is fine, has data, and proper permissions
10:04 jiffin 'T' files are usually generated during rename/rebalance. They are link files which point to the actual file.
10:05 jiffin for ex consider a 3-brick distributed volume with bricks b1, b2, b3
10:06 jiffin now say a file with name 'a' is created from the mount point
10:06 jiffin which resides in b1
10:07 jiffin now if you rename 'a' to 'b'
10:07 jiffin consider that the new name 'b' should map to b2
10:08 jiffin but to avoid data migration (moving 'a' from b1 to b2 and renaming it to 'b'), gluster renames 'a' to 'b' in b1 and creates a T file in b2
10:08 jiffin with the same name
10:09 jiffin so the stale 'T' file may be related to some rename calls
10:11 jiffin IMO there should be some stale 'T' files at the backend, otherwise it would not show up on the mount point
10:11 jiffin or at least in the .glusterfs folder with a different gfid
10:11 omie888777 joined #gluster
10:16 psony joined #gluster
10:20 koolfy thanks
10:20 koolfy I'll look for them :)
10:20 koolfy that would explain a lot
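A minimal sketch of how one might look for the stale linkto ('T') files jiffin describes, run directly against a brick directory. The brick path below is only an example taken from koolfy's earlier paste; adapt it to each brick. Linkto files are zero-byte, carry only the sticky bit (the ---------T mode shown above), and have a trusted.glusterfs.dht.linkto xattr naming the subvolume that holds the real data:

    # find candidate stale linkto files on one brick and print where each one points
    find /mnt/jbod1/gluster/ftp -type f -perm 1000 -size 0 \
        -not -path '*/.glusterfs/*' -print0 \
      | xargs -0 -r getfattr -n trusted.glusterfs.dht.linkto --absolute-names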
10:29 cliluw joined #gluster
10:34 shyam joined #gluster
10:36 msvbhat joined #gluster
10:44 rafi1 joined #gluster
10:49 ThHirsch joined #gluster
10:51 zcourts joined #gluster
10:52 ankitr joined #gluster
11:01 zcourts_ joined #gluster
11:03 sanoj joined #gluster
11:18 nisroc joined #gluster
11:18 sanoj joined #gluster
11:18 msvbhat joined #gluster
11:27 baojg joined #gluster
11:28 WebertRLZ joined #gluster
11:34 karthik_us joined #gluster
11:48 itisravi joined #gluster
12:08 kotreshhr joined #gluster
12:11 X-ian_ joined #gluster
12:15 shyam joined #gluster
12:26 msvbhat joined #gluster
12:29 fidelrodriguez joined #gluster
12:29 fidelrodriguez Hello everyone
12:30 fidelrodriguez I need help in setting up hot tier. if anyone can look at my bug post I will greatly appreciate it. https://bugzilla.redhat.com/show_bug.cgi?id=1484156
12:30 glusterbot Bug 1484156: urgent, unspecified, ---, bugs, NEW , Can't attach volume tier to create  hot tier
12:30 gyadav__ joined #gluster
12:31 MikeLupe joined #gluster
12:33 hgowtham fidelrodriguez, hi! can you say the steps you performed?
12:33 fidelrodriguez sure one moment
12:33 fidelrodriguez in the bug or through here?
12:34 hgowtham i need the gluster volume status output
12:34 hgowtham it misses that
12:34 hgowtham anyways its fine
12:35 fidelrodriguez going to get it for you now
12:35 fidelrodriguez one moment
12:35 hgowtham sure
12:36 fidelrodriguez Status of volume: vmVolume
12:36 fidelrodriguez Gluster process                             TCP Port  RDMA Port  Online  Pid
12:36 fidelrodriguez ------------------------------------------------------------------------------
12:36 fidelrodriguez Brick 172.16.0.11:/vmVolume/.bricksvm       49154     0          Y       3230
12:36 fidelrodriguez Brick 172.16.0.11:/vmVolume2/.bricksvm      49155     0          Y       3213
12:36 glusterbot fidelrodriguez: ----------------------------------------------------------------------------'s karma is now -23
12:36 fidelrodriguez Brick 172.16.0.12:/vmVolume/.bricksvm       49154     0          Y       2348
12:36 fidelrodriguez Brick 172.16.0.12:/vmVolume2/.bricksvm      49155     0          Y       2375
12:36 fidelrodriguez Brick 172.16.0.13:/vmVolume/.bricksvm       49154     0          Y       3216
12:36 fidelrodriguez Brick 172.16.0.13:/vmVolume2/.bricksvm      49155     0          Y       3225
12:36 fidelrodriguez Brick 172.16.0.14:/vmVolume/.bricksvm       49154     0          Y       3203
12:36 fidelrodriguez Brick 172.16.0.14:/vmVolume2/.bricksvm      49155     0          Y       3209
12:36 fidelrodriguez Self-heal Daemon on localhost               N/A       N/A        Y       3312
12:36 fidelrodriguez Self-heal Daemon on 172.16.0.14             N/A       N/A        Y       3366
12:36 fidelrodriguez Self-heal Daemon on 172.16.0.12             N/A
12:37 * hgowtham checking
12:37 hosom joined #gluster
12:37 fidelrodriguez let me put it in the bug report as well. the output in here didn't show up nicely
12:38 hgowtham fidelrodriguez, sure
12:39 fidelrodriguez i updated the bug report with more info https://bugzilla.redhat.com/show_bug.cgi?id=1484156
12:39 glusterbot Bug 1484156: urgent, unspecified, ---, bugs, NEW , Can't attach volume tier to create  hot tier
12:41 kotreshhr left #gluster
12:42 fidelrodriguez I added a HDD configuration diagram to further explain my setup
12:45 hgowtham can you attach the logs too?
12:46 fidelrodriguez does the caching ssd drive need to be in the same volume that vmVolume is in? I don't see why it needs to, since I was able to create vmVolume with bricks on different physical volumes
12:47 hgowtham the ssd drives are similar to the hdd bricks you used, as far as mounting them goes
12:47 hgowtham only when you add the ssd drives to the volume (the one with the hdds) does the whole volume become a tiered volume
12:48 hgowtham only after you attach the ssd bricks do they become part of vmVolume
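A sketch of the attach-tier commands being discussed, using placeholder host and brick names (the exact command fidelrodriguez ran appears a few lines below). The ssd bricks only need to be formatted and mounted like any other brick before being attached:

    # generic form on gluster >= 3.7; host1..host4 and /ssd/brick are placeholders
    gluster volume tier vmVolume attach replica 2 \
        host1:/ssd/brick host2:/ssd/brick host3:/ssd/brick host4:/ssd/brick
    # check the hot tier after attaching
    gluster volume tier vmVolume status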
12:49 fidelrodriguez that's what I was thinking, which is why I don't understand why it didn't work. I am getting you the logs now. any specific one you are looking for? cli.log or glusterd.log
12:49 hgowtham both cli and glusterd are necessary
12:50 fidelrodriguez glusterd.log is not getting updated when I run gluster volume tier vmVolume attach replica 2 glusterfs1:/caching/.brickscaching  glusterfs2:/caching/.brickscaching  glusterfs3:/caching/.brickscaching glusterfs4:/caching/.brickscaching  force
12:51 fidelrodriguez total 1520
12:51 fidelrodriguez drwxr-xr-x 5 root root     59 Aug 21 12:33 snaps
12:51 fidelrodriguez -rw------- 1 root root  18538 Aug 21 17:30 vmVolume-rebalance.log-20170827
12:51 fidelrodriguez -rw------- 1 root root   1314 Aug 21 17:57 vmVolume2-Vm.log-20170827
12:51 fidelrodriguez drwxr-xr-x 3 root root     17 Aug 22 12:34 geo-replication-slaves
12:51 glusterbot fidelrodriguez: -rw-----'s karma is now -16
12:51 glusterbot fidelrodriguez: -rw-----'s karma is now -17
12:51 fidelrodriguez drwxr-xr-x 2 root root      6 Aug 22 12:34 geo-replication
12:51 fidelrodriguez -rw------- 1 root root  56131 Aug 22 14:15 root-isoVolume-Iso.log-20170827
12:51 glusterbot fidelrodriguez: -rw-----'s karma is now -18
12:51 fidelrodriguez -rw------- 1 root root  57411 Aug 22 14:15 engineVolume-Engine.log-20170827
12:51 glusterbot fidelrodriguez: -rw-----'s karma is now -19
12:51 fidelrodriguez -rw------- 1 root root 140593 Aug 22 14:15 vmVolume-Vm.log-20170827
12:51 glusterbot fidelrodriguez: -rw-----'s karma is now -20
12:52 fidelrodriguez -rw------- 1 root root  15310 Aug 22 16:49 cmd_history.log-20170827
12:52 glusterbot fidelrodriguez: -rw-----'s karma is now -21
12:52 fidelrodriguez -rw------- 1 root root 180498 Aug 22 16:49 cli.log-20170827
12:52 glusterbot fidelrodriguez: -rw-----'s karma is now -22
12:52 fidelrodriguez -rw------- 1 root root 284876 Aug 22 16:51 glusterd.log-20170827
12:52 glusterbot fidelrodriguez: -rw-----'s karma is now -23
12:52 fidelrodriguez -rw------- 1 root root      0 Aug 27 03:22 vmVolume-Vm.log
12:52 glusterbot fidelrodriguez: -rw-----'s karma is now -24
12:52 fidelrodriguez -rw------- 1 root root      0 Aug 27 03:22 vmVolume-rebalance.log
12:52 glusterbot fidelrodriguez: -rw-----'s karma is now -25
12:52 fidelrodriguez -rw------- 1 root root      0 Aug 27 03:22 vmVolume2-Vm.log
12:52 glusterbot fidelrodriguez: -rw-----'s karma is now -26
12:52 fidelrodriguez -rw------- 1 root root      0 Aug 27 03:22 root-isoVolume-Iso.log
12:52 glusterbot fidelrodriguez: -rw-----'s karma is now -27
12:52 fidelrodriguez -rw------- 1 root root      0
12:52 glusterbot fidelrodriguez: -rw-----'s karma is now -28
12:52 fidelrodriguez
12:52 hgowtham fidelrodriguez, can you use paste bin?
12:52 ndevos ~paste | fidelrodriguez
12:52 glusterbot fidelrodriguez: For a simple way to paste output, install netcat (if it's not already) and pipe your output like: | nc termbin.com 9999
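A concrete instance of glusterbot's suggestion, assuming netcat is installed; the command prints a termbin URL that can then be shared in the channel:

    gluster volume status | nc termbin.com 9999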
12:53 fidelrodriguez the client I am using to chat sucks. I am new to irc as well.
12:54 fidelrodriguez https://pastebin.com/embed_js/2SN5HfR3
12:54 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
12:54 fidelrodriguez https://paste.fedoraproject.org/paste/3VA3zosAL-Y7IGp5iPiLfg
12:54 glusterbot Title: Untitled - Modern Paste (at paste.fedoraproject.org)
12:56 hgowtham fidelrodriguez, can you attach the logs on the bug itself? those in the paste bin are just the file names. i need the content in them
12:57 fidelrodriguez I was showing you that the log was not getting updated after i ran the command
12:58 hgowtham glusterd log might not get updated, but the cli.log and cmd_history log will get updated
12:59 hgowtham can you post those two?
13:01 fidelrodriguez posting them now
13:02 fidelrodriguez the three logs only got updated by me using the old volume attach-tier method
13:02 fidelrodriguez logs are attached in bug ticket
13:03 plarsen joined #gluster
13:05 hgowtham the cli.log
13:05 hgowtham you have posted the cmd_history for that one too
13:07 hgowtham fidelrodriguez, is this using ovirt?
13:09 Guest9038 joined #gluster
13:10 fidelrodriguez i havent install ovirt yet but that is the plan
13:11 fidelrodriguez i am trying to set up the gluster volumes how I want them first before I move on to the ovirt installation
13:12 hgowtham the address resolution on the node is not working fine
13:12 fidelrodriguez I posted the whole logs. cmd_history and glusterd log only got updated when running the old method by using volume attached-tier command
13:13 hgowtham yes okie
13:13 fidelrodriguez i put the gluster volumes on a bond0 to separate networks, could this be the problem?
13:15 fidelrodriguez https://paste.fedoraproject.org/paste/8f~ogFL-GwAgm-wr3EPJqg
13:15 glusterbot Title: ifconfig -a - Modern Paste (at paste.fedoraproject.org)
13:15 fidelrodriguez that's the ifconfig -a setup
13:15 MrAbaddon joined #gluster
13:16 hgowtham i suspect these network changes could have caused the problem. i'm not an expert in this. need to talk to others regarding this.
13:16 hgowtham it would work fine if you use your default network
13:16 baber joined #gluster
13:19 fidelrodriguez i was thinking that might be the problem. what is the default network? I want to isolate the volume traffic from the main network traffic
13:20 hgowtham fidelrodriguez, eth0 works
13:22 jstrunk joined #gluster
13:24 fidelrodriguez what I am confused about is why gluster let me create a regular volume on a bond interface but won't let me attach a tier to the same volume
13:24 jiffin joined #gluster
13:24 fidelrodriguez do I need to change my /etc/hosts file
13:25 hgowtham usually thats not required
13:27 fidelrodriguez should i be using FQDN instead of just hostname?
13:27 hgowtham if it wasn't able to resolve it, it would have errored out saying the host is not in the cluster
13:27 hgowtham you can try that if you still have the volume
13:29 hgowtham can you try it with the ip?
13:30 fidelrodriguez internet address ' 172.16.0.11' does not conform to standards
13:30 fidelrodriguez when using the old method
13:30 fidelrodriguez volume attach-tier: failed: Pre-validation failed on localhost. Please check log file for details
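One detail worth noting for anyone hitting the same error: the address quoted in the message above contains a leading space (' 172.16.0.11'), which gluster's address validation rejects. A hedged guess, not confirmed anywhere in this log, is that the typed command had a double space before the first brick, so retrying with single spaces and checking how the peers are known to the cluster may help:

    # hypothetical retry with single spaces between arguments
    gluster volume tier vmVolume attach replica 2 \
        172.16.0.11:/caching/.brickscaching 172.16.0.12:/caching/.brickscaching \
        172.16.0.13:/caching/.brickscaching 172.16.0.14:/caching/.brickscaching force
    # confirm how the peers are identified (hostname vs IP)
    gluster pool list
    gluster peer status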
13:31 skylar joined #gluster
13:43 fidelrodriguez hgowtham, are you there?
13:50 luizcpg joined #gluster
13:50 fidelrodriguez Does anyone have experience setting hot tier on bond interface?
13:53 skumar joined #gluster
13:54 skoduri joined #gluster
13:57 bwerthmann joined #gluster
14:03 humblec joined #gluster
14:03 fidelrodriguez Does anyone know how to fix this error? Failed to convert hostname  172.16.0.11 to uuid
14:04 ppai joined #gluster
14:14 baojg joined #gluster
14:33 bfoster joined #gluster
14:39 skumar joined #gluster
14:49 sona joined #gluster
14:54 farhorizon joined #gluster
15:03 dbagwell joined #gluster
15:04 ThHirsch joined #gluster
15:07 wushudoin joined #gluster
15:15 prasanth joined #gluster
15:19 btspce joined #gluster
15:30 fidelrodriguez hello is anyone available
15:32 cloph depends - ask the question, then people can tell...
15:32 fidelrodriguez What is the best practice in setting up hot tier to do caching? is it safe to do it on  a bond interface?
15:33 fidelrodriguez to isolate gluster traffic
15:47 MikeLupe joined #gluster
15:51 kpease joined #gluster
16:14 skumar joined #gluster
16:16 vbellur1 joined #gluster
16:16 vbellur joined #gluster
16:21 nirokato joined #gluster
16:22 baber joined #gluster
16:28 vbellur joined #gluster
16:29 vbellur joined #gluster
16:33 vbellur1 joined #gluster
16:41 gyadav__ joined #gluster
16:45 vbellur joined #gluster
16:46 vbellur1 joined #gluster
16:52 msvbhat joined #gluster
16:53 Utoxin left #gluster
16:54 gyadav joined #gluster
16:57 baber joined #gluster
16:58 bwerthmann joined #gluster
16:58 fidelrodriguez does anyone recommend a good irc client? the one I am using for mac is not great.
17:06 fidelrodriguez I am a newbie to irc and am asking the irc community for help. Please let me know if I did something wrong.
17:07 fidelrodriguez https://bugzilla.redhat.com/show_bug.cgi?id=1484156
17:07 glusterbot Bug 1484156: urgent, unspecified, ---, bugs, NEW , Can't attach volume tier to create  hot tier
17:08 gyadav joined #gluster
17:09 rafi joined #gluster
17:11 gyadav joined #gluster
17:21 merps joined #gluster
17:26 vbellur joined #gluster
17:37 farhorizon joined #gluster
17:48 vbellur joined #gluster
17:48 vbellur1 joined #gluster
17:49 vbellur joined #gluster
17:58 vbellur joined #gluster
17:58 zcourts joined #gluster
17:58 vbellur joined #gluster
17:59 vbellur joined #gluster
18:00 vbellur joined #gluster
18:01 farhorizon joined #gluster
18:05 btspce Would like to know what raid configuration and stripe size everyone is using for their qcow/raw images, if any testing of different stripe sizes was done and what the findings were?
18:05 vbellur joined #gluster
18:14 baojg joined #gluster
18:38 _KaszpiR_ joined #gluster
18:57 Gambit15 joined #gluster
19:04 MrAbaddon joined #gluster
19:10 Vapez joined #gluster
19:13 vaxxon joined #gluster
19:28 baber joined #gluster
19:29 msvbhat joined #gluster
19:35 jkroon joined #gluster
19:36 merp_ joined #gluster
19:45 baber joined #gluster
20:00 shyam joined #gluster
20:07 baojg_ joined #gluster
20:17 baber joined #gluster
20:34 bowhunter joined #gluster
20:34 vbellur joined #gluster
20:35 vbellur joined #gluster
20:36 vbellur joined #gluster
20:37 vbellur1 joined #gluster
20:38 vbellur joined #gluster
20:41 merp_ joined #gluster
20:51 omie888777 joined #gluster
21:21 ThHirsch joined #gluster
22:27 ndevos joined #gluster
22:30 ron-slc joined #gluster
22:38 vbellur joined #gluster
23:07 omie888777 joined #gluster
23:46 luizcpg joined #gluster
