
IRC log for #gluster, 2017-04-18


All times shown according to UTC.

Time Nick Message
00:21 cyberbootje joined #gluster
00:49 purpleidea joined #gluster
00:49 purpleidea joined #gluster
01:19 daMaestro joined #gluster
01:37 shdeng joined #gluster
01:48 ilbot3 joined #gluster
01:48 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:53 derjohn_mob joined #gluster
02:12 portdirect joined #gluster
02:29 sage_ joined #gluster
02:29 Utoxin joined #gluster
02:32 MadPsy joined #gluster
02:32 MadPsy joined #gluster
02:41 DV joined #gluster
02:44 Acinonyx joined #gluster
02:46 ira joined #gluster
02:51 yalu joined #gluster
03:06 magrawal joined #gluster
03:08 Acinonyx joined #gluster
03:38 atinm joined #gluster
03:41 ritzo joined #gluster
03:42 riyas joined #gluster
03:44 Prasad joined #gluster
03:48 itisravi joined #gluster
03:49 side_control joined #gluster
03:56 skoduri joined #gluster
04:00 Shu6h3ndu joined #gluster
04:01 daMaestro joined #gluster
04:06 dominicpg joined #gluster
04:14 ppai joined #gluster
04:21 skumar joined #gluster
04:23 victori joined #gluster
04:36 gyadav joined #gluster
04:46 apandey joined #gluster
04:49 ashiq joined #gluster
04:51 amarts joined #gluster
04:54 amarts joined #gluster
04:56 XpineX joined #gluster
05:01 buvanesh_kumar joined #gluster
05:01 aravindavk joined #gluster
05:13 [diablo] joined #gluster
05:20 ppai joined #gluster
05:25 ashiq joined #gluster
05:25 atinm joined #gluster
05:27 ndarshan joined #gluster
05:34 skoduri joined #gluster
05:36 karthik_us joined #gluster
05:40 ankitr joined #gluster
05:41 sbulage joined #gluster
05:43 Philambdo joined #gluster
05:47 Saravanakmr joined #gluster
05:53 sona joined #gluster
05:56 hgowtham joined #gluster
06:14 jtux joined #gluster
06:15 jtux left #gluster
06:17 susant joined #gluster
06:27 ppai joined #gluster
06:30 atinm joined #gluster
06:30 kotreshhr joined #gluster
06:30 sanoj joined #gluster
06:31 ashiq joined #gluster
06:31 ayaz joined #gluster
06:35 ritzo joined #gluster
06:39 jwd joined #gluster
06:42 aravindavk joined #gluster
06:43 Karan joined #gluster
06:45 jiffin joined #gluster
06:47 msvbhat joined #gluster
06:57 amarts joined #gluster
06:58 nbalacha joined #gluster
06:58 kdhananjay joined #gluster
07:13 MrAbaddon joined #gluster
07:15 flying joined #gluster
07:31 fsimonce joined #gluster
07:40 ivan_rossi joined #gluster
07:53 nishanth joined #gluster
07:54 k4n0 joined #gluster
08:21 jtux joined #gluster
08:22 jtux left #gluster
08:26 nbalacha joined #gluster
08:26 ankitr joined #gluster
08:32 ankitr joined #gluster
08:51 MrAbaddon joined #gluster
08:59 MrAbaddon joined #gluster
09:00 ankitr joined #gluster
09:03 dominicpg joined #gluster
09:16 ankitr joined #gluster
09:21 nielsh joined #gluster
09:25 nielsh Hi all, we upgraded from glusterfs 3.9 to 3.10 last week. Since then we're seeing a huge amount of logs in /var/log/glusterfs/bricks/<brick>.log regarding posix_acl_log_permit_denied. We have about 25G of logs now with these messages in about 1 week time. Does anyone know what could cause this? See the exact msg here: https://pastebin.com/raw/H6QH1gzf
09:25 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
09:26 kotreshhr joined #gluster
09:26 nielsh https://paste.fedoraproject.org/paste/uQdHIgrDgkp43eKpN7y7W15M1UNdIGYhyRLivL9gydE=/raw there :)
09:27 msvbhat joined #gluster
09:32 nbalacha joined #gluster
09:35 ankitr joined #gluster
09:35 shdeng joined #gluster
09:40 moneylotion joined #gluster
09:41 Saravanakmr joined #gluster
09:42 ankitr joined #gluster
09:46 ivan_rossi left #gluster
09:47 nbalacha joined #gluster
09:48 jkroon joined #gluster
10:07 ankitr joined #gluster
10:09 poornima_ joined #gluster
10:10 _KaszpiR_ hm anyone got issues with glusterfs + lvm thinpools + snapshots?
10:11 _KaszpiR_ we set up snapshotting on one node, and it looks like when a snapshot request is invoked, that node gets a network ping timeout from the other hosts, so it gets dropped
10:26 ndevos nielsh: the request that caused the logging is a check for world-execute permissions of the file with gfid 50f020b1-da30-4f43-bd2f-507b324d481a
10:27 ndevos nielsh: the req:1 means 001 in permission mode, the file has 644, so 1=X_OK is not set
10:29 ndevos nielsh: something probably does a access("path/to/file", X_OK) call, the application doing so might return an error somewhere?
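The check ndevos describes can be reproduced outside gluster: an access(2)-style execute test against a mode-644 file is denied because no execute bit is set, which is exactly what posix-acl logs as a permit-denied. A minimal sketch (plain shell, no gluster needed):

```shell
# Create a throwaway file with the same mode (644) as the one in the log,
# then test it for execute permission the way access("path", X_OK) would.
tmp=$(mktemp)
chmod 644 "$tmp"
if [ -x "$tmp" ]; then
  echo "X_OK granted"
else
  echo "X_OK denied"   # expected: mode 644 sets no execute bits
fi
rm -f "$tmp"
```

The `req:1` in the brick log corresponds to the octal 001 (X_OK) being requested while the file's 644 mode grants only read/write bits.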
10:40 Saravanakmr joined #gluster
10:46 derjohn_mob joined #gluster
10:52 ankitr joined #gluster
10:55 mbukatov joined #gluster
11:09 msvbhat joined #gluster
11:09 susant joined #gluster
11:19 msvbhat joined #gluster
11:33 sona joined #gluster
11:33 nielsh ndevos: Hm, not that I can find. It's running some websites, and the sites themselves run fine without issues. I see messages from the nginx uid and from the user running the PHP code. Running an strace on the UID of, in this case, the user running the PHP code while the errors are happening, I mostly see F_OK and R_OK, as well as a few W_OK. But not a single X_OK call.
11:38 ankitr joined #gluster
11:40 nielsh I'm guessing it has always been present but from what I can find the logging has only been added in 3.10
11:41 shyam joined #gluster
11:43 kotreshhr joined #gluster
11:50 kkeithley Gluster Community Bug Triage in 10 minutes in #gluster-meeting
11:51 sona joined #gluster
11:57 Wizek_ joined #gluster
11:57 skoduri joined #gluster
12:00 ahino joined #gluster
12:03 amarts joined #gluster
12:16 hybrid512 joined #gluster
12:23 ira joined #gluster
12:51 baber joined #gluster
12:52 kotreshhr left #gluster
12:53 susant left #gluster
12:53 Karan joined #gluster
13:05 msvbhat joined #gluster
13:11 Arrfab joined #gluster
13:13 plarsen joined #gluster
13:14 kpease joined #gluster
13:16 rwheeler joined #gluster
13:18 atinm joined #gluster
13:21 mhutter joined #gluster
13:22 skylar joined #gluster
13:35 dratir joined #gluster
13:46 buvanesh_kumar joined #gluster
13:50 sona joined #gluster
13:56 dratir Hey #gluster! Is there a go-to way to ensure a gluster node is in-sync after a reboot?
13:57 dratir I'm currently doing `gluster volume heal $vol full` over all volumes, followed by `heal info` until there are no more entries....
13:58 Philambdo joined #gluster
14:12 bit4man joined #gluster
14:14 atinm joined #gluster
14:20 baber joined #gluster
14:25 dratir joined #gluster
14:27 ira joined #gluster
14:27 jdossey joined #gluster
14:30 baber joined #gluster
14:30 farhorizon joined #gluster
14:34 msvbhat joined #gluster
14:36 dratir joined #gluster
14:37 moneylotion joined #gluster
14:38 dratir joined #gluster
14:39 dratir joined #gluster
14:40 dratir df
14:44 ppai joined #gluster
14:45 dratir joined #gluster
14:47 nbalacha joined #gluster
14:50 dratir joined #gluster
14:51 dratir joined #gluster
14:57 shaunm joined #gluster
14:58 susant joined #gluster
15:06 wushudoin joined #gluster
15:06 dratir joined #gluster
15:10 dratir joined #gluster
15:10 Philambdo joined #gluster
15:11 wushudoin joined #gluster
15:12 MessedUpHare joined #gluster
15:13 dratir joined #gluster
15:14 Philambdo1 joined #gluster
15:16 baber joined #gluster
15:19 MessedUpHare left #gluster
15:21 aravindavk joined #gluster
15:21 dratir joined #gluster
15:23 dratir_ joined #gluster
15:25 dratir joined #gluster
15:27 dratir joined #gluster
15:28 dratir joined #gluster
15:30 Philambdo joined #gluster
15:30 msvbhat joined #gluster
15:32 JoeJulian dratir: There's no need to do a "full" heal. Gluster already knows which files need healing and, once the offline server returns, it will heal them automatically.
15:33 JoeJulian Best of all, it will only heal the files that actually changed while the server was offline.
15:34 vbellur joined #gluster
15:36 dratir JoeJulian: so after a reboot, once all volumes are running all i have to do is watch heal info?
15:39 dratir Thanks for the info
15:45 susant joined #gluster
15:45 dratir joined #gluster
15:46 dratir joined #gluster
15:49 sbulage joined #gluster
15:49 jiffin joined #gluster
15:58 skoduri joined #gluster
15:58 snehring joined #gluster
15:59 JoeJulian Yep
15:59 JoeJulian And you're welcome.
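The workflow JoeJulian describes (just watch `heal info` drain to zero after a reboot, no `full` heal needed) can be scripted. A hedged sketch: the `pending_heals` helper below is hypothetical, not a gluster tool, and assumes `heal info` output contains per-brick "Number of entries: N" lines; it is exercised here against canned text since no gluster daemon is assumed.

```shell
# Sum the "Number of entries: N" lines across all bricks of a
# `gluster volume heal <vol> info` report.
pending_heals() {
  awk -F': ' '/^Number of entries:/ { total += $2 } END { print total + 0 }'
}

# On a live cluster you would pipe real output (illustrative):
#   while [ "$(gluster volume heal myvol info | pending_heals)" -gt 0 ]; do sleep 10; done

# Demonstration against sample output:
sample='Brick server1:/data/brick1
Number of entries: 0
Brick server2:/data/brick1
Number of entries: 2'
printf '%s\n' "$sample" | pending_heals
```

When the sum reaches zero for every volume, the rebooted node is back in sync.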
16:15 jiffin joined #gluster
16:20 riyas joined #gluster
16:30 Gambit15 joined #gluster
16:34 derjohn_mob joined #gluster
16:47 msvbhat joined #gluster
16:52 susant joined #gluster
17:04 gyadav joined #gluster
17:18 jiffin joined #gluster
17:21 riyas joined #gluster
17:22 abyss_ joined #gluster
17:25 ira joined #gluster
17:37 mhutter joined #gluster
17:42 wushudoin joined #gluster
17:55 buvanesh_kumar joined #gluster
18:03 derjohn_mob joined #gluster
18:03 ahino joined #gluster
18:11 pioto joined #gluster
18:14 mhutter` joined #gluster
18:15 mhutter joined #gluster
18:25 tyler274 joined #gluster
18:27 riyas joined #gluster
18:46 mhutter left #gluster
19:05 shyam joined #gluster
19:22 partner thanks, always love a warm welcome!
19:25 ekarlso joined #gluster
19:25 ekarlso Hi
19:25 glusterbot ekarlso: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an
19:26 ekarlso can bricks be of mixed sizes if I have mixed disks on servers ?
19:33 baber joined #gluster
19:36 alonso joined #gluster
19:37 alonso Hi. How can I rename a volume?
19:39 msvbhat joined #gluster
19:40 farhorizon joined #gluster
19:40 arpu joined #gluster
19:47 vbellur joined #gluster
20:11 baber joined #gluster
20:18 vbellur joined #gluster
20:25 mhutter joined #gluster
20:33 vbellur1 joined #gluster
20:34 vbellur1 joined #gluster
20:36 major alonso, I dunno that you can directly .. though you could take a snapshot and then clone the snapshot to a new name
20:37 vbellur1 joined #gluster
20:37 alonso Thanks major! I will google on those keywords
20:37 alonso Hold on, do you mean literally doing a "cp -a" into a new volume?
20:37 vbellur joined #gluster
20:37 major no
20:37 mhutter I see a lot of connection errors on ports 49000-49100, but according to the docs gluster uses ports 49152 and higher... Am I missing somethin?
20:38 major gluster volume snapshot create, gluster volume snapshot clone
20:38 alonso Thanks major, I will read on gluster snapshots :)
20:38 major you can clone a snapshot to a new volume name.. the cloned volume is a fully functional read/write volume
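The snapshot-based "rename" major outlines would look roughly like this. A hedged sketch: `oldvol`/`newvol` are example names, and gluster snapshots require a thinly-provisioned LVM backend for the bricks; this is an ops recipe, not runnable without a gluster cluster.

```shell
gluster snapshot create oldvol-snap oldvol   # snapshot the source volume
gluster snapshot clone newvol oldvol-snap    # clone it under the new name
gluster volume start newvol                  # clones are created stopped
# once clients have switched to newvol, retire the old name:
# gluster volume stop oldvol && gluster volume delete oldvol
```

The clone is a fully independent read/write volume, so this achieves a rename at the cost of the snapshot's copy-on-write space.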
20:43 ira joined #gluster
20:48 mhutter oh man, Gluster really needs useful logs
20:48 mhutter "0-glustershd: Ignore failed connection attempt on , (No such file or directory)" -- what does that even mean?
20:49 ekarlso can bricks be of mixed sizes if I have mixed disks on servers ? ? ^
20:51 major ekarlso, it isn't really recommended..
20:51 major I am not certain Gluster will do anything special should 1 brick become full w/ regards to the rest .. outside of likely have a bad hair day ...
20:52 major but .. it will work .. I am just not certain how it handles the 1 brick going full .. if it handles it at all
21:03 mhutter Ok, Question: My arbiter node does not come up again. I have a ton of log messages that say that there is something wrong, but not what or why
21:03 mhutter what do I do?
21:04 mhutter I have messages like the one above, or like this: "0-socket.management: writev on 172.17.176.128:48612 failed (Broken pipe)
21:04 mhutter no wonder the connection failed, because on 172.17.176.128 no one is listening at 48612
21:05 mhutter or stuff like "Lock for vol gluster-pv45 not held" and "Lock not released for gluster-pv45" which is apparently a warning, but why?
21:05 msvbhat joined #gluster
21:34 major joined #gluster
21:53 MrAbaddon joined #gluster
22:06 plarsen joined #gluster
22:08 vbellur joined #gluster
22:10 vbellur joined #gluster
22:10 shyam joined #gluster
22:17 vbellur joined #gluster
22:17 vbellur joined #gluster
22:53 farhoriz_ joined #gluster
23:04 alonso joined #gluster
23:16 JoeJulian ekarlso: I've seen some people create equal-sized partitions on their different-sized disks and just put multiple bricks on the larger disk.
23:16 JoeJulian ~ports | mhutter
23:16 glusterbot mhutter: glusterd's management port is 24007/tcp (also 24008/tcp if you use rdma). Bricks (glusterfsd) use 49152 & up. All ports must be reachable by both servers and clients. Additionally it will listen on 38465-38468/tcp for NFS. NFS also depends on rpcbind/portmap ports 111 and 2049.
23:17 JoeJulian So yeah, 48612? not sure where that's coming from.
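glusterbot's port list can be turned into a quick triage helper for log entries like the 48612 one above. The `classify_port` function is illustrative (not a gluster tool) and only encodes the ranges the bot just quoted:

```shell
# Classify a TCP port against gluster's documented ranges:
# 24007 mgmt, 24008 rdma, 38465-38468 NFS, 49152+ bricks.
classify_port() {
  case $1 in
    24007) echo "glusterd management" ;;
    24008) echo "glusterd rdma" ;;
    111|2049) echo "rpcbind/portmap or NFS" ;;
    *)
      if [ "$1" -ge 49152 ]; then
        echo "brick (glusterfsd)"
      elif [ "$1" -ge 38465 ] && [ "$1" -le 38468 ]; then
        echo "gluster NFS"
      else
        echo "not a standard gluster port"
      fi ;;
  esac
}

classify_port 48612
```

48612 falls outside every documented range, which is JoeJulian's point: the peer address in a "writev ... failed" message is usually the client's ephemeral source port, not a gluster listener.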
23:35 shyam joined #gluster
