
IRC log for #gluster, 2017-07-26


All times shown according to UTC.

Time Nick Message
00:01 kramdoss_ joined #gluster
00:02 elitecoder Should I use 3.11 for my live servers or 3.10
00:02 elitecoder ?
00:05 elitecoder I'm setting them up now, finally upgrading to Ubuntu 16.04
00:06 wushudoin| joined #gluster
00:15 farhorizon joined #gluster
00:22 wushudoin joined #gluster
00:22 daMaestro joined #gluster
00:42 elitecoder Hmm, I see 3.11 is a short-term maintenance release...
00:50 johnnyNumber5 joined #gluster
00:59 arpu joined #gluster
01:52 ilbot3 joined #gluster
01:52 Topic for #gluster is now Gluster Community - https://www.gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - https://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:54 prasanth joined #gluster
02:18 rastar joined #gluster
02:19 bbabich__ joined #gluster
02:19 Kassandry joined #gluster
02:19 bwerthmann joined #gluster
02:20 fbred joined #gluster
02:24 decayofmind joined #gluster
02:25 yalu joined #gluster
02:27 shdeng joined #gluster
02:36 caitnop joined #gluster
02:41 nathwill joined #gluster
02:53 om3 joined #gluster
02:59 BlackoutWNCT joined #gluster
03:01 victori joined #gluster
03:02 BlackoutWNCT Hey Guys, I'm seeing a lot of the following in the "etc-glusterfs-glusterd.vol.log" and was hoping someone could help me out with resolving this. Setup is a replica 2 running Glusterfs 3.8.5 on ubuntu 14.04.1
03:02 BlackoutWNCT [2017-07-26 03:12:14.968908] E [rpcsvc.c:560:rpcsvc_check_and_reply_error] 0-rpcsvc: rpc actor failed to complete successfully
03:02 BlackoutWNCT [2017-07-26 03:13:38.366320] W [rpcsvc.c:265:rpcsvc_program_actor] 0-rpc-service: RPC program not available (req 1298437 330) for 192.168.0.205:1023
03:02 CP joined #gluster
03:02 kramdoss_ joined #gluster
03:03 CP Hi everyone.  I have a problem I need help with.  The glusterd service on my replicated store will not start up, and I'm not sure why.
03:04 CP I recently had a hardware failure, and when I started up the first of the three nodes, the service failed to start on it.
03:09 BlackoutWNCT CP, do your logs say anything which may be of assistance?
03:13 ashiq joined #gluster
03:14 CP "unregistered authentication agent for unix-process"
03:17 CP that was from journalctl -xe
03:18 BlackoutWNCT Check your gluster logs under /var/log/glusterfs
03:20 CP in the glustershd.log I see " ... remote operation failed. Path:  ... "
03:29 BlackoutWNCT iirc the glustershd.log is for the self heal daemon, and so probably won't help you in this case. Try looking at the "etc-glusterfs-glusterd.vol.log" or the brick logs
03:32 daMaestro joined #gluster
03:36 om2 joined #gluster
03:37 CP the log "etc-glusterfs-glusterd.vol.log" says "0-management: Failed to set keep-alive:  Invalid Argument"
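For a glusterd that refuses to start, running the daemon in the foreground with debug logging usually surfaces the actual failure. A minimal sketch, assuming a systemd-managed install; note that the "Failed to set keep-alive: Invalid Argument" line is commonly only a warning rather than the root cause:

```shell
# Stop the unit first so glusterd's ports are free, then run the
# daemon in the foreground with debug-level logging.
# --debug (foreground + debug log level) and -N (no daemonize)
# are standard glusterd flags.
systemctl stop glusterd
glusterd --debug
# Startup output goes to the terminal; the last messages before the
# daemon exits normally name the real reason it fails to start.
```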
03:38 johnnyNumber5 joined #gluster
03:46 riyas joined #gluster
03:50 nbalacha joined #gluster
04:06 ppai joined #gluster
04:06 johnnyNumber5 joined #gluster
04:07 itisravi joined #gluster
04:09 jiffin joined #gluster
04:11 om2 joined #gluster
04:22 deep-book-gk_ joined #gluster
04:22 deep-book-gk_ left #gluster
04:30 atinm joined #gluster
04:43 Shu6h3ndu joined #gluster
04:47 nathwill joined #gluster
04:47 om2 joined #gluster
04:53 karthik_us joined #gluster
04:59 susant joined #gluster
05:07 snehring joined #gluster
05:11 apandey joined #gluster
05:15 buvanesh_kumar joined #gluster
05:21 ndarshan joined #gluster
05:22 Saravanakmr joined #gluster
05:26 skumar joined #gluster
05:27 sanoj joined #gluster
05:28 sahina joined #gluster
05:30 apandey_ joined #gluster
05:36 nishanth joined #gluster
05:40 BitByteNybble110 joined #gluster
05:42 prasanth joined #gluster
05:43 atinm joined #gluster
05:46 karthik_us joined #gluster
05:48 kramdoss_ joined #gluster
05:49 apandey__ joined #gluster
05:50 skoduri joined #gluster
05:55 Prasad joined #gluster
05:57 atalur joined #gluster
06:03 kdhananjay joined #gluster
06:04 hgowtham joined #gluster
06:05 ankitr joined #gluster
06:05 armyriad joined #gluster
06:05 shdeng joined #gluster
06:05 msvbhat joined #gluster
06:08 rafi1 joined #gluster
06:14 jtux joined #gluster
06:16 atalur_ joined #gluster
06:17 poornima joined #gluster
06:21 kramdoss_ joined #gluster
06:30 buvanesh_kumar joined #gluster
06:34 atinm joined #gluster
06:42 karthik_us joined #gluster
06:44 sona joined #gluster
06:45 kotreshhr joined #gluster
06:53 buvanesh_kumar joined #gluster
06:57 ivan_rossi joined #gluster
06:58 om2 joined #gluster
07:09 mbukatov joined #gluster
07:18 aravindavk joined #gluster
07:26 bbabich_ joined #gluster
07:32 rastar joined #gluster
07:37 TBlaar joined #gluster
07:40 marbu joined #gluster
07:44 jkroon joined #gluster
07:48 kramdoss_ joined #gluster
08:00 om2 joined #gluster
08:07 nathwill joined #gluster
08:08 nathwill joined #gluster
08:09 nathwill joined #gluster
08:10 nathwill joined #gluster
08:11 nathwill joined #gluster
08:17 itisravi joined #gluster
08:56 _KaszpiR_ joined #gluster
08:56 Saravanakmr joined #gluster
08:56 Acinonyx joined #gluster
08:58 madwizard joined #gluster
09:01 mbukatov joined #gluster
09:02 sahina joined #gluster
09:06 quant joined #gluster
09:08 sanoj joined #gluster
09:08 [diablo] joined #gluster
09:09 buvanesh_kumar joined #gluster
09:12 nathwill joined #gluster
09:15 msvbhat joined #gluster
09:22 nbalacha joined #gluster
09:22 Champi_ joined #gluster
09:22 jarbod__ joined #gluster
09:22 yawkat` joined #gluster
09:22 lucasrolff joined #gluster
09:23 major joined #gluster
09:24 TBlaar joined #gluster
09:24 brayo joined #gluster
09:26 decayofmind joined #gluster
09:31 DV joined #gluster
09:33 rafi joined #gluster
09:41 rafi2 joined #gluster
09:43 kdhananjay joined #gluster
09:56 hgowtham joined #gluster
09:58 kramdoss_ joined #gluster
10:02 toredl joined #gluster
10:03 skoduri joined #gluster
10:03 toredl left #gluster
10:08 nbalacha joined #gluster
10:08 DV joined #gluster
10:11 om2 joined #gluster
10:13 om2 joined #gluster
10:16 toredl joined #gluster
10:17 toredl left #gluster
10:22 sahina joined #gluster
10:22 om2_ joined #gluster
10:35 DV joined #gluster
10:49 major joined #gluster
11:04 kdhananjay joined #gluster
11:08 rafi joined #gluster
11:14 itisravi joined #gluster
11:20 om3 joined #gluster
11:37 apandey joined #gluster
11:39 om2_ joined #gluster
11:43 baber joined #gluster
11:43 om3 joined #gluster
12:14 nathwill joined #gluster
12:15 jiffin1 joined #gluster
12:22 om2 joined #gluster
12:24 nbalacha joined #gluster
12:24 jiffin1 joined #gluster
12:28 Acinonyx joined #gluster
12:29 aravindavk joined #gluster
12:46 aravindavk joined #gluster
12:49 kotreshhr left #gluster
13:10 kramdoss_ joined #gluster
13:10 aravindavk joined #gluster
13:15 mbukatov joined #gluster
13:16 nathwill joined #gluster
13:16 skylar joined #gluster
13:16 aravindavk joined #gluster
13:29 aravindavk joined #gluster
13:41 plarsen joined #gluster
13:44 aravindavk joined #gluster
13:49 aravindavk joined #gluster
13:49 johnnyNumber5 joined #gluster
13:55 aravindavk joined #gluster
13:58 msvbhat joined #gluster
14:03 kkeithley @php
14:03 glusterbot kkeithley: (#1) php calls the stat() system call for every include. This triggers a self-heal check which makes most php software slow as they include hundreds of small files. See http://joejulian.name/blog/optimizing-web-performance-with-glusterfs/ for details., or (#2) It could also be worth mounting fuse with glusterfs --attribute-timeout=HIGH --entry-timeout=HIGH --negative-timeout=HIGH
14:03 glusterbot kkeithley: --fopen-keep-cache
14:05 om2 joined #gluster
14:06 aravindavk joined #gluster
14:16 nathwill joined #gluster
14:19 sona joined #gluster
14:25 aardbolreiziger joined #gluster
14:32 johnnyNumber5 joined #gluster
14:33 kshlm joined #gluster
14:42 aravindavk joined #gluster
14:50 brian83 joined #gluster
14:55 aravindavk joined #gluster
15:00 farhorizon joined #gluster
15:00 wushudoin joined #gluster
15:17 nathwill joined #gluster
15:21 jstrunk joined #gluster
15:22 aravindavk joined #gluster
15:28 jstrunk joined #gluster
15:29 nathwill joined #gluster
15:40 nbalacha joined #gluster
15:42 ashiq joined #gluster
15:48 ic0n joined #gluster
15:53 bowhunter joined #gluster
15:54 WebertRLZ joined #gluster
15:55 Drankis joined #gluster
16:11 aravindavk joined #gluster
16:16 rastar joined #gluster
16:22 deniszh joined #gluster
16:25 aravindavk joined #gluster
16:40 susant joined #gluster
16:40 ankitr joined #gluster
16:41 sona joined #gluster
16:42 msvbhat joined #gluster
16:43 nirokato joined #gluster
16:43 hvisage joined #gluster
16:49 NuxRo joined #gluster
16:50 ivan_rossi left #gluster
16:51 johnnyNumber5 joined #gluster
17:00 atalur joined #gluster
17:02 BlackoutWNCT joined #gluster
17:09 kpease joined #gluster
17:15 Shu6h3ndu joined #gluster
17:21 johnnyNumber5 joined #gluster
17:37 dgandhi joined #gluster
17:39 msvbhat joined #gluster
17:51 baber joined #gluster
17:51 MrAbaddon joined #gluster
17:55 msvbhat joined #gluster
17:59 sona joined #gluster
18:00 plarsen joined #gluster
18:04 ahino joined #gluster
18:12 cholcombe i know gluster doesn't support this, but what happens in the case of heterogeneous brick sizes?  Does it size against the smallest, the largest, or something else?
18:19 johnnyNumber5 joined #gluster
18:20 cholcombe nvm, i can see the hash algo is going to overweight some bricks, which will cause issues
18:24 jbrooks joined #gluster
18:34 valkyr3e joined #gluster
18:36 atalur joined #gluster
18:42 johnnyNumber5 joined #gluster
18:44 nirokato joined #gluster
19:01 jbrooks joined #gluster
19:14 madwizard joined #gluster
19:41 jbrooks joined #gluster
20:05 yoavz Hey, I have a big issue and I hope someone here can assist.
20:06 yoavz We mistakenly deleted files directly from the brick directories, and now the space of the deleted files is still held by the .glusterfs directory inside each brick directory.
20:06 yoavz Is there any way I can restore them? It's a 2-node cluster, and this volume is ~10TB in size; we really need to get this restored :[
20:08 nathwill joined #gluster
20:20 baber joined #gluster
20:32 nathwill joined #gluster
20:40 farhorizon joined #gluster
20:50 bowhunter joined #gluster
21:41 subscope joined #gluster
21:42 W_v_D joined #gluster
22:18 farhorizon joined #gluster
22:54 vbellur yoavz: do you have files on the other brick?
22:57 vbellur yoavz: if you do, I would recommend doing "find . | xargs stat" from the root of a fuse mount point
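vbellur's one-liner works, but a plain `find . | xargs stat` breaks on filenames containing spaces. A null-delimited variant, wrapped in a helper function purely for illustration (`heal_walk` is not a Gluster command):

```shell
# Walk a directory tree and stat every entry. Run from the root of a
# fuse mount of a replicated Gluster volume, the lookups make the
# client notice files that exist on only one brick and queue heals.
heal_walk() {
    # -print0 / -0 keep filenames with spaces or newlines intact;
    # stat's output is discarded, only the lookups matter.
    (cd "$1" && find . -print0 | xargs -0 stat > /dev/null)
}

# On a real cluster this would target the fuse mount root, e.g.:
# heal_walk /mnt/glustervol
```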
23:29 brian83 joined #gluster
23:58 plarsen joined #gluster
