
IRC log for #gluster, 2017-10-02


All times are shown in UTC.

Time Nick Message
00:34 humblec joined #gluster
01:00 victori joined #gluster
01:01 humblec joined #gluster
01:29 shyam joined #gluster
01:55 ilbot3 joined #gluster
01:55 Topic for #gluster is now Gluster Community - https://www.gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - https://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:57 nirokato joined #gluster
02:02 gospod3 joined #gluster
02:28 baber joined #gluster
02:56 victori joined #gluster
03:10 atrius joined #gluster
03:13 ahino joined #gluster
03:13 atinm joined #gluster
03:14 victori joined #gluster
03:21 victori joined #gluster
03:31 psony joined #gluster
03:45 rouven joined #gluster
04:03 msvbhat joined #gluster
04:26 psony_ joined #gluster
05:15 xavih joined #gluster
05:21 logan- joined #gluster
05:22 atinm joined #gluster
05:35 msvbhat joined #gluster
05:53 logan- joined #gluster
06:03 jkroon joined #gluster
06:10 psony_ joined #gluster
06:18 jtux joined #gluster
06:24 renout joined #gluster
06:25 dominicpg joined #gluster
06:28 rwheeler joined #gluster
06:37 [diablo] joined #gluster
06:46 msvbhat joined #gluster
07:12 ivan_rossi joined #gluster
07:12 ivan_rossi left #gluster
07:41 stoff1973 joined #gluster
07:42 mbukatov joined #gluster
07:57 atinm joined #gluster
07:59 fsimonce joined #gluster
08:02 _KaszpiR_ joined #gluster
08:15 lcami1 joined #gluster
08:31 msvbhat joined #gluster
08:36 ahino joined #gluster
08:37 mdavidson joined #gluster
08:38 fenikso joined #gluster
08:46 rwheeler joined #gluster
08:55 rouven joined #gluster
08:59 rouven joined #gluster
09:03 ThHirsch joined #gluster
09:16 nh2 joined #gluster
09:27 msvbhat joined #gluster
10:00 Wizek_ joined #gluster
10:20 ashka joined #gluster
10:20 ashka joined #gluster
10:28 shyam joined #gluster
10:51 bartden joined #gluster
10:51 bartden hi, can i mount a gluster 3.10 volume on a client with gluster 3.7.5 installed?
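
(A rough sketch of how one might check compatibility before attempting the mount; the hostname, volume name and mount point below are placeholders, not from the log. Old clients are generally rejected only when the cluster's op-version has been raised beyond what they understand, so checking that first is the safer route.)

    # Client-side package version
    glusterfs --version

    # On a 3.10 server, show the cluster's current operating version
    # (an older client cannot mount once this exceeds what it supports)
    gluster volume get all cluster.op-version

    # Try the mount and watch the client log if it fails
    mount -t glusterfs gfs-server:/myvol /mnt/myvol
    tail -f /var/log/glusterfs/mnt-myvol.log
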
10:52 Wizek_ joined #gluster
11:04 rouven joined #gluster
11:15 msvbhat joined #gluster
11:16 Wizek_ joined #gluster
11:43 skoduri joined #gluster
12:19 kramdoss_ joined #gluster
12:19 rwheeler joined #gluster
12:29 dlambrig joined #gluster
12:33 shyam joined #gluster
12:35 msvbhat joined #gluster
12:42 msvbhat joined #gluster
12:56 dorvan joined #gluster
12:56 dorvan hi all
13:04 ic0n_ joined #gluster
13:05 msvbhat joined #gluster
13:07 jstrunk joined #gluster
13:18 baber joined #gluster
13:24 dominicpg joined #gluster
13:24 lcami1 left #gluster
13:41 skylar joined #gluster
13:44 plarsen joined #gluster
13:54 marlinc joined #gluster
14:00 farhorizon joined #gluster
14:05 hmamtora joined #gluster
14:05 hmamtora_ joined #gluster
14:16 mbukatov joined #gluster
14:17 vbellur joined #gluster
14:21 tannerb3 joined #gluster
14:25 bartden joined #gluster
14:28 major joined #gluster
14:43 rouven_ joined #gluster
14:44 vbellur joined #gluster
14:44 TBlaar2 joined #gluster
14:45 atinm joined #gluster
14:46 mallorn left #gluster
14:50 marbu joined #gluster
14:51 tannerb3 I have one gluster server using a massive number of inodes. I've checked everything I can think of, but haven't been able to find any significant number of files except some related to geo-rep in /var/lib/misc/glusterfsd. One thought is that some process may be hanging on to deleted FDs?
14:51 tannerb3 To be more specific, the inode usage is on my root volume, not a gluster volume.
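
(A minimal sketch of the checks that usually narrow this down; `du --inodes` needs a reasonably recent GNU coreutils, and the deleted-FD hypothesis can be tested directly with lsof.)

    # Which filesystem is actually running out of inodes?
    df -i

    # Inode count per top-level directory, staying on the root filesystem
    du --inodes -x -d 1 / | sort -n | tail

    # Open-but-unlinked files: lsof +L1 lists open files with a link count below 1
    lsof +L1 | head
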
14:56 john joined #gluster
14:58 Guest47834 Hi guys, I'm trying to set up Gluster 3.12 geo-replication on my VMs for testing, but there seem to be issues with geo-replication.
14:59 Guest47834 Geo-replication status reports as Faulty, where I expect 50% active and 50% passive connections.
14:59 Guest47834 For example:
14:59 Guest47834 gluster volume geo-replication gfsvol geo-rep-user@gfs4::gfsvol_rep status
                 MASTER NODE    MASTER VOL    MASTER BRICK        SLAVE USER      SLAVE                            SLAVE NODE    STATUS    CRAWL STATUS    LAST_SYNCED
                 ----------------------------------------------------------------------------------------------------------------------------------------------------
                 gfs3           gfsvol        /gfs/brick1/gv0
14:59 glusterbot Guest47834: --------------------------------------------------------------------------------------------------------------------------------------------------'s karma is now -1
15:00 Guest47834 gfs3           gfsvol        /gfs/brick1/gv0     geo-rep-user    geo-rep-user@gfs4::gfsvol_rep    N/A           Faulty    N/A             N/A
                 gfs3           gfsvol        /gfs/brick2/gv0     geo-rep-user    geo-rep-user@gfs4::gfsvol_rep    N/A           Faulty    N/A             N/A
                 gfs3           gfsvol        /gfs/arbiter/gv0    geo-rep-user    geo-rep-user@gfs4::gfsvol_rep    N/A           F
15:01 Guest47834 has anyone tested this successfully?
15:02 ivan_rossi joined #gluster
15:02 ivan_rossi left #gluster
15:06 wushudoin joined #gluster
15:08 jkroon joined #gluster
15:09 farhorizon joined #gluster
15:09 tannerb3 Guest47834, check your geo-rep logs /var/log/glusterd/geo-replication/
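
(A short sketch of where to look, reusing the volume and host names from the log above; on many installs the geo-replication logs live under /var/log/glusterfs/ rather than /var/log/glusterd/, and the exact directory layout varies slightly between versions.)

    # Per-brick session status with extra columns
    gluster volume geo-replication gfsvol geo-rep-user@gfs4::gfsvol_rep status detail

    # Master-side worker logs
    tail -f /var/log/glusterfs/geo-replication/*/*.log

    # Slave-side logs on gfs4
    tail -f /var/log/glusterfs/geo-replication-slaves/*.log
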
15:10 Wizek__ joined #gluster
15:14 sage__ joined #gluster
15:17 d0minicpg joined #gluster
15:22 shyam joined #gluster
15:24 Guest47834 I got this in the errors:
15:24 Guest47834 [2017-10-02 15:23:14.873392] I [master(/gfs/brick1/gv0):1458:crawl] _GMaster: slave's time  stime=(1506637819, 0)
                 [2017-10-02 15:23:15.420205] I [master(/gfs/brick1/gv0):1860:syncjob] Syncer: Sync Time Taken   duration=0.0725 num_files=98    job=2   return_code=12
                 [2017-10-02 15:23:15.420499] E [resource(/gfs/brick1/gv0):208:errlog] Popen: command returned error cmd=rsync -aR0 --inplace --files-from=- --super --stats --numeric-id
15:25 Guest47834 [2017-10-02 15:23:15.436161] I [syncdutils(/gfs/brick1/gv0):271:finalize] <top>: exiting.
                 [2017-10-02 15:23:15.444261] I [repce(/gfs/brick1/gv0):92:service_loop] RepceServer: terminating on reaching EOF.
                 [2017-10-02 15:23:15.444478] I [syncdutils(/gfs/brick1/gv0):271:finalize] <top>: exiting.
                 [2017-10-02 15:23:15.593774] I [master(/gfs/brick2/gv0):1458:crawl] _GMaster: slave's time  stime=(1506637819, 0)
                 [2017-10-02 15:23:15.8460
15:25 Guest47834 [2017-10-02 15:23:15.846012] I [monitor(monitor):363:monitor] Monitor: worker died in startup phase brick=/gfs/brick1/gv0
15:25 Guest47834 [2017-10-02 15:23:15.848267] I [gsyncdstatus(monitor):242:set_worker_status] GeorepStatus: Worker Status Change status=Faulty
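
(The return_code=12 in the paste above is rsync's "error in rsync protocol data stream", which in geo-replication setups commonly means the rsync on the slave end is missing, too old, or dying early. A quick check, reusing the user and host from the log:)

    # Compare rsync on both ends of the session
    rsync --version
    ssh geo-rep-user@gfs4 rsync --version

    # Confirm the geo-rep user can actually run commands over ssh
    ssh geo-rep-user@gfs4 'echo ssh ok'
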
15:27 kpease joined #gluster
15:28 Guest47834 It seems like only one brick has the correct number of files in my case. The other bricks are not replicated correctly in geo-replication.
15:53 ThHirsch joined #gluster
16:07 vbellur joined #gluster
16:17 baber joined #gluster
16:22 ronrib_ joined #gluster
16:41 Teraii joined #gluster
16:57 baber joined #gluster
17:16 msvbhat joined #gluster
17:23 arpu joined #gluster
17:35 rwheeler joined #gluster
17:35 vbellur joined #gluster
17:39 rouven_ joined #gluster
17:58 rouven_ joined #gluster
18:12 dude1234 joined #gluster
18:12 dlambrig joined #gluster
18:44 _KaszpiR_ joined #gluster
18:58 vbellur joined #gluster
19:06 msvbhat_ joined #gluster
19:14 ThHirsch joined #gluster
19:17 major joined #gluster
19:43 major joined #gluster
20:18 vbellur joined #gluster
20:54 elitecoder joined #gluster
20:55 elitecoder Hey all, so I'm using glusterfs 3.11.3. I'm wondering if anyone has noticed an issue where doing a 'vol heal full' either locks up the volume or brings it to a crawl. Three times now I've done a heal full after a system update/reboot, and it has taken down our webservers: gluster responds slowly, apache processes pile up, and then new connections are denied.
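
(A hedged sketch of how heal pressure is usually inspected and dialed down; the volume name gv0 is a placeholder and the values are only illustrative, not recommendations. After a clean reboot an index heal, i.e. 'gluster volume heal <vol>' without 'full', is often sufficient and far cheaper than a full crawl.)

    # How much heal work is queued per brick
    gluster volume heal gv0 statistics heal-count

    # Options commonly tuned to make healing less aggressive
    # (option names exist in 3.8+; values here are illustrative)
    gluster volume set gv0 cluster.shd-max-threads 1
    gluster volume set gv0 cluster.data-self-heal-algorithm diff
    gluster volume set gv0 cluster.background-self-heal-count 4
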
21:10 msvbhat joined #gluster
21:50 baber joined #gluster
21:55 vbellur joined #gluster
21:57 vbellur1 joined #gluster
21:57 farhorizon joined #gluster
22:00 vbellur joined #gluster
22:12 JoeJulian elitecoder: I did not. I saw some issues with files not showing up, upgraded to 3.12.1 and everything's looking good now.
22:12 msvbhat joined #gluster
22:13 elitecoder JoeJulian: ok.
22:13 msvbhat_ joined #gluster
22:24 vbellur joined #gluster
22:42 flomko joined #gluster
22:49 uebera|| joined #gluster
22:49 uebera|| joined #gluster
23:13 msvbhat joined #gluster
23:17 elitecoder left #gluster
