
IRC log for #gluster, 2017-01-10


All times shown according to UTC.

Time Nick Message
00:37 ueberall joined #gluster
00:37 ueberall joined #gluster
01:04 shdeng joined #gluster
01:31 plarsen joined #gluster
01:49 plarsen joined #gluster
01:52 social joined #gluster
01:55 Peppard joined #gluster
02:18 om2 joined #gluster
02:37 loadtheacc joined #gluster
02:48 ilbot3 joined #gluster
02:48 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:50 derjohn_mob joined #gluster
03:06 magrawal joined #gluster
03:13 jiffin joined #gluster
03:25 jiffin joined #gluster
03:46 ppai joined #gluster
03:51 shyam1 joined #gluster
03:51 atinm_ joined #gluster
03:54 aravindavk joined #gluster
03:58 atinm joined #gluster
04:07 gyadav_ joined #gluster
04:09 gyadav joined #gluster
04:10 ppai_ joined #gluster
04:10 atinmu joined #gluster
04:10 lalatend1M joined #gluster
04:10 gyadav_ joined #gluster
04:10 sac` joined #gluster
04:11 shruti` joined #gluster
04:12 pkalever joined #gluster
04:12 shyam joined #gluster
04:12 rjoseph joined #gluster
04:12 gyadav__ joined #gluster
04:13 sac joined #gluster
04:13 gyadav_ joined #gluster
04:15 gyadav__ joined #gluster
04:15 atinmu joined #gluster
04:15 shyam joined #gluster
04:15 rjoseph joined #gluster
04:15 shruti joined #gluster
04:15 victori joined #gluster
04:15 ppai_ joined #gluster
04:16 lalatenduM joined #gluster
04:19 ashiq joined #gluster
04:23 sac joined #gluster
04:28 victori joined #gluster
04:28 Lee1092 joined #gluster
04:29 ndarshan joined #gluster
04:31 victori_ joined #gluster
04:34 jiffin joined #gluster
04:36 kramdoss_ joined #gluster
04:42 skoduri joined #gluster
04:45 Prasad joined #gluster
04:47 nishanth joined #gluster
04:58 Shu6h3ndu_ joined #gluster
05:00 RameshN joined #gluster
05:06 gem joined #gluster
05:06 ankitraj joined #gluster
05:07 karthik_us joined #gluster
05:09 om2 joined #gluster
05:12 k4n0 joined #gluster
05:22 prasanth joined #gluster
05:32 ashiq joined #gluster
05:36 rafi joined #gluster
05:37 kotreshhr joined #gluster
05:47 msvbhat joined #gluster
05:48 gyadav_ joined #gluster
05:53 Alghost joined #gluster
05:54 sanoj joined #gluster
05:58 gyadav__ joined #gluster
05:59 sbulage joined #gluster
06:02 Karan joined #gluster
06:02 gyadav_ joined #gluster
06:03 kdhananjay joined #gluster
06:06 apandey joined #gluster
06:08 rastar joined #gluster
06:08 gyadav__ joined #gluster
06:13 sona joined #gluster
06:17 shyam joined #gluster
06:17 susant joined #gluster
06:18 hgowtham joined #gluster
06:28 k4n0 joined #gluster
06:28 gyadav_ joined #gluster
06:29 owlbot joined #gluster
06:30 percevalbot joined #gluster
06:34 Philambdo joined #gluster
06:35 asriram joined #gluster
06:44 itisravi joined #gluster
06:47 percevalbot joined #gluster
06:48 Humble joined #gluster
06:50 owlbot joined #gluster
06:50 gyadav__ joined #gluster
06:54 k4n0 joined #gluster
07:01 [diablo] joined #gluster
07:10 sbulage joined #gluster
07:14 kraynor5b__ joined #gluster
07:24 mhulsman joined #gluster
07:26 mhulsman joined #gluster
07:27 sbulage joined #gluster
07:33 jtux joined #gluster
07:54 rastar joined #gluster
07:58 mbukatov joined #gluster
08:00 sanoj_ joined #gluster
08:00 darshan joined #gluster
08:00 MusiciAtin joined #gluster
08:00 hgowtham_ joined #gluster
08:00 ppai__ joined #gluster
08:00 kramdoss__ joined #gluster
08:00 gyadav__ joined #gluster
08:01 ankitraj joined #gluster
08:01 pkalever joined #gluster
08:01 Shu6h3ndu joined #gluster
08:01 ashiq joined #gluster
08:01 nishanth joined #gluster
08:01 Karan joined #gluster
08:01 RameshN joined #gluster
08:01 Humble joined #gluster
08:02 sona joined #gluster
08:07 paraenggu joined #gluster
08:12 [diablo] joined #gluster
08:13 malevolent joined #gluster
08:14 karthik_us joined #gluster
08:16 kramdoss__ joined #gluster
08:16 MusiciAtin joined #gluster
08:16 hgowtham_ joined #gluster
08:16 ppai__ joined #gluster
08:16 rastar joined #gluster
08:17 darshan joined #gluster
08:17 jri joined #gluster
08:18 sanoj_ joined #gluster
08:25 armyriad joined #gluster
08:35 fsimonce joined #gluster
08:42 flomko joined #gluster
08:47 olleh joined #gluster
08:47 kraynor5b_ joined #gluster
08:47 olleh Hello, i have gluster-related question. Do you think it is a good idea to run gluster in docker container?
08:52 ivan_rossi joined #gluster
08:54 alezzandro joined #gluster
08:55 flomko Hi all! I have a strange problem with glusterfs. A subvolume of my distributed volume doesn't mount, and the log says "[2017-01-10 07:45:03.447311] E [socket.c:2395:socket_connect_finish] 0-massive-client-1: connection to %targetip%:24007 failed (Connection timed out)". But other clients can access it, this client can telnet to that port, and the firewall is not preventing the connection (they are all on the same subnet).
08:57 olleh Hello, i have gluster-related question. Do you think it is a good idea to run gluster in docker container?
08:59 gls joined #gluster
09:01 mbukatov joined #gluster
09:01 lkoranda joined #gluster
09:04 ivan_rossi olleh: gluster clients or gluster servers? Clients are OK; servers: people do, but it requires care. Containers are optimal for STATELESS apps. Filesystems are definitely not stateless.
09:07 hgowtham joined #gluster
09:08 csaba joined #gluster
09:08 ivan_rossi s/Containers/Docker containers/
09:08 glusterbot What ivan_rossi meant to say was: olleh: gluster clients or gluster servers? Clients are OK; servers: people do, but it requires care. Docker containers are optimal for STATELESS apps. Filesystems are definitely not stateless.
09:10 dspisla joined #gluster
09:12 olleh ivan_rossi: gluster servers
09:16 dspisla Hello, I am from Germany and have this issue for discussion: I am running a gluster volume on a CentOS machine and want to use libgfapi to connect from my CentOS VM to this volume. For this purpose I use the GlusterClient class. This is my initialization: GlusterClient cl = new GlusterClient("172.30.6.238", 0, "tcp"). But the connection cannot be established with these parameters; I receive an exception:
09:16 dspisla Exception in thread "main" java.io.IOException: Error connecting to gluster volume:172.30.6.238:0/gv0 at org.gluster.fs.GlusterClient.connect(GlusterClient.java:152) at org.gluster.fs.GlusterMain.main(GlusterMain.java:122)
09:17 dspisla The name of the volume is gv0
09:18 nishanth joined #gluster
09:21 ankitraj joined #gluster
09:21 derjohn_mob joined #gluster
09:26 k4n0 joined #gluster
09:27 rastar dspisla: does ping work for the IP?
09:28 rastar dspisla: try replacing 0 with 24007
09:28 rastar dspisla: check firewall
09:28 rastar dspisla: if a fuse mount from the same machine works, then we have to check the java bindings
09:29 dspisla @rastar ping is not working
09:30 rastar dspisla: that must be the problem. please check network config.
09:30 dspisla @rastar Ok, thanks
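rastar's checklist (ping the server, try port 24007, check the firewall) can be sketched as a small shell script. The host IP and port below come from the log; everything else is a generic connectivity probe, not gluster-specific tooling:

```shell
#!/usr/bin/env bash
# Connectivity checklist for a gluster server, per rastar's suggestions.
# HOST is the server from the log; 24007 is the glusterd management port.

check_port() {
  # Succeeds if a TCP connection to $1:$2 opens within 3 seconds
  # (uses bash's /dev/tcp pseudo-device, so no telnet/nc needed).
  timeout 3 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

HOST="172.30.6.238"
PORT=24007

ping -c 2 -W 1 "$HOST" >/dev/null 2>&1 \
  && echo "ping ok" || echo "ping FAILED - check network config first"
check_port "$HOST" "$PORT" \
  && echo "port $PORT reachable" || echo "port $PORT unreachable"
```

In dspisla's case ping already fails, which matches rastar's diagnosis: fix the network path before looking at the java bindings.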
09:35 jockek joined #gluster
09:35 Telsin joined #gluster
09:35 csaba joined #gluster
09:35 marbu joined #gluster
09:37 wiza_ joined #gluster
09:37 NuxRo joined #gluster
09:37 ketarax joined #gluster
09:37 shortdudey123 joined #gluster
09:37 jesk joined #gluster
09:37 masber joined #gluster
09:37 rossdm joined #gluster
09:39 ItsMe` joined #gluster
09:39 mrEriksson joined #gluster
09:39 mlhess joined #gluster
09:39 logan- joined #gluster
09:39 Igel joined #gluster
09:39 soloslinger joined #gluster
09:39 rofl____ joined #gluster
09:39 kenansulayman joined #gluster
09:39 snixor joined #gluster
09:39 cvstealth joined #gluster
09:39 yoavz joined #gluster
09:39 Ramereth joined #gluster
09:39 Vaelatern joined #gluster
09:39 rideh joined #gluster
09:39 javi404 joined #gluster
09:40 primusinterpares joined #gluster
09:40 bhakti joined #gluster
09:40 squeakyneb joined #gluster
09:40 Slashman joined #gluster
09:40 lkoranda joined #gluster
09:41 eryc joined #gluster
09:41 semiosis joined #gluster
09:41 semiosis joined #gluster
09:42 [o__o] joined #gluster
09:42 flying joined #gluster
09:42 AnkitRaj_ joined #gluster
09:43 PotatoGim joined #gluster
09:44 nishanth joined #gluster
09:44 AnkitRaj_ joined #gluster
09:44 jerrcs_ joined #gluster
09:45 misc joined #gluster
09:46 msvbhat joined #gluster
09:49 RameshN joined #gluster
09:50 ankitraj joined #gluster
09:56 ankitraj joined #gluster
10:02 gem joined #gluster
10:02 msvbhat joined #gluster
10:08 LiberalSquash joined #gluster
10:11 apandey joined #gluster
10:14 nishanth joined #gluster
10:14 BatS9 joined #gluster
10:15 BatS9 ls
10:21 BatS9 I triggered a full heal after replacing 2 bricks, which sent performance down the drain, close to an unusable state, so the new bricks were taken offline again. Data was then synced manually to the new bricks.
10:22 BatS9 When I start the server with the 2 bricks in question, performance goes down the drain again, as expected, and in /indices/xattrop/ on the 2 bricks I'm looking at about 2.5M entries
10:23 BatS9 Is it possible to cancel the full self-heal and let the bricks do normal healing when I bring them online, or do you have some other suggestion?
10:24 paraenggu joined #gluster
10:26 Karan joined #gluster
10:29 BatS9 Server version is 3.7
10:33 poornima joined #gluster
10:37 RameshN joined #gluster
10:47 Karan joined #gluster
10:50 hybrid512 joined #gluster
10:57 k4n0 joined #gluster
11:03 aravindavk joined #gluster
11:23 Caveat4U joined #gluster
11:25 msvbhat joined #gluster
11:31 hgowtham REMINDER: Gluster Community Bug Triage meeting in ~30 minutes
11:36 itisravi BatS9: If you've restarted the bricks, then it is the normal index heal and not the full heal in action.
11:36 itisravi BatS9: you could disable self-heal daemon and let heals happen via the client side.
11:37 nishanth joined #gluster
11:38 BatS9 hmm
11:38 BatS9 So what, then, are the 2.5M entries in xattrop on Server1 (the online server)? It seems like they are related to Server4 (currently offline)
11:39 bfoster joined #gluster
11:40 itisravi xattrop entries are the files that need heal.
11:40 itisravi rather the gfids of the files that need heal.
11:40 darshan joined #gluster
11:43 rastar joined #gluster
11:44 bfoster joined #gluster
11:47 BatS9 This is just speculation, but is there not a correlation between the system being extremely slow when bringing the new node online and the number of files in xattrop? I'm basing this on "time ls xattrop/" returning real 3m21.255s; that's why I'm asking if it's purgeable or whether that would lead to issues.
11:47 BatS9 Or even if this is working as intended
11:56 shyam joined #gluster
11:57 Saravanakmr joined #gluster
11:58 kdhananjay joined #gluster
11:58 itisravi BatS9: yes I'm guessing the correlation is that the self-heal daemon is consuming resources to heal, making the i/o performance slow.
11:59 k4n0 joined #gluster
12:00 itisravi purging is not a good idea. let the client side heals remove them as and when the files get healed.
12:00 * itisravi gtg now
12:05 BatS9 Ok, thanks for your answer itisravi :)
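Per itisravi, each entry under a brick's xattrop index is (roughly) the gfid of a file awaiting heal. A sketch of counting them without the name sort that made BatS9's `ls` take 3m21s — `find` streams entries instead of sorting them. The brick path is a placeholder:

```shell
#!/usr/bin/env bash
# Count pending-heal gfid entries in a brick's xattrop index.
# Plain `ls` sorts all names (slow at 2.5M entries); `find` does not.

count_pending() {
  local index="$1/.glusterfs/indices/xattrop"
  [ -d "$index" ] || { echo 0; return; }   # not a brick root: report zero
  # -mindepth 1 skips the directory itself; the count is approximate
  # (the index may also hold a base xattrop-<uuid> entry).
  find "$index" -mindepth 1 -maxdepth 1 | wc -l
}

BRICK="${BRICK:-/data/brick1}"   # placeholder brick root, adjust to yours
echo "pending heal entries on $BRICK: $(count_pending "$BRICK")"
```

itisravi's suggestion to let client-side heals do the work instead of the daemon maps to `gluster volume set <volname> cluster.self-heal-daemon off` (re-enable with `on` afterwards); purging the index by hand is not a good idea.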
12:09 lalatenduM joined #gluster
12:09 ankitraj joined #gluster
12:09 rastar joined #gluster
12:09 jiffin joined #gluster
12:09 pkalever joined #gluster
12:09 LiberalSquash joined #gluster
12:10 susant joined #gluster
12:10 sac joined #gluster
12:12 darshan joined #gluster
12:12 dspisla joined #gluster
12:13 dspisla left #gluster
12:17 jiffin1 joined #gluster
12:20 poornima joined #gluster
12:21 ppai__ joined #gluster
12:23 kotreshhr joined #gluster
12:34 jiffin1 joined #gluster
12:40 gem joined #gluster
12:51 BatS9_ joined #gluster
12:52 Gambit15 I've got some odd behaviour going on with one of my volumes. There are 3 volumes & each server (2x(2+1)) hosts a brick for each volume. For one of these volumes, 3 of the bricks are down (I think), but that doesn't entirely make sense, because all of the other bricks on the servers are fine, and they're on the same physical volume.
12:52 Gambit15 ...hope that makes sense
12:53 Gambit15 I'm unable to get any data about the volume from the CLI, "gluster volume status data" just returns "Another transaction is in progress for data. Please try again after sometime."
12:54 Gambit15 It's been giving the same response for some 24hrs now
12:55 Gambit15 Yesterday, "gluster volume heal data info" showed there were a couple of files being synced. Today, it shows different files on different bricks
12:56 BatS9 joined #gluster
13:10 ashiq joined #gluster
13:10 Wizek joined #gluster
13:20 rastar joined #gluster
13:25 ppai__ joined #gluster
13:30 rwheeler joined #gluster
13:38 rwheeler joined #gluster
13:44 unclemarc joined #gluster
13:44 nishanth joined #gluster
13:47 poornima joined #gluster
13:54 Wizek_ joined #gluster
14:01 susant left #gluster
14:16 squizzi joined #gluster
14:34 plarsen joined #gluster
14:37 kpease joined #gluster
14:44 kpease joined #gluster
14:44 skylar joined #gluster
14:45 kpease_ joined #gluster
14:45 msvbhat joined #gluster
14:54 shaunm joined #gluster
14:58 Wizek_ joined #gluster
15:01 jarbod_ joined #gluster
15:01 plarsen joined #gluster
15:16 om2 joined #gluster
15:19 Gambit15 joined #gluster
15:20 jiffin joined #gluster
15:27 susant joined #gluster
15:41 farhorizon joined #gluster
15:46 farhoriz_ joined #gluster
15:47 sbulage joined #gluster
15:52 vbellur joined #gluster
15:56 ankitraj joined #gluster
16:02 farhorizon joined #gluster
16:05 farhorizon joined #gluster
16:13 ira joined #gluster
16:13 mb_ joined #gluster
16:20 RameshN joined #gluster
16:21 Caveat4U joined #gluster
16:23 Acinonyx joined #gluster
16:26 mhulsman joined #gluster
16:30 kpease joined #gluster
16:35 primehaxor joined #gluster
16:36 soloslinger joined #gluster
16:37 rastar joined #gluster
16:37 mb_ joined #gluster
16:38 alvinstarr joined #gluster
16:54 jdossey joined #gluster
16:57 kpease joined #gluster
17:13 nishanth joined #gluster
17:16 kpease_ joined #gluster
17:28 gls joined #gluster
17:30 k4n0 joined #gluster
17:31 ankitraj joined #gluster
17:50 bbooth joined #gluster
17:59 vbellur joined #gluster
18:00 farhoriz_ joined #gluster
18:02 unclemarc joined #gluster
18:02 rastar joined #gluster
18:04 Karan joined #gluster
18:05 Karan joined #gluster
18:12 ashiq joined #gluster
18:37 sona joined #gluster
18:51 k4n0 joined #gluster
18:51 mb_ joined #gluster
18:52 om2 joined #gluster
18:56 ashiq_ joined #gluster
19:00 msvbhat joined #gluster
19:03 farhorizon joined #gluster
19:12 asriram joined #gluster
19:16 pulli joined #gluster
19:16 mhulsman joined #gluster
19:23 bbooth joined #gluster
19:35 jdossey joined #gluster
19:35 ahino joined #gluster
19:46 javi404 joined #gluster
20:01 farhoriz_ joined #gluster
20:09 derjohn_mob joined #gluster
20:30 pulli joined #gluster
20:34 bbooth joined #gluster
20:47 mhulsman joined #gluster
20:59 Philambdo joined #gluster
21:05 BlackoutWNCT1 joined #gluster
21:06 john51 joined #gluster
21:07 social joined #gluster
21:09 timotheus1 joined #gluster
21:10 timotheus1 joined #gluster
21:19 kpease joined #gluster
21:22 shaunm joined #gluster
21:32 mhulsman joined #gluster
21:38 JoeJulian Gambit15: you can safely restart glusterd. That will allow management commands to succeed again.
21:38 JoeJulian @paste
21:38 glusterbot JoeJulian: For a simple way to paste output, install netcat (if it's not already) and pipe your output like: | nc termbin.com 9999
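JoeJulian's fix for the stuck "Another transaction is in progress" state can be sketched as follows; glusterd is the management daemon only, so restarting it does not interrupt brick processes or client I/O on a started volume. The volume name `data` is Gambit15's; the commands are left commented since they only make sense on a gluster node:

```shell
#!/usr/bin/env bash
# Clear a stuck management transaction by restarting glusterd on each node.
# Safe on a live volume: bricks and client mounts are separate processes.

restart_glusterd() {
  if command -v systemctl >/dev/null 2>&1; then
    systemctl restart glusterd
  else
    service glusterd restart    # sysvinit fallback
  fi
}

# restart_glusterd              # run on each node in turn
# gluster volume status data    # should respond again instead of
#                               # "Another transaction is in progress"
```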
21:59 skylar joined #gluster
22:00 Caveat4U @JoeJulian I need to change the hostname of a gluster brick. Basically, rather than using gluster.nmdev.us, I'll be using gluster.staging. It's the same brick, same machine, just a different DNS zone
22:00 Caveat4U When I look around for renaming/repointing instructions, it seems like everyone that tries ends up blowing up something
22:00 Caveat4U I'm spinning up a test cluster right now
22:00 Caveat4U Would you have any tips?
22:04 bbooth joined #gluster
22:18 farhorizon joined #gluster
22:18 cliluw joined #gluster
22:27 jobewan joined #gluster
22:31 JoeJulian Caveat4U: Just stop the volume, stop glusterd, do a sed replace of the hostname (ie find /var/lib/glusterd -type f -exec sed -i ...), then start everything back up again.
22:34 Caveat4U @JoeJulian I'm running a replicated environment with 2 bricks. That should mean that if I stop glusterd on one, the gluster mount will still function, right?
22:35 JoeJulian You'll need to stop the volume if you're going to change the hostnames.
22:35 Caveat4U oh
22:35 JoeJulian So no, it will be down.
22:35 Caveat4U Well
22:35 Caveat4U fudge
22:35 JoeJulian :(
22:35 Caveat4U Always up environment
22:35 JoeJulian cnames?
22:36 Caveat4U yea
22:36 JoeJulian Nah, that won't really do anything for you either.
22:36 Caveat4U I was thinking about doing a replace brick
22:36 Caveat4U It seemed like overkill
22:36 JoeJulian Can't replace in-place (yet).
22:36 Caveat4U I'd love to bump this guy: https://bugzilla.redhat.com/show_bug.cgi?id=1038866
22:36 Caveat4U :-P
22:36 glusterbot Bug 1038866: low, unspecified, ---, bugs, NEW , [FEAT] command to rename peer hostname
22:37 Caveat4U For the...5 of us that have had to do it
22:37 Caveat4U According to my exhaustive crawl of gluster message boards
22:37 JoeJulian It comes up pretty regularly.
22:37 Caveat4U Rly? Hooray! I'm not alone!
22:37 JoeJulian Apparently a lot of people don't plan ahead. ;)
22:37 Caveat4U If I *was able to take the entire volume offline
22:38 Caveat4U And do the find replace and just reboot
22:38 JoeJulian You could do that, though reboot is overkill.
22:38 Caveat4U Are there any other directories I need to scour besides /var/lib/glusterd?
22:38 JoeJulian No
22:38 Caveat4U This feels absolutely terrifying :-)
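JoeJulian's rename procedure, sketched as a script. The old and new hostnames are the ones from Caveat4U's description and stand in for your own; the gluster stop/start steps are commented because they must be coordinated across all nodes while the volume is down:

```shell
#!/usr/bin/env bash
# Rename a peer hostname in glusterd's on-disk state, per JoeJulian.
# Run the rewrite on EVERY node, with the volume stopped and glusterd down.

OLD="gluster.nmdev.us"   # placeholder: current hostname
NEW="gluster.staging"    # placeholder: desired hostname

rewrite_hostname() {
  # Replace $2 with $3 in every regular file under directory $1.
  find "$1" -type f -exec sed -i "s/$2/$3/g" {} +
}

# 1. gluster volume stop <volname>              # volume must be down
# 2. systemctl stop glusterd                    # on every node
# 3. rewrite_hostname /var/lib/glusterd "$OLD" "$NEW"
# 4. systemctl start glusterd                   # on every node
# 5. gluster volume start <volname>
```

Back up /var/lib/glusterd before step 3; as the exchange notes, there is no in-place replace-brick or peer-rename command (bug 1038866), so this offline edit is the known route.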
22:39 arpu joined #gluster
22:48 Caveat4U joined #gluster
22:59 jdossey joined #gluster
23:25 PotatoGim joined #gluster
23:26 AppStore joined #gluster
23:47 Acinonyx joined #gluster
23:59 Caveat4U joined #gluster
