
IRC log for #gluster, 2016-12-06


All times shown according to UTC.

Time Nick Message
00:09 d0nn1e joined #gluster
00:34 haomaiwang joined #gluster
00:46 shdeng joined #gluster
00:51 shdeng joined #gluster
01:27 cliluw joined #gluster
01:28 haomaiwang joined #gluster
01:54 Lee1092 joined #gluster
02:04 shdeng joined #gluster
02:08 derjohn_mobi joined #gluster
02:14 haomaiwang joined #gluster
02:19 bwerthmann joined #gluster
02:27 p7mo joined #gluster
02:48 ilbot3 joined #gluster
02:48 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:56 shaunm joined #gluster
03:14 haomaiwang joined #gluster
03:22 magrawal joined #gluster
03:22 Wizek joined #gluster
03:39 nishanth joined #gluster
03:43 riyas joined #gluster
04:01 buvanesh_kumar joined #gluster
04:04 vbellur joined #gluster
04:07 masber joined #gluster
04:10 shubhendu joined #gluster
04:10 itisravi joined #gluster
04:14 haomaiwang joined #gluster
04:15 aravindavk joined #gluster
04:18 Karan joined #gluster
04:19 atinm joined #gluster
04:23 rwheeler joined #gluster
04:27 k4n0 joined #gluster
04:28 vbellur joined #gluster
04:32 shubhendu joined #gluster
04:33 kramdoss_ joined #gluster
04:44 sbulage joined #gluster
04:46 apandey joined #gluster
04:50 Prasad joined #gluster
04:52 Wizek joined #gluster
04:54 ndarshan joined #gluster
04:55 RameshN joined #gluster
04:59 nishanth joined #gluster
05:02 rafi joined #gluster
05:04 k4n0 joined #gluster
05:04 k4n0 joined #gluster
05:13 prasanth joined #gluster
05:14 haomaiwang joined #gluster
05:17 pdrakewe_ joined #gluster
05:20 ankitraj joined #gluster
05:26 aravindavk joined #gluster
05:29 alvinstarr joined #gluster
05:30 ShwethaHP joined #gluster
05:31 sanoj joined #gluster
05:36 javi404 joined #gluster
05:38 karthik_us joined #gluster
05:41 jiffin joined #gluster
05:42 hchiramm joined #gluster
05:42 susant joined #gluster
05:47 kotreshhr joined #gluster
05:51 ppai joined #gluster
05:54 Saravanakmr joined #gluster
05:55 ashiq joined #gluster
06:05 ShwethaHP joined #gluster
06:13 skoduri joined #gluster
06:13 hgowtham joined #gluster
06:14 haomaiwang joined #gluster
06:16 arc0 joined #gluster
06:20 msvbhat joined #gluster
06:25 edong23 joined #gluster
06:29 Philambdo joined #gluster
06:43 aravindavk joined #gluster
06:48 ppai joined #gluster
06:51 RameshN joined #gluster
06:53 itisravi joined #gluster
06:56 sbulage joined #gluster
07:10 mhulsman joined #gluster
07:11 mhulsman1 joined #gluster
07:14 haomaiwang joined #gluster
07:21 aravindavk joined #gluster
07:22 itisravi_ joined #gluster
07:23 msvbhat joined #gluster
07:24 tg2 joined #gluster
07:24 ppai joined #gluster
07:30 jkroon joined #gluster
07:32 alvinstarr joined #gluster
07:34 prth joined #gluster
07:34 RameshN joined #gluster
07:37 jtux joined #gluster
07:40 mhulsman joined #gluster
07:46 devyani7 joined #gluster
07:51 arc0 joined #gluster
07:52 mhulsman joined #gluster
07:52 ivan_rossi joined #gluster
08:01 Muthu joined #gluster
08:03 buvanesh_kumar joined #gluster
08:06 jri joined #gluster
08:09 jri_ joined #gluster
08:09 jkroon_ joined #gluster
08:10 mhulsman1 joined #gluster
08:11 [diablo] joined #gluster
08:13 mhulsman joined #gluster
08:14 haomaiwang joined #gluster
08:21 kramdoss_ joined #gluster
08:23 jkroon_ joined #gluster
08:41 mhulsman joined #gluster
08:52 fsimonce joined #gluster
08:54 ahino joined #gluster
08:57 riyas joined #gluster
09:04 poornima_ joined #gluster
09:05 ndarshan joined #gluster
09:09 Marbug joined #gluster
09:09 jkroon_ joined #gluster
09:10 mhulsman joined #gluster
09:14 haomaiwang joined #gluster
09:15 DaKnOb joined #gluster
09:16 panina joined #gluster
09:21 mhulsman joined #gluster
09:22 skoduri joined #gluster
09:24 mhulsman joined #gluster
09:27 derjohn_mobi joined #gluster
09:30 hybrid512 joined #gluster
09:31 DV joined #gluster
09:32 mhulsman joined #gluster
09:34 mhulsman joined #gluster
09:36 karthik_us joined #gluster
09:41 mhulsman joined #gluster
09:45 sanoj joined #gluster
09:45 mhulsman joined #gluster
09:49 flying joined #gluster
09:50 mhulsman joined #gluster
09:51 mhulsman joined #gluster
10:00 atinm joined #gluster
10:02 hackman joined #gluster
10:07 ppai joined #gluster
10:14 haomaiwang joined #gluster
10:16 kayn joined #gluster
10:17 cholcombe joined #gluster
10:20 kayn Hey guys. I'm running gluster v3.5.1 (the latest version from http://ftp.fau.de/centos/6/storage/x86_64/gluster-3.8/) on CentOS 6 and gluster crashes from time to time with the following backtrace: http://pastebin.com/raw/ZE3Zuu4t
10:20 kayn It smells to me like a memory leak...
10:20 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
10:21 kayn glusterbot: sorry, I will do next time ;-)
10:23 Slashman joined #gluster
10:23 jtux joined #gluster
10:24 kayn * the version 3.8.5-1
10:29 prth joined #gluster
10:43 cholcombe joined #gluster
10:50 buvanesh_kumar_ joined #gluster
10:55 Philambdo joined #gluster
11:14 haomaiwang joined #gluster
11:16 haomaiwang joined #gluster
11:29 prth joined #gluster
11:31 bfoster joined #gluster
11:33 jkroon_ joined #gluster
11:37 bfoster joined #gluster
11:38 mahendratech joined #gluster
11:42 mahendratech Hi, can anyone suggest a best practice for backing up a gluster volume?
11:46 msvbhat joined #gluster
11:48 Prasad joined #gluster
11:50 f0rpaxe joined #gluster
11:51 riyas joined #gluster
11:53 atinm joined #gluster
11:54 Saravanakmr #REMINDER: Gluster Community Bug Triage meeting at 12:00 UTC ( start ~in 5 minutes) in #gluster-meeting
11:55 mahendratech joined #gluster
12:09 ppai joined #gluster
12:11 fsimonce joined #gluster
12:22 buvanesh_kumar joined #gluster
12:31 rwheeler joined #gluster
12:32 jiffin1 joined #gluster
12:47 atinm joined #gluster
12:47 skoduri joined #gluster
12:52 johnmilton joined #gluster
12:52 ira joined #gluster
12:55 B21956 joined #gluster
12:56 nishanth joined #gluster
12:58 arpu joined #gluster
13:10 unclemarc joined #gluster
13:22 mhulsman joined #gluster
13:24 B21956 joined #gluster
13:27 ahino1 joined #gluster
13:30 grac joined #gluster
13:48 jiffin1 joined #gluster
13:57 skoduri joined #gluster
14:00 virusuy joined #gluster
14:12 sbulage joined #gluster
14:14 ahino joined #gluster
14:20 haomaiwang joined #gluster
14:23 mhulsman joined #gluster
14:23 plarsen joined #gluster
14:35 virusuy joined #gluster
14:37 shyam joined #gluster
14:38 skylar joined #gluster
14:39 kramdoss_ joined #gluster
14:51 bwerthmann joined #gluster
15:06 dnorman joined #gluster
15:14 dnorman joined #gluster
15:14 haomaiwang joined #gluster
15:20 jkroon joined #gluster
15:24 mhulsman1 joined #gluster
15:25 bwerthma1n joined #gluster
15:29 mhulsman joined #gluster
15:31 shubhendu joined #gluster
15:33 Gambit15 joined #gluster
15:34 squizzi joined #gluster
15:39 kayn left #gluster
15:40 hchiramm joined #gluster
15:42 farhorizon joined #gluster
15:42 Jacob843 joined #gluster
15:44 shubhendu joined #gluster
15:48 squizzi joined #gluster
15:51 avati joined #gluster
15:51 * avati is backkk
15:54 derjohn_mobi joined #gluster
15:59 dnorman joined #gluster
16:02 susant joined #gluster
16:04 rastar joined #gluster
16:06 loadtheacc joined #gluster
16:08 k4n0 joined #gluster
16:08 wushudoin joined #gluster
16:14 haomaiwang joined #gluster
16:15 mhulsman joined #gluster
16:26 riyas joined #gluster
16:30 mhulsman joined #gluster
16:30 mhulsman joined #gluster
16:38 RameshN joined #gluster
16:39 DaKnOb joined #gluster
16:40 aronnax joined #gluster
16:43 mhulsman joined #gluster
16:44 armin joined #gluster
16:47 mhulsman joined #gluster
16:51 armin joined #gluster
17:00 rwheeler joined #gluster
17:03 farhoriz_ joined #gluster
17:04 farhorizon joined #gluster
17:04 rafi joined #gluster
17:10 kraynor5b joined #gluster
17:11 kraynor5b joined #gluster
17:13 susant joined #gluster
17:13 dnorman joined #gluster
17:14 haomaiwang joined #gluster
17:28 jiffin joined #gluster
17:37 Karan joined #gluster
17:37 mhulsman joined #gluster
17:41 skoduri joined #gluster
17:42 mhulsman joined #gluster
17:48 bmind joined #gluster
17:51 mhulsman joined #gluster
17:51 bmind Hi folks - hoping someone can point me in the right direction? I've been googling with no luck for a while
17:52 mhulsman joined #gluster
17:52 bmind Seems like a stuck self-heal process, statedump shows
17:52 bmind xlator.feature.locks.lock-dump.domain.entrylk.entrylk[0](ACTIVE)=type=ENTRYLK_WRLCK on basename=aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa, pid = 18446744073709551610, owner=cc35b4daae7f0000, client=0x7f56a402d4e0, connect
17:52 bmind This is glusterfs-3.7.14-1.el7.x86_64
17:53 bmind If I try to clear this I get:
17:53 bmind # gluster volume clear-locks gdata0 /opt/cms/qa/flx/site/datastore/d1/e1/41 kind all entry aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa Volume clear-locks unsuccessful clear-locks getxattr command failed. Reason: Numerical r
17:54 bmind Been locked for like four days >.<
17:55 ndevos that is definitely wrong, the "aaaa...." is most likely not a valid filename in your volume, and the PID is waaaaaaay too high
17:58 ndevos you should file a bug, and maybe you can create a core for debugging with the 'gcore' command (from the gdb RM)
17:58 glusterbot https://bugzilla.redhat.com/en​ter_bug.cgi?product=GlusterFS
17:58 ndevos ... (from the gdb RPM) even
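(For anyone hitting the same stale-lock symptom later: a minimal sketch of the data ndevos is asking for, assuming the volume name gdata0 from the paste above and systemd-era defaults; the basename placeholder is hypothetical and should be taken from your own statedump output.)

    # dump the in-memory brick state (lock tables included) for the volume;
    # the output files land under /var/run/gluster/ by default
    gluster volume statedump gdata0

    # capture a core of the brick process(es) with gcore from the gdb RPM,
    # to attach to the bug report
    gcore -o /var/tmp/glusterfsd-core $(pidof glusterfsd)

    # retry clearing the stale entry lock, using the basename shown in the statedump
    gluster volume clear-locks gdata0 /opt/cms/qa/flx/site/datastore/d1/e1/41 kind all entry <basename-from-statedump>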
17:58 farhoriz_ joined #gluster
18:01 ivan_rossi left #gluster
18:03 bmind ok ty
18:04 bmind left #gluster
18:07 msvbhat joined #gluster
18:08 jiffin joined #gluster
18:08 Wizek joined #gluster
18:10 DaKnOb joined #gluster
18:14 haomaiwang joined #gluster
18:18 jiffin joined #gluster
18:26 mhulsman joined #gluster
18:51 farhorizon joined #gluster
19:04 B21956 joined #gluster
19:08 ahino joined #gluster
19:14 haomaiwang joined #gluster
19:16 mhulsman joined #gluster
19:17 mhulsman joined #gluster
19:28 derjohn_mobi joined #gluster
19:29 Gambit15 joined #gluster
19:31 farhorizon joined #gluster
19:35 ahino joined #gluster
19:35 kenansulayman joined #gluster
19:36 jesk joined #gluster
20:04 d0nn1e joined #gluster
20:11 hackman joined #gluster
20:14 haomaiwang joined #gluster
20:19 johnmilton joined #gluster
20:23 dnorman joined #gluster
20:24 haomaiwa_ joined #gluster
20:30 Philambdo joined #gluster
20:36 msvbhat joined #gluster
20:47 kraynor5b joined #gluster
20:49 kraynor5b joined #gluster
20:50 kraynor5b what's new in the world of gluster?
20:54 derjohn_mobi joined #gluster
21:04 panina joined #gluster
21:08 derjohn_mobi joined #gluster
21:14 haomaiwang joined #gluster
21:20 mhulsman joined #gluster
21:42 msvbhat joined #gluster
21:45 dnorman joined #gluster
21:51 Utoxin Is it possible to do a rolling upgrade from 3.5.1 to 3.8.6?
21:51 JoeJulian It should be. Upgrade servers before clients. Test first. ;)
21:52 Utoxin Hmmm. Interesting. I was poking at one of the servers, testing the self-heal disable commands that the upgrade guide suggests, and it claims that I have connected clients that don't support that feature.
21:52 Utoxin So I was starting to think I needed to upgrade clients first.
21:53 JoeJulian It's a bigger problem if you have a client that needs a feature the server doesn't have.
21:53 Utoxin Fair enough. Hmmm.
21:54 JoeJulian If the client doesn't support a feature you're trying to enable, you'll have to wait until everything's upgraded to enable that feature.
21:54 bwerthmann joined #gluster
21:54 Utoxin So... with 3.5.1 clients, do I need to worry about disabling the client self heal then?
21:55 Utoxin (I've been trying to research this upgrade process for nearly a week, and haven't had much luck with searches for these questions yet.)
21:56 JoeJulian There was a problem with queues not getting processed because they were too low of a priority. That problem was identified and I'm pretty sure it was fixed.
21:56 JoeJulian Yeah, no problem. We don't bite often.
21:57 Utoxin Heh. Alright. *ponders a way he can do a test of this process* Gah. I really wish this client had better backups, or a reasonable way to /make/ better backups.
21:58 JoeJulian Backups can be hard, depending on the amount of data and throughput...
21:58 Utoxin Is there a way to tell if it's going to go wrong before things burn to the ground? Current setup is a 2x3 distributed-replicated cluster. If I upgrade one node, will I be able to tell if there's going to be problems?
21:59 JoeJulian Yes. What I encountered with regard to self-heals was that one server would stop responding once a client touched a file that needed healing.
21:59 JoeJulian That should be fairly easy to watch for.
22:00 Utoxin Alright. Thanks for the time. I'm going to consider if I can do /something/ for backup, and then later this evening, bite the bullet and try to upgrade a node. :)
22:01 Utoxin Hmmm. Worst case... rackspace can snapshot these volumes for me, it looks like.
22:02 Utoxin Turn off gluster node, snapshot volume, do upgrade, see if the world burns.
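(A rough sketch of that per-node step, for anyone reading the log later. This is a hedged outline, not the official upgrade guide: it assumes the CentOS Storage SIG packages and systemd (use the service command on EL6), and myvol is a placeholder volume name; as discussed above, with 3.5 clients still attached the self-heal-disable step from the 3.8 notes may refuse to apply.)

    # on one server at a time
    systemctl stop glusterd            # stop the management daemon
    killall glusterfs glusterfsd       # stop brick and self-heal daemon processes
    yum update glusterfs\*             # pull in the newer packages
    systemctl start glusterd           # bricks respawn on start

    # let self-heal catch up before touching the next server
    gluster volume heal myvol info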
22:03 * Utoxin goes idle. Thanks again!
22:04 JoeJulian You're welcome
22:05 farhoriz_ joined #gluster
22:14 haomaiwang joined #gluster
22:16 quest`` joined #gluster
22:16 quest`` hello all
22:19 quest`` left #gluster
22:43 jeremyh joined #gluster
22:46 farhorizon joined #gluster
22:49 farhorizon joined #gluster
22:54 farhorizon joined #gluster
22:58 kraynor5b JoeJulian:  I have a question for you.
22:58 JoeJulian I have an answer, let's see if they match... ;)
22:58 kraynor5b karynor5b aka cpetersen4 ... I changed my nick
22:59 kraynor5b I'm running operating systems off GlusterFS using Ganesha for NFSv3, and I have a bottleneck in that I am using 1Gb ethernet for all of the connections.
23:01 kraynor5b The way that Gluster works is that it writes to all three replicas simultaneously, right?
23:03 kraynor5b So essentially I need to adjust the layer 2 link between the Gluster bricks to be more robust.
23:03 kraynor5b Adding an 8Gb HBA connection between the bricks should do the trick, correct?
23:05 kraynor5b But if the bricks all get written to by the NFS drivers interfacing with GlusterFS, the bricks don't really talk to each other to transfer data, only for integrity and heartbeat, so would I need to enhance the connection running the NFS share?
23:09 DaKnOb joined #gluster
23:11 _DaKnOb_ joined #gluster
23:13 kraynor5b It makes sense to me that when the files are written to the LUN by the NFS driver, the files are transferred to every brick simultaneously and data integrity is kept by the layer 2 link between bricks.  So the connection robustness needs to be on the network used to write to NFS.
23:13 JoeJulian So yes, the gluster client writes to all three replicas simultaneously. To maximize network throughput it should be 3 times faster than your storage.
23:14 JoeJulian When you write to the volume via nfs, your client writes across 1 network connection to what is essentially just another gluster client. That client will need the bandwidth, then, to write those packets to the servers.
23:14 haomaiwang joined #gluster
23:16 DaKnOb joined #gluster
23:16 kraynor5b Yes!  Right, Gluster client on Brick1 is used when the NFS baton is held by that server.  Gluster client on brick1 will cascade the files to the other bricks
23:16 JoeJulian correct
23:16 kraynor5b So yes, the connection must be robust on L2 between the bricks.
23:17 JoeJulian +1
23:18 kraynor5b Yes I got a point!!
23:18 kraynor5b ha
23:19 kraynor5b Yeah if the hypervisor is talking to the NFS share and controlling the OS, the throughput is still only required between the bricks because that is where your disk I/O is occurring.  Cool I think I have my head around it.
23:19 kraynor5b :D
23:19 kraynor5b Thank you for your help once again!
23:20 JoeJulian You're welcome as always. :)
23:39 kraynor5b So if the storage is 6Gb SATA, theoretically what is the approximate maximum throughput we will get from a 6Gb layer 2 connection between bricks?  And how much overhead does the GlusterFS client add, leaving aside the number of bricks?
23:39 kraynor5b JoeJulian:  Sorry that was just 1 more question for today...
23:52 JoeJulian kraynor5b: Typically you can expect around 85% of max bandwidth on appended writes to open files with jumbo frames.
23:53 JoeJulian Assuming no other bottlenecks.
23:55 kraynor5b Right, so if I'm getting about 350Mbps max throughput on my 1Gb connection, it could just be that I don't have jumbo frames enabled and that it's only at 85% of my maximum throughput which would be 800Mbps (considering overhead) on a 1Gb connection.
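(A back-of-the-envelope check on those numbers, hedged because the exact ceiling depends on whether the Ganesha head is itself one of the bricks and on full-duplex behaviour: with a 3-way replica, every application write leaves the NFS head two or three times, so a single 1Gb front-end link caps application writes at roughly

    1000 Mbps / 3 ≈ 333 Mbps   (head is not a brick: three copies over the wire)
    1000 Mbps / 2 = 500 Mbps   (head is a brick: one local copy, two over the wire)

before the ~85% wire-efficiency factor JoeJulian mentions, so the ~350 Mbps observed may already be close to the replica-write ceiling of the link rather than purely a jumbo-frames problem.)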
