
IRC log for #gluster, 2016-01-25


All times are shown in UTC.

Time Nick Message
00:01 haomaiwa_ joined #gluster
00:03 EinstCrazy joined #gluster
00:08 renlor joined #gluster
00:16 zerick joined #gluster
00:24 zerick joined #gluster
01:01 7YUAAQCON joined #gluster
01:11 DV joined #gluster
01:16 zhangjn joined #gluster
01:19 nangthang joined #gluster
01:25 zhangjn joined #gluster
01:32 plarsen joined #gluster
01:33 dlambrig joined #gluster
01:41 JesperA joined #gluster
01:49 Lee1092 joined #gluster
01:55 haomaiwa_ joined #gluster
02:13 calavera joined #gluster
02:22 m0zes joined #gluster
02:25 tree333 joined #gluster
02:26 haomaiwang joined #gluster
02:28 haomaiwang joined #gluster
02:30 harish_ joined #gluster
02:33 EinstCrazy joined #gluster
02:34 RameshN joined #gluster
02:36 calavera joined #gluster
02:45 arcolife joined #gluster
02:54 calavera joined #gluster
03:01 haomaiwa_ joined #gluster
03:03 RameshN_ joined #gluster
03:07 nangthang joined #gluster
03:18 bharata-rao joined #gluster
03:19 calavera joined #gluster
03:30 nbalacha joined #gluster
03:33 RameshN_ joined #gluster
03:46 kdhananjay joined #gluster
03:46 kanagaraj joined #gluster
03:50 RameshN__ joined #gluster
03:51 itisravi joined #gluster
03:52 shubhendu joined #gluster
04:01 haomaiwa_ joined #gluster
04:11 ramteid joined #gluster
04:21 nishanth joined #gluster
04:26 atinm joined #gluster
04:39 nehar joined #gluster
04:49 aravindavk joined #gluster
04:57 karthikfff joined #gluster
05:01 haomaiwa_ joined #gluster
05:02 Manikandan joined #gluster
05:02 skoduri joined #gluster
05:06 baojg joined #gluster
05:07 ppai joined #gluster
05:10 RameshN__ joined #gluster
05:11 ndarshan joined #gluster
05:20 Apeksha joined #gluster
05:23 EinstCrazy joined #gluster
05:33 ashiq joined #gluster
05:33 arcolife joined #gluster
05:34 hgowtham joined #gluster
05:42 R0ok_ joined #gluster
05:45 gem joined #gluster
05:46 skoduri joined #gluster
05:51 RameshN_ joined #gluster
05:56 deepakcs joined #gluster
06:01 haomaiwa_ joined #gluster
06:03 EinstCrazy joined #gluster
06:04 aravindavk joined #gluster
06:05 Bhaskarakiran joined #gluster
06:08 vimal joined #gluster
06:10 nishanth joined #gluster
06:15 pppp joined #gluster
06:18 shubhendu joined #gluster
06:19 ahino joined #gluster
06:21 rafi joined #gluster
06:22 karnan joined #gluster
06:35 baojg joined #gluster
06:41 overclk joined #gluster
06:50 atalur joined #gluster
06:53 SOLDIERz joined #gluster
06:59 spalai joined #gluster
07:01 haomaiwa_ joined #gluster
07:05 EinstCrazy joined #gluster
07:09 nehar joined #gluster
07:11 mobaer joined #gluster
07:15 mhulsman joined #gluster
07:15 baojg joined #gluster
07:17 atalur joined #gluster
07:17 Manikandan joined #gluster
07:17 gem joined #gluster
07:24 baojg joined #gluster
07:26 aravindavk joined #gluster
07:29 [Enrico] joined #gluster
07:35 unlaudable joined #gluster
07:37 atalur joined #gluster
07:39 shubhendu joined #gluster
07:39 nishanth joined #gluster
07:41 [Enrico] joined #gluster
07:43 Manikandan joined #gluster
07:46 b0p joined #gluster
07:50 EinstCrazy joined #gluster
07:57 baojg joined #gluster
07:58 harish joined #gluster
07:59 Saravanakmr joined #gluster
08:01 6A4ABTFOI joined #gluster
08:06 EinstCrazy joined #gluster
08:06 overclk joined #gluster
08:07 arcolife joined #gluster
08:11 mbukatov joined #gluster
08:18 b0p1 joined #gluster
08:22 hos7ein joined #gluster
08:23 aravindavk joined #gluster
08:30 ppai joined #gluster
08:30 EinstCrazy joined #gluster
08:32 the-me joined #gluster
08:34 gem joined #gluster
08:36 ivan_rossi joined #gluster
08:40 ivan_rossi left #gluster
08:43 ahino joined #gluster
08:45 fsimonce joined #gluster
08:55 pppp joined #gluster
08:57 ramteid joined #gluster
09:00 jtux joined #gluster
09:00 zhangjn joined #gluster
09:01 haomaiwa_ joined #gluster
09:04 JesperA joined #gluster
09:04 kdhananjay1 joined #gluster
09:07 kdhananjay joined #gluster
09:10 Bhaskarakiran joined #gluster
09:19 EinstCra_ joined #gluster
09:19 gowtham joined #gluster
09:20 mobaer joined #gluster
09:21 Slashman joined #gluster
09:27 Bhaskarakiran joined #gluster
09:29 kdhananjay joined #gluster
09:29 muneerse joined #gluster
09:38 b0p joined #gluster
09:39 mobaer joined #gluster
09:40 Bhaskarakiran_ joined #gluster
09:48 kdhananjay joined #gluster
09:59 pppp joined #gluster
10:01 haomaiwa_ joined #gluster
10:09 b0p1 joined #gluster
10:15 karnan joined #gluster
10:15 Manikandan joined #gluster
10:18 EinstCrazy joined #gluster
10:18 bfm joined #gluster
10:23 kdhananjay joined #gluster
10:26 kdhananjay joined #gluster
10:32 atinm joined #gluster
10:39 kdhananjay1 joined #gluster
10:39 b0p joined #gluster
10:43 gem joined #gluster
10:43 kdhananjay joined #gluster
10:46 harish joined #gluster
10:48 Bhaskarakiran joined #gluster
10:57 fcami joined #gluster
11:01 17WABM39V joined #gluster
11:09 b0p1 joined #gluster
11:13 b0p joined #gluster
11:13 b0p left #gluster
11:17 kdhananjay1 joined #gluster
11:19 aravindavk joined #gluster
11:23 EinstCrazy joined #gluster
11:27 deepakcs joined #gluster
11:30 gem joined #gluster
11:31 atinm joined #gluster
11:41 ilbot3 joined #gluster
11:41 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
11:44 anil joined #gluster
11:44 b0p1 joined #gluster
11:44 aravindavk joined #gluster
11:46 bfoster joined #gluster
11:49 rafi joined #gluster
11:51 aravindavk joined #gluster
11:51 deepakcs joined #gluster
11:52 atalur joined #gluster
12:01 haomaiwa_ joined #gluster
12:04 bluenemo joined #gluster
12:09 scobanx joined #gluster
12:10 scobanx I would like to know if it is possible to reconstruct data from an EC volume without gluster online?
12:12 rwheeler joined #gluster
12:14 b0p joined #gluster
12:20 scobanx Anyone online?
12:27 MessedUpHare joined #gluster
12:28 ppai joined #gluster
12:39 hos7ein_ joined #gluster
12:57 nehar joined #gluster
13:06 mobaer joined #gluster
13:08 ahino1 joined #gluster
13:25 dlambrig joined #gluster
13:28 ppai joined #gluster
13:31 haomaiwa_ joined #gluster
13:34 dlambrig joined #gluster
13:39 B21956 joined #gluster
13:42 nehar joined #gluster
13:45 plarsen joined #gluster
13:50 ira joined #gluster
13:52 rafi1 joined #gluster
13:55 spalai left #gluster
14:00 unclemarc joined #gluster
14:01 18WABV7ST joined #gluster
14:03 dron23 joined #gluster
14:03 enoch joined #gluster
14:03 enoch joined #gluster
14:03 enoch hi all, i have a question
14:03 enoch does gluster work with different filesystems?
14:04 dron23 hello :-)
14:04 enoch i have four HDs mounted via NTFS and i want to create one logical volume, is it possible?
14:05 dron23 enoch: as far as I know, glusterfs works on top of a posix fs
14:05 dron23 btw. I would also like to ask one question :-)
14:05 overclk joined #gluster
14:06 dron23 which translators are enabled by default on client side?
14:06 dron23 or how can I display all enabled translators on client side...
14:07 julim joined #gluster
14:11 dlambrig joined #gluster
14:12 nbalacha joined #gluster
14:22 Manikandan joined #gluster
14:28 ivan_rossi joined #gluster
14:28 deniszh joined #gluster
14:29 mdavidson joined #gluster
14:38 chirino_m joined #gluster
14:39 dgandhi joined #gluster
14:40 shyam joined #gluster
14:40 klaxa joined #gluster
14:42 Saravanakmr joined #gluster
14:50 rwheeler joined #gluster
14:51 hamiller joined #gluster
14:52 illogik joined #gluster
14:55 illogik joined #gluster
14:57 ahino joined #gluster
14:58 skylar joined #gluster
14:59 ekuric joined #gluster
15:02 luizcpg joined #gluster
15:04 luizcpg Hi, can someone say when gluster 3.7.7 will be released?
15:04 luizcpg thanks
15:09 nottc joined #gluster
15:11 illogik_ joined #gluster
15:13 Manikandan joined #gluster
15:13 purpleidea joined #gluster
15:13 purpleidea joined #gluster
15:16 illogik joined #gluster
15:21 plarsen joined #gluster
15:32 baojg joined #gluster
15:36 farhorizon joined #gluster
15:37 shubhendu joined #gluster
15:39 hagarth joined #gluster
15:40 wushudoin joined #gluster
15:41 wushudoin joined #gluster
15:49 neofob joined #gluster
16:05 haomaiwa_ joined #gluster
16:28 mlhess joined #gluster
16:30 spalai joined #gluster
16:33 nickage joined #gluster
16:34 pppp joined #gluster
16:39 rafi joined #gluster
16:42 Manikandan joined #gluster
16:43 gem joined #gluster
16:45 mlhess joined #gluster
16:52 bowhunter joined #gluster
16:54 baojg joined #gluster
17:01 RayTrace_ joined #gluster
17:01 haomaiwa_ joined #gluster
17:11 F2Knight joined #gluster
17:14 MACscr joined #gluster
17:14 MACscr joined #gluster
17:14 MACscr joined #gluster
17:17 spalai left #gluster
17:18 spalai joined #gluster
17:18 calavera joined #gluster
17:22 illogik_ joined #gluster
17:26 nickage should clients be on the same version as server?
17:29 Manikandan joined #gluster
17:31 mhulsman joined #gluster
17:32 calavera joined #gluster
17:33 ivan_rossi left #gluster
17:33 julim joined #gluster
17:33 portante joined #gluster
17:34 kkeithley joined #gluster
17:34 hagarth joined #gluster
17:36 shubhendu_ joined #gluster
17:36 ndk joined #gluster
17:38 bfoster joined #gluster
17:40 raghu joined #gluster
17:48 shubhendu__ joined #gluster
17:52 gem joined #gluster
17:53 rwheeler joined #gluster
17:57 julim_ joined #gluster
17:59 RayTrace_ joined #gluster
17:59 enoch joined #gluster
18:01 calavera joined #gluster
18:01 haomaiwa_ joined #gluster
18:18 dron23 nickage: I believe they should
18:24 baojg joined #gluster
18:27 calavera joined #gluster
18:30 Rapture joined #gluster
18:32 deniszh joined #gluster
18:45 bfm joined #gluster
18:46 EinstCrazy joined #gluster
18:49 shubhendu joined #gluster
18:52 enoch joined #gluster
18:54 spalai joined #gluster
18:55 JoeJulian dron23: You can see how glusterd builds the translator graph by looking in /var/lib/glusterd/vols/$volume_name. The .vol files are the brick and client configurations.
18:55 JoeJulian @later tell luizcpg gluster 3.7.7 will be released when it's ready. Don't rush it, some important bugs are being addressed.
18:55 glusterbot JoeJulian: The operation succeeded.
18:59 dron23 JoeJulian: thanks! I was looking for these settings on client :-)
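The translator graph JoeJulian points at lives in the volfiles glusterd generates on every server. A minimal sketch of inspecting it, assuming a hypothetical volume named myvol (the exact .vol file names vary slightly by release):

    # volfiles glusterd generated for the volume
    ls /var/lib/glusterd/vols/myvol/

    # translators stacked into the client (fuse) graph
    grep -E '^volume |type ' /var/lib/glusterd/vols/myvol/myvol-fuse.vol

Each "volume ... type ... end-volume" stanza in the client volfile is one translator the client loads, including the performance translators enabled by default.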
19:01 haomaiwa_ joined #gluster
19:02 dron23 JoeJulian: I did some smallfile benchmarks and it looks like the "read" test is faster on gluster than on local storage (but only read... the others were much slower) and I am trying to figure out why...
19:03 dron23 JoeJulian: probably some performance oriented translators like quick-read or io-cache...
19:04 JoeJulian Using the fuse mount?
19:06 lalatenduM joined #gluster
19:08 dron23 JoeJulian: yep
19:09 JoeJulian Interesting.
19:10 dron23 JoeJulian: hmm, maybe I did it wrong...? I'll doublecheck it...
19:12 JoeJulian I've never benchmarked individual operations. It's possible, just not something I would have anticipated.
19:21 RayTrace_ joined #gluster
19:26 nickage what is better nfs or glusterfs mount?
19:26 JoeJulian Depends on your use case.
19:26 twaddle Does anyone have any tips for tuning Gluster in AWS?
19:26 nickage oh, looks like I found your blog :)
19:26 JoeJulian I generally default to fuse unless I need the kernel cache and am not worried about consistency.
19:27 twaddle I've tested it on T2 and C4 instances, the performance is pretty horrible
19:27 nickage is that you ? https://joejulian.name/blog/nfs-mount-for-glusterfs-gives-better-read-performance-for-small-files/
19:27 glusterbot Title: NFS mount for GlusterFS gives better read performance for small files? (at joejulian.name)
19:27 JoeJulian Yep
19:27 nickage nice
19:29 twaddle It might be our use case, we're attempting to use Gluster to handle php and assets, about 340k files (16GB) for a wordpress site.
19:30 twaddle A single T2.large is quicker than a pair of the EBS-optimised c4.xlarge instances running gluster
19:33 twaddle is that a scenario where we're better off using nfs JoeJulian?
19:34 dron23 JoeJulian, nickage: nice, I also read this today :-)
19:34 JoeJulian I prefer the optimizations I mention at ,,(php)
19:34 glusterbot (#1) php calls the stat() system call for every include. This triggers a self-heal check which makes most php software slow as they include hundreds of small files. See http://joejulian.name/blog/optimizing-web-performance-with-glusterfs/ for details., or (#2) It could also be worth mounting fuse with glusterfs --attribute-timeout=HIGH --entry-timeout=HIGH --negative-timeout=HIGH --fopen-keep-cache
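The options in factoid #2 correspond to mount options the fuse mount helper passes through to the client. A hedged sketch, assuming a hypothetical volume myvol on server1 and placeholder timeout values (option availability depends on the glusterfs version; the same settings can be given as long options to the glusterfs binary, as the factoid shows):

    mount -t glusterfs \
      -o attribute-timeout=600,entry-timeout=600,negative-timeout=600,fopen-keep-cache \
      server1:/myvol /mnt/myvol

Longer timeouts let the kernel cache attributes and dentries more aggressively, trading consistency for fewer lookups on small-file workloads like php includes.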
19:37 mhulsman joined #gluster
19:38 dron23 JoeJulian: if you are interested... I have distribute-replicated volume (2x2=4 bricks on 4 servers). smallfile create performance is 190 files/s, read performance is 3000 files/s. on local storage I get for create 66862 files/s and for read 1100 files/s
19:38 dron23 smallfile_cli.py --top /mnt/vol4/smallfile1 --threads 8 --file-size 4 --files 10000 --response-times Y --operation create
19:38 dron23 smallfile_cli.py --top /mnt/vol4/smallfile1 --threads 8 --file-size 4 --files 10000 --response-times Y --operation read
19:41 nickage what is going to happen if I have two or three clients trying to write the same file at the same time? is there latency, will there be some conflict, or will they end up out of sync?
19:43 twaddle JoeJulian: are there any alternatives to apc for php7.0 users? AFAIK APC is officially dead now isn't it?
19:46 JoeJulian To be honest, I haven't touched PHP in a couple years. When I looked at the source and how little regard it has for data integrity, I abandoned any project I was involved in that used it.
19:49 EinstCrazy joined #gluster
19:49 ahino joined #gluster
19:50 enoch joined #gluster
19:51 twaddle Fair enough, so is gluster just a bad fit for this kind of application then?
19:53 bowhunter joined #gluster
19:54 baojg joined #gluster
20:01 haomaiwang joined #gluster
20:02 enoch joined #gluster
20:02 ovaistariq joined #gluster
20:04 rwheeler joined #gluster
20:10 mobaer joined #gluster
20:10 JoeJulian I most certainly wouldn't have you put those words in my mouth.
20:11 JoeJulian My answer, as always, is "it depends".
20:11 baojg joined #gluster
20:11 hagarth JoeJulian: did you happen to see an email from Pranith last week?
20:11 JoeJulian If you have no need for clustered storage, you don't need redundancy or resiliency, then it quite possibly may not be a good fit.
20:11 enoch joined #gluster
20:12 JoeJulian I did. Responded directly and forgot to continue the cc list. I'll forward you my response.
20:12 hagarth JoeJulian: cool, thank you!
20:13 JoeJulian Oh, I wouldn't get *that* excited about it. You haven't read it yet. ;)
20:14 hagarth JoeJulian: lol
20:18 twaddle Joejulian: Our two choices are to manage code per-server, using git, with assets in S3, or manage code and assets on Gluster.
20:20 JoeJulian If I were doing it, I would do salt+git+s3 for maximum scaleability with a good load balancer, and clustered redis for session data.
20:21 calavera joined #gluster
20:32 nickage ok, guys, I found something interesting: one of my clients is getting "Transport endpoint is not connected" and fails to write a file
20:33 JoeJulian nickage: check 'gluster volume status'. You likely either have a brick down or a network problem.
20:35 sagarhani joined #gluster
20:36 nickage JoeJulian: I was tracing the gluster client mount process for a volume that is healthy, and on the other client I don't see this issue at all
20:42 nickage JoeJulian: oh, you are right, copied wrong pid
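For reference, the checks JoeJulian suggests boil down to a couple of commands; a minimal sketch, assuming a hypothetical volume myvol mounted at /mnt/myvol (the fuse client log file is named after the mount path):

    # per-brick status: online flag, TCP port, PID
    gluster volume status myvol

    # peer connectivity as seen from this server
    gluster peer status

    # on the client, the mount log usually names the brick it lost
    tail -n 50 /var/log/glusterfs/mnt-myvol.log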
21:00 calavera joined #gluster
21:01 enoch joined #gluster
21:01 haomaiwa_ joined #gluster
21:01 tswartz joined #gluster
21:07 enoch joined #gluster
21:09 EinstCrazy joined #gluster
21:11 Logos01 Howdy. I'm hoping someone can help me figure this out -- I've got a distributed gluster volume setup w/ three brick-servers, set to 2x duplication.  My clients are getting huge numbers of "XDR decoding failed [Invalid argument]" errors.
21:12 Logos01 The clients are mounting fine, but this is happening on file access by a process running as the root user on the client systems.
21:23 twaddle JoeJulian salt?
21:26 baojg joined #gluster
21:28 JoeJulian twaddle: http://saltstack.com/community/
21:29 JoeJulian Logos01: typically that's a version mismatch between server and client.
21:29 Logos01 JoeJulian: They're running 3.7.6-1 and have been since I created the volume.
21:30 Logos01 Servers are el7 and clients are el6 but the RPMs are 3.7.6-1
21:30 Logos01 So it's not a version mismatch -- they're the same.
21:31 JoeJulian hmm
21:32 Logos01 So I'm back to: what does it actually *mean* ?
21:32 JoeJulian Can I see the entire line with that message?
21:32 Logos01 I've got thousands of them. But yeah, just a sec.
21:33 Logos01 http://fpaste.org/314663/37575801/
21:33 glusterbot Title: #314663 Fedora Project Pastebin (at fpaste.org)
21:33 Logos01 I don't see these errors on the server-side, and the clients can create, modify, delete, list, and move files in the gluster mountpoints *perfectly* when I do it myself.
21:42 JoeJulian Pfft, that's nearly useless. That error can occur for several reasons, none of which are reported by a log entry.
21:44 JoeJulian Logos01: That happens on all your clients?
21:44 neofob joined #gluster
21:45 Logos01 JoeJulian: Yeah
21:45 JoeJulian And that log you pasted, is that verbatim, or did you filter for errors?
21:46 cliluw joined #gluster
21:47 Logos01 I grepped ' E '
21:47 JoeJulian Could I get an unfiltered chunk. It's possible that something else happens that results in this error.
21:48 hagarth joined #gluster
21:49 Logos01 Just a sec
21:49 Logos01 http://fpaste.org/314672/53758566/
21:49 glusterbot Title: #314672 Fedora Project Pastebin (at fpaste.org)
21:52 Logos01 I just tried manually invoking "gluster volume heal jbcache_context" and now I'm seeing different behavior on the clients...
21:53 Logos01 Significant spike of messages like "[2016-01-25 21:52:47.597361] W [fuse-bridge.c:2211:fuse_readv_cbk] 0-glusterfs-fuse: 4410247: READ => -1 (Input/output error)"
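When reads start failing with EIO like this during a heal, the heal status of the volume is worth checking alongside the client log. A minimal sketch, reusing the volume name from Logos01's command (output and subcommands differ somewhat for disperse volumes on 3.7):

    # entries still pending heal, listed per brick
    gluster volume heal jbcache_context info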
21:58 enoch joined #gluster
22:00 JoeJulian Logos01: "Executing operation with some subvolumes unavailable" suggests that you have (a) brick(s) offline.
22:01 haomaiwa_ joined #gluster
22:02 calavera joined #gluster
22:03 Logos01 None of the three are.
22:06 Logos01 All three bricks are up.
22:07 JoeJulian Oh, wait... is this a disburse volume?
22:07 JoeJulian disperse
22:07 JoeJulian Why isn't my spell checking checking... I hate making mistrakes.
22:10 abyss^ joined #gluster
22:11 JoeJulian Logos01: Let's take a look at volume info, please.
22:11 calavera joined #gluster
22:12 monotek joined #gluster
22:12 Logos01 Yeah it's a disbursed volume.
22:14 deniszh joined #gluster
22:14 Logos01 http://fpaste.org/314679/53760070/
22:14 glusterbot Title: #314679 Fedora Project Pastebin (at fpaste.org)
22:17 Logos01 And now I'm getting new errors...
22:17 Logos01 [2016-01-25 22:16:59.771274] E [socket.c:2863:socket_connect] (-->/usr/lib64/libglusterfs.so.0(gf_timer_proc+0x113) [0x7fcc760cd4e3] -->/usr/lib64/libgfrpc.so.0(rpc_clnt_reconnect+0xd9) [0x7fcc75e79ea9] -->/usr/lib64/glusterfs/3.7.6/rpc-transport/socket.so(+0x86ec) [0x7fcc6a9cf6ec] ) 0-socket: invalid argument: this->private [Invalid argument]
22:17 glusterbot Logos01: ('s karma is now -121
22:17 Logos01 ... that was one line.
22:21 Logos01 gluster volume status output: http://fpaste.org/314683/14537604/
22:21 glusterbot Title: #314683 Fedora Project Pastebin (at fpaste.org)
22:21 JoeJulian That says that this->private is a null pointer.
22:22 JoeJulian Which shouldn't happen.
22:25 JoeJulian So the "fix" is to unmount and mount. If we knew how it got into this state, we could look for the bug and see if it was fixed, or file a bug report.
22:25 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
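The unmount/remount JoeJulian describes is purely client-side; a minimal sketch, assuming a hypothetical fuse mount of myvol from server1:

    umount /mnt/myvol      # add -l if processes still hold files open
    mount -t glusterfs server1:/myvol /mnt/myvol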
22:29 illogik joined #gluster
22:29 Logos01 I've done that before.
22:30 Logos01 (To no avail that is)
22:31 gildub joined #gluster
22:34 calavera_ joined #gluster
22:37 Logos01 I'm going to try a full unmount of all clients simultaneously and see if that helps.
22:37 Logos01 (This is something I can only do because my production environment is down anyhow. >_<)
22:37 coredump joined #gluster
22:38 JoeJulian Ugh
22:40 JoeJulian hagarth: The this->private pointer in socket.c on the client can't be null due to anything server-side, can it?
22:42 JoeJulian Logos01: If you rotate your logs before mounting, sharing the full log with the gluster-devel mailing list as soon as you start getting an error might be helpful.
22:43 hagarth JoeJulian: right, unless it has not been inited
22:45 Logos01 Those were on the server side.
22:45 JoeJulian Oh, well that's different.
22:46 Logos01 Okay ... I've stopped all three volumes, I'm now rebooting all three brick servers.
22:46 Logos01 Gonna wait until they come back up, clear out the /var/log/glusterfs directory and then start the bricks.
22:46 JoeJulian Sounds like a plan.
22:47 JoeJulian This looks like bug 1272940
22:47 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1272940 high, unspecified, ---, bugs, NEW , Shd can't reconnect after ping-timeout (error in polling loop; invalid argument: this->private)
22:48 JoeJulian Oh, and glustershd is a client, so it's still client-side.
22:49 JoeJulian Heh, gotta love how I'm a level of triage referenced in the bug report. :D
22:51 JoeJulian iirc, he disabled ssl for now.
22:51 Logos01 http://fpaste.org/314689/37622591/
22:51 glusterbot Title: #314689 Fedora Project Pastebin (at fpaste.org)
22:51 Logos01 ...
22:51 Logos01 I do in fact have ssl enabled.
22:51 Logos01 Do you think it's possible that's the problem?
22:52 JoeJulian Well, it's *a* problem. Add yourself to the cc list on bug 1272940
22:52 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1272940 high, unspecified, ---, bugs, NEW , Shd can't reconnect after ping-timeout (error in polling loop; invalid argument: this->private)
22:54 JoeJulian And I don't know what "Mismatching xdata in answers" is, but it looks like the equivalent of an out-of-sync raid.
22:55 JoeJulian Would be nice if the error message said what it was trying to look up.
22:55 Logos01 JoeJulian: It's apparently happening during heal
22:55 Logos01 (As heals are failing)
22:56 baojg joined #gluster
22:56 JoeJulian As new as EC is, I'd also recommend emailing gluster-devel for that one.
22:56 Logos01 So just to be clear, that other person was able to bypass the problem of not being able to reconnect to the gluster volumes by disabling the use of SSL?
22:56 JoeJulian Correct
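The workaround referenced in bug 1272940 amounts to switching the volume's TLS options back off. A hedged sketch, reusing the volume name from Logos01's earlier heal command; note that management-path encryption (enabled by the /var/lib/glusterd/secure-access file) is separate from these per-volume options:

    gluster volume set jbcache_context client.ssl off
    gluster volume set jbcache_context server.ssl off
    # clients pick up the regenerated volfiles on remount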
22:57 Logos01 Welp, I was planning on setting up a tincd cluster for these things anyhow I guess...
22:57 Logos01 That's gonna play havoc with the dns resolution though isn't it.
22:58 JoeJulian I've never played with tincd. Not sure. Maybe use mdns?
22:59 JoeJulian systemd-resolved
23:00 TimRice joined #gluster
23:01 haomaiwang joined #gluster
23:02 Logos01 JoeJulian: tincd is an "sslvpn"
23:02 Logos01 Does mesh-based connections.
23:02 JoeJulian Yeah, I googled it right away. :)
23:02 Logos01 The notion that makes tincd a little better than other VPN engines is that it's mesh-based, i.e., point-to-point with multiple points.
23:03 Logos01 So you don't have to worry about a central server hosting all traffic.
23:03 Logos01 Unfortunately my client machines are all CentOS 6 and my servers are CentOS 7
23:03 JoeJulian Yeah, looks nifty. I can think of past use cases I had where it would have been nice to have.
23:03 Logos01 Which ... is a different issue.
23:04 Logos01 I don't think I can recover these volumes right now though...
23:04 JoeJulian Well, you can still use systemd-resolved on the centos 7 boxes to provide mdns. The centos6 boxes would have to use a separate mdns service, but it's all the same protocol.
23:04 TimRice hey... can anyone offer any advice? :) I'm trying to create a cloned volume from a snapshot, and re-use an old volume name.. basically creating a clone of production data for a staging environment... and the snapshot clone fails with: "snapshot clone: failed: Commit failed on localhost. Please check log file for details."
23:04 Logos01 The major problem is that tincd isn't compatible w/ el6.
23:05 TimRice and the log says "[MSGID: 106098] [glusterd-snapshot-utils.c:2700:glusterd_mount_lvm_snapshot] 0-management: mounting the snapshot logical device  /dev/vg02_bricks/clone_blogs_dir_ext4_stage_0 failed (error: Bad file descriptor)"
23:05 JoeJulian Man.... He starts off with asking for "any" advice, and I was all primed to give him some, then he goes and qualifies it.
23:05 TimRice if i clone to another volume name that has not previously been used, it works fine
23:05 Logos01 What I really don't get is -- if the volumes are so hosed that they can't be healed, then why can I still mount them and interact normally?
23:06 JoeJulian Logos01: good question. And I'm not entirely sure that they can't be healed. EC's new.
23:07 JoeJulian TimRice: This "old" volume name is no longer in use?
23:07 TimRice correct, its been deleted
23:08 Rapture joined #gluster
23:09 calavera joined #gluster
23:12 Logos01 JoeJulian: It's curious though because when it's just lil' ol' me interacting with the volume, I can mount and unmount, read and write, and I don't see any problems.
23:12 JoeJulian TimRice: Well glusterd-snapshot-utils.c just runs the "mount" command. Debug logging will tell you the exact command that's failing. That might give you a better clue as to why it's failing.
23:13 Logos01 (the logs show all sorts of things but that's different)
23:13 TimRice alright, I'll have a look at the debug logging, thanks!
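The debug logging JoeJulian means here is glusterd's own log, since glusterd is what runs the snapshot mount. A hedged sketch of one way to capture it (flag and service names may vary by distribution; glusterd normally logs to /var/log/glusterfs/etc-glusterfs-glusterd.vol.log, and the clone/snap names below are placeholders):

    systemctl stop glusterd          # or: service glusterd stop
    glusterd --log-level=DEBUG
    gluster snapshot clone <clone-name> <snap-name>   # retry the failing clone
    grep -i mount /var/log/glusterfs/etc-glusterfs-glusterd.vol.log | tail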
23:13 abyss^ joined #gluster
23:16 shyam left #gluster
23:26 Rapture joined #gluster
23:31 EinstCrazy joined #gluster
23:39 mbukatov joined #gluster
23:41 moss joined #gluster
23:42 ovaistariq joined #gluster
23:43 twaddle joined #gluster
23:47 JoeJulian Just in case anybody's hungry and wants to help my daughter: https://digitalcookie.girlscouts.org/scout/tricia836265
23:47 glusterbot Title: Girl Landing Page | Girl Scouts Site US (at digitalcookie.girlscouts.org)
23:47 moss lol
23:48 moss wow that is pretty amazing
23:48 moss JoeJulian: I'm buying some thin mints
23:48 JoeJulian Woo! :)
23:49 moss JoeJulian: what are your favorites?
23:49 JoeJulian thin mints.
23:49 ovaistariq joined #gluster
23:49 JoeJulian In the freezer.
23:51 moss boom
23:51 moss order placed
23:51 moss 4 boxes :P
23:51 JoeJulian Thanks! :)
23:51 moss You're welcome.
23:52 moss Thank you for always helping me when I need it :P
23:52 JoeJulian Always happy to try.
23:57 illogik joined #gluster
23:57 EinstCrazy joined #gluster
