
IRC log for #gluster, 2014-05-01


All times shown according to UTC.

Time Nick Message
00:10 JoeJulian semiosis, systemonkey: I think I've duplicated the bug. I cheated though so I'll have to try again without cheating.
00:12 jag3773 joined #gluster
00:17 gdubreui joined #gluster
00:21 systemonkey JoeJulian: that's good news in an interesting way.
00:46 systemonkey JoeJulian: Thanks again for the valuable insights into my situation.
01:21 plarsen joined #gluster
01:25 jmarley joined #gluster
01:31 itisravi_afk joined #gluster
01:40 glusterbot New news from newglusterbugs: [Bug 1093217] [RFE] Gluster module (purpleidea) to support HA installations using Pacemaker <https://bugzilla.redhat.com/show_bug.cgi?id=1093217>
01:41 crashmag joined #gluster
01:41 fidevo joined #gluster
01:42 hagarth joined #gluster
01:47 _BryanHM_ joined #gluster
01:48 jskinner joined #gluster
01:52 theron joined #gluster
01:59 vpshastry joined #gluster
02:02 badone_ joined #gluster
02:10 asku joined #gluster
02:11 itisravi_afk joined #gluster
02:13 jag3773 joined #gluster
02:19 asku joined #gluster
02:35 itisravi_afk joined #gluster
02:41 ira joined #gluster
02:51 haomaiw__ joined #gluster
02:51 vpshastry joined #gluster
02:54 lalatenduM joined #gluster
02:57 davinder joined #gluster
03:03 itisravi_afk joined #gluster
03:05 RameshN joined #gluster
03:34 itisravi_afk joined #gluster
03:36 Durzo JoeJulian, are you still around?
03:39 Durzo it seems you cannot downgrade 3.5.0 to 3.4.3
03:39 Durzo glusterd fails to start the management volume and exits
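A likely explanation for the downgrade failure (an assumption, not confirmed in the log): 3.5.0 raises the on-disk operating-version in glusterd's state file, and a 3.4.3 glusterd refuses to load a value above its own maximum. The stored value can be checked like this:

```shell
# glusterd records the cluster operating-version here; a value higher than
# what 3.4.3 supports would make the older daemon bail out at startup
grep operating-version /var/lib/glusterd/glusterd.info
```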
03:44 aravindavk joined #gluster
03:57 jag3773 joined #gluster
04:05 ira joined #gluster
04:12 sprachgenerator joined #gluster
04:27 Guest10117 joined #gluster
05:05 davinder joined #gluster
05:13 badone_ joined #gluster
05:39 ricky-ti1 joined #gluster
06:24 badone_ joined #gluster
06:25 nishanth joined #gluster
06:25 nthomas joined #gluster
06:28 RameshN joined #gluster
06:37 RameshN joined #gluster
06:39 ekuric joined #gluster
06:39 Andy5_ joined #gluster
06:39 badone_ joined #gluster
06:41 AaronGr joined #gluster
06:50 ThatGraemeGuy joined #gluster
06:58 edward2 joined #gluster
07:07 ctria joined #gluster
07:22 ricky-ticky1 joined #gluster
07:33 Andy5 joined #gluster
07:57 vpshastry left #gluster
08:51 saravanakumar joined #gluster
09:36 Chewi joined #gluster
09:46 itisravi_afk joined #gluster
09:59 qdk joined #gluster
10:05 RameshN joined #gluster
10:09 RameshN joined #gluster
10:15 klaas joined #gluster
10:20 RameshN joined #gluster
10:21 tryggvil joined #gluster
10:25 RameshN joined #gluster
10:25 rjoseph joined #gluster
10:34 kkeithley1 joined #gluster
10:36 RameshN joined #gluster
10:39 RameshN joined #gluster
10:42 RameshN joined #gluster
10:42 glusterbot New news from newglusterbugs: [Bug 1039291] glusterfs-libs-3.5.0-0.1.qa3.fc21.x86_64.rpm requires rsyslog-mmjsonparse; this brings in rsyslog, ... <https://bugzilla.redhat.com/show_bug.cgi?id=1039291>
10:45 RameshN joined #gluster
10:45 ekuric joined #gluster
10:46 ira joined #gluster
10:47 RameshN joined #gluster
10:49 ekuric joined #gluster
11:03 gdubreui joined #gluster
11:24 tryggvil joined #gluster
11:26 last joined #gluster
11:28 last left #gluster
11:30 TvL2386 joined #gluster
11:31 Norky joined #gluster
11:43 glusterbot New news from newglusterbugs: [Bug 1093324] File creation fails on the NFS mount point while adding a brick to the same volume <https://bugzilla.redhat.com/show_bug.cgi?id=1093324>
11:52 cvdyoung joined #gluster
12:00 P0w3r3d joined #gluster
12:02 tdasilva joined #gluster
12:10 edward2 joined #gluster
12:12 chirino joined #gluster
12:17 TvL2386 joined #gluster
12:18 jmarley joined #gluster
12:18 jmarley joined #gluster
12:19 itisravi_afk joined #gluster
12:23 Debolaz2_ joined #gluster
12:25 vpshastry joined #gluster
12:36 B21956 joined #gluster
12:39 itisravi_afk joined #gluster
12:41 sroy joined #gluster
12:51 ron-slc joined #gluster
13:07 japuzzo joined #gluster
13:09 plarsen joined #gluster
13:12 coredump joined #gluster
13:13 jobewan joined #gluster
13:13 Andy5_ joined #gluster
13:19 theron joined #gluster
13:37 nishanth joined #gluster
13:48 LoudNoises joined #gluster
13:49 mjsmith2 joined #gluster
13:51 DV joined #gluster
13:53 dbruhn joined #gluster
14:00 theron joined #gluster
14:08 MrAbaddon joined #gluster
14:15 jag3773 joined #gluster
14:19 mjsmith2 joined #gluster
14:27 failshell joined #gluster
14:35 rwheeler joined #gluster
14:47 RameshN joined #gluster
14:48 siel joined #gluster
14:49 coredump joined #gluster
14:50 RameshN joined #gluster
14:51 RameshN joined #gluster
14:56 [o__o] joined #gluster
14:57 ndk joined #gluster
15:06 gmcwhistler joined #gluster
15:06 theron joined #gluster
15:13 RameshN joined #gluster
15:19 daMaestro joined #gluster
15:23 kaptk2 joined #gluster
15:35 lmickh joined #gluster
15:36 wushudoin joined #gluster
15:37 scuttle_ joined #gluster
15:53 jmarley joined #gluster
15:53 jmarley joined #gluster
16:05 lyang0 joined #gluster
16:08 sadbox Is there a reccomended way of setting up a VIP for the NFS shares?
16:09 zaitcev joined #gluster
16:09 _Bryan_ joined #gluster
16:10 ndevos sadbox: I use pacemaker, but you can also use ctdb
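The pacemaker route ndevos mentions usually boils down to a floating IPaddr2 resource that NFS clients mount; a minimal sketch (hypothetical address and resource name, pcs syntax assumed, untested here):

```shell
# floating VIP for the NFS mounts; pacemaker moves it to a surviving
# node when the current holder fails
pcs resource create gluster_nfs_vip ocf:heartbeat:IPaddr2 \
    ip=192.0.2.10 cidr_netmask=24 op monitor interval=10s
# location/colocation constraints tying the VIP to a healthy gluster NFS
# service are omitted; ctdb achieves the same with a public_addresses file
```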
16:12 jag3773 joined #gluster
16:15 Mo___ joined #gluster
16:25 mjsmith2 joined #gluster
16:25 Guest10117 joined #gluster
16:26 diegows joined #gluster
16:32 jmarley joined #gluster
16:32 jmarley joined #gluster
16:34 ctria joined #gluster
16:37 LessSeen joined #gluster
16:40 jbd1 ndevos: So i didn't succeed at getting mod_proxy_glusterfs to work
16:40 ndevos jbd1: hmm, any particular errors?
16:41 jbd1 ndevos: the last thing in the apache error log was [Tue Apr 29 12:36:16 2014] [debug] mod_proxy_gluster.c(598): [client 127.0.0.1] connecting gluster://192.168.56.51/UDS/testfile.jpg to 192.168.56.51:0 and then it hangs indefinitely
16:41 ctria joined #gluster
16:41 jbd1 ndevos: should I be specifying a port?
16:41 SFLimey joined #gluster
16:42 ndevos jbd1: no, port 0 picks the default of 24007 (well, according to the libgfapi docs)
16:43 jbd1 ndevos: I can run tcpdumps and strace and such if you like
16:43 ndevos jbd1: did you enable insecure access to glusterd and the bricks?
16:43 jbd1 ndevos: yes, per the docs
16:44 ndevos jbd1: and stopped/started the volume too?
16:44 jbd1 ndevos: I tried it without the insecure stuff and I got a proper error, so I don't think it's that
16:44 ndevos ah, ok
16:44 jbd1 ndevos: rebooted the whole cluster (since this is in the lab)
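The insecure-access settings being discussed map to two options, one per volume and one for glusterd itself (the volume name UDS is taken from the URL in the log above; treat the rest as a sketch):

```shell
# let clients connecting from unprivileged (>1024) ports talk to the bricks
gluster volume set UDS server.allow-insecure on

# the glusterd side: add this line to /etc/glusterfs/glusterd.vol
#   option rpc-auth-allow-insecure on
# then restart glusterd and cycle the volume so the bricks pick it up
gluster volume stop UDS && gluster volume start UDS
```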
16:45 ndevos jbd1: yeah, a tcpdump would work, we can then see if the module receives the .vol file and contacts the bricks, or where it starts to fail
16:45 jbd1 ndevos: ok, I'll get it set up real quick
16:46 sprachgenerator joined #gluster
16:47 ndevos jbd1: something like service httpd stop ; tcpdump -s0 -i any -w /tmp/mod_proxy_gluster.pcap 'tcp and (port 24007 or portrange 49150-50000)' & sleep 3 ; service httpd start
16:48 ndevos (thats not a tested command)
16:48 ndevos and, 'service' might not be a command on ubuntu, so you'll need to replace that
16:53 semiosis :O
16:54 semiosis service foo {start|stop} works great on debians
16:54 semiosis including ubuntu
16:54 jbd1 ndevos: I did something similar, have a dump file. I'm about to put the ascii version on a paste site
16:54 jbd1 @paste
16:54 glusterbot jbd1: For RPM based distros you can yum install fpaste, for debian and ubuntu it's pastebinit. Then you can easily pipe command output to [f] paste [binit] and it'll give you a URL.
16:57 rotbeard joined #gluster
16:57 jbd1 ndevos: I started httpd, then started the dump, then ran a request (which hung), then stopped the dump.  See http://paste.ubuntu.com/7374218/
16:57 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
16:57 gmcwhistler joined #gluster
16:59 jbd1 ndevos: more detail here: http://paste.ubuntu.com/7374230/
16:59 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
17:00 ndevos jbd1: can you pass that through tshark maybe?
17:01 jbd1 ndevos: sure
17:01 ndevos jbd1: like 'tshark -r $pcap -2 -Y rpc'
17:02 ndevos jbd1: at least apache contacts the bricks, it's just not clear what it's doing; tshark is like wireshark and can inspect the contents
17:03 lalatenduM joined #gluster
17:03 jbd1 ndevos: I understand.  I'm now sorting what you want with -2 as my tshark doesn't have it
17:04 jbd1 I'm running tshark 1.6.7
17:04 ndevos jbd1: oh, that wont work, it needs to be 1.8 or newer for the gluster traffic :-/
17:05 jbd1 k, I'll upgrade it
17:05 ndevos jbd1: can you pass that .pcap on to me, either somewhere I can download it it, or email? (do gzip it first)
17:07 asku joined #gluster
17:07 * jbd1 tries out /dcc send
17:07 jbd1 not sure if it will work
17:08 jbd1 I haven't /dcc send anything since 1993 :)
17:08 Licenser joined #gluster
17:09 LessSeen_ joined #gluster
17:10 jbd1 ndevos: emailed it to you
17:11 ndevos jbd1: thanks, checking now
17:12 kmai007 does anybody know why gluster is trying to lstat the contents inside of a file?  http://fpaste.org/98440/64267139/
17:12 glusterbot Title: #98440 Fedora Project Pastebin (at fpaste.org)
17:12 kmai007 i'm not understanding why gluster is wanting to do this operation
17:12 ndevos jbd1: where did you send them to?
17:13 jbd1 ndevos: I emailed it to your redhat.com account
17:14 ndevos jbd1: hmm, must still be in flight somewhere there...
17:14 jbd1 ndevos: I also sent a dcc send request here on IRC, but it looks like you didn't pick that up
17:14 coredump joined #gluster
17:14 jbd1 ndevos: the pcap is only 26k so I didn't bother gzipping it
17:14 ndevos jbd1: dcc normally does not work... it'll be in my inbox soon
17:15 jbd1 ndevos: explains why I haven't used it in 20 years
17:15 ndevos jbd1: I'll have dinner now, maybe I'll have a look at it later, otherwise I'll respond to your email tomorrow
17:15 jbd1 ndevos: sure, no rush
17:15 ndevos cool, cya!
17:17 sks joined #gluster
17:25 jbd1 it would be cool to have command-line libgfapi tools-- like gfapicp, for example, would pull a file from a volume via gfapi or put a file on the volume via gfapi
17:29 jbd1 kmai007: do you have a tool that is trying to open the contents of the file as a filename?  That's what it looks like, at least
17:30 mjsmith2 joined #gluster
17:31 gmcwhistler joined #gluster
17:34 kmai007 jbd1: its a coldfusion app. i don't know where to go from the brick logs http://fpaste.org/98446/98965616/
17:34 glusterbot Title: #98446 Fedora Project Pastebin (at fpaste.org)
17:36 ctria joined #gluster
17:37 jbd1 kmai007: I'd recommend looking at the file open calls there.  I don't know coldfusion but it looks like there is something that is attempting to load the file wrong, e.g. data = read_file(/path/to/filename); stuff = read_file(/path/to/data)
17:37 jbd1 kmai007: that second call would of course cause the error you're seeing
17:38 kmai007 will do , thanks jbd1
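jbd1's theory, that a second read treats the first file's contents as another filename, can be reproduced in a couple of lines (hypothetical file name; not the actual ColdFusion code):

```shell
# the first read succeeds; the second uses the returned data as a path,
# which is exactly the kind of bogus lookup/lstat the brick logs would show
printf 'hello' > /tmp/testfile.txt
data=$(cat /tmp/testfile.txt)
cat "$data" 2>&1 || true    # fails: there is no file named "hello"
```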
17:44 glusterbot New news from newglusterbugs: [Bug 1088589] Failure in gf_log_init reopening stderr <https://bugzilla.redhat.com/show_bug.cgi?id=1088589>
17:45 Matthaeus joined #gluster
17:49 faizan_ joined #gluster
17:50 faizan joined #gluster
17:55 faizan joined #gluster
17:55 nwl joined #gluster
17:55 * nwl waves
18:03 LessSeen joined #gluster
18:06 sjusthome joined #gluster
18:09 theron joined #gluster
18:16 nwl left #gluster
18:23 jbd1 woo hoo! got mod_proxy_gluster working
18:26 ctria joined #gluster
18:28 ctria joined #gluster
18:29 ndk joined #gluster
18:36 coredump joined #gluster
18:36 ctria joined #gluster
18:38 ctria joined #gluster
18:42 ctria joined #gluster
18:42 ctria joined #gluster
18:44 ctria joined #gluster
18:45 wushudoin left #gluster
18:50 Matthaeus1 joined #gluster
18:54 ctria joined #gluster
18:56 MacWinner joined #gluster
18:56 ctria joined #gluster
18:58 ctria joined #gluster
19:00 tg2 joined #gluster
19:04 jcsp joined #gluster
19:14 sprachgenerator joined #gluster
19:15 dr_bob joined #gluster
19:19 jag3773 joined #gluster
19:20 B21956 joined #gluster
19:28 jobewan anyone here using gluster with vmware to host their vmdk files?
19:39 semiosis iirc some people have used nfs mounts for vmw
19:47 ctria joined #gluster
19:49 lpabon joined #gluster
19:55 ctrianta joined #gluster
19:56 jobewan yea, and I'm looking to move my vmware infrastructure away from drbd/iscsi and utilize gluster/nfs.  I was wondering if here has input is all
19:56 jobewan *someone
19:57 ctria joined #gluster
20:01 rwheeler joined #gluster
20:01 ctria joined #gluster
20:03 ctria joined #gluster
20:14 ctria joined #gluster
20:14 dbruhn jobewan, there are several guys running it using NFS. Same advice I give everyone with that one: test it, your mileage may vary
20:15 wushudoin joined #gluster
20:19 ctria joined #gluster
20:23 ctria joined #gluster
20:25 ctria joined #gluster
20:27 Matthaeus joined #gluster
20:42 badone_ joined #gluster
21:14 jbrooks joined #gluster
21:22 kaptk2 joined #gluster
21:28 Mneumonik joined #gluster
21:37 Mneumonik joined #gluster
21:41 jbrooks joined #gluster
21:42 diegows joined #gluster
21:42 sroy_ joined #gluster
21:46 Mneumonik joined #gluster
21:49 kmai007 joined #gluster
21:56 Mneumonik joined #gluster
22:03 Mneumonik joined #gluster
22:04 ricky-ticky joined #gluster
22:09 sprachgenerator joined #gluster
22:28 tryggvil joined #gluster
22:45 MeatMuppet joined #gluster
22:48 vata joined #gluster
22:54 gdubreui joined #gluster
23:05 MacWinner joined #gluster
23:10 MrAbaddon joined #gluster
23:23 fidevo joined #gluster
23:30 gdubreui joined #gluster
23:48 badone_ joined #gluster
23:49 primechuck joined #gluster
23:50 primechuck joined #gluster
23:52 primechuck joined #gluster
