
IRC log for #gluster, 2015-04-09


All times shown according to UTC.

Time Nick Message
00:16 halfinhalfout joined #gluster
00:22 capri joined #gluster
00:32 T3 joined #gluster
00:37 capri joined #gluster
00:38 uebera|| joined #gluster
00:46 harish_ joined #gluster
01:10 halfinhalfout joined #gluster
01:16 lexi2 joined #gluster
01:22 T3 joined #gluster
01:28 badone_ joined #gluster
01:30 wkf joined #gluster
01:32 halfinhalfout joined #gluster
01:39 hchiramm joined #gluster
01:48 ilbot3 joined #gluster
01:48 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:57 side_control joined #gluster
02:04 tg2 semiosis, any particular reason 3.6.2 isn't avail for precise?
02:04 tg2 any issues compiling it from source on precise?
02:06 gildub joined #gluster
02:29 nangthang joined #gluster
02:30 siel joined #gluster
02:30 siel joined #gluster
02:44 glusterbot News from newglusterbugs: [Bug 1210137] qcow2 image creation using qemu-img hits segmentation fault <https://bugzilla.redhat.com/show_bug.cgi?id=1210137>
02:51 DV_ joined #gluster
03:02 maveric_amitc_ joined #gluster
03:06 lalatenduM joined #gluster
03:10 bharata-rao joined #gluster
03:11 rafi joined #gluster
03:15 glusterbot News from resolvedglusterbugs: [Bug 1210029] Error in QEMU logs of VM, while using QEMU's native driver for glusterfs <https://bugzilla.redhat.com/show_bug.cgi?id=1210029>
03:17 DV joined #gluster
03:18 kevein joined #gluster
03:18 prg3 joined #gluster
03:19 kdhananjay joined #gluster
03:28 hchiramm joined #gluster
03:51 kanagaraj joined #gluster
03:59 kumar joined #gluster
04:05 nbalacha joined #gluster
04:06 atinmu joined #gluster
04:06 meghanam joined #gluster
04:06 smohan joined #gluster
04:08 ppai joined #gluster
04:12 rjoseph joined #gluster
04:20 RameshN joined #gluster
04:20 kkeithley1 joined #gluster
04:30 poornimag joined #gluster
04:35 soumya joined #gluster
04:37 jiffin joined #gluster
04:41 Bhaskarakiran joined #gluster
04:42 rafi joined #gluster
04:46 ku joined #gluster
04:46 ku hi all
04:46 kotreshhr joined #gluster
04:48 pppp joined #gluster
04:51 ku aloha?
04:52 ku i have a problem with glusterfs that causing permission denied intermittent
04:52 ku someday it works whole day, someday it getting denied error for few min
04:53 halfinhalfout joined #gluster
04:55 misc nothing in log ?
04:57 ppai joined #gluster
05:04 nishanth joined #gluster
05:08 gem joined #gluster
05:09 lalatenduM joined #gluster
05:10 schandra joined #gluster
05:13 ndarshan joined #gluster
05:13 T3 joined #gluster
05:14 Anjana joined #gluster
05:15 deepakcs joined #gluster
05:22 dusmant joined #gluster
05:22 nishanth joined #gluster
05:24 raghu joined #gluster
05:25 vikumar joined #gluster
05:28 ku [2015-04-09 04:07:28.617602] W [nfs3-helpers.c:3470:nfs3_log_newfh_res] 0-nfs-nfsv3: XID: 5d83581a, LOOKUP: NFS: 13(Permission denied), POSIX: 13(Permission denied), FH: exportid 00000000-0000-0000-0000-000000000000, gfid 00000000-0000-0000-0000-000000000000
05:28 ku [2015-04-09 04:07:28.617507] W [client-rpc-fops.c:2766:client3_3_lookup_cbk] 0-umShare-client-0: remote operation failed: Permission denied. Path: /config (00000000-0000-0000-0000-000000000000) [2015-04-09 04:07:28.617577] W [client-rpc-fops.c:2766:client3_3_lookup_cbk] 0-umShare-client-1: remote operation failed: Permission denied. Path: /config (00000000-0000-0000-0000-000000000000)
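For anyone debugging the same intermittent Permission denied through Gluster's built-in NFS server, a few generic checks narrow the cause. A hedged sketch — the volume name umShare comes from ku's log above, the brick path is illustrative:

    # Review auth.allow/nfs.* options that gate access to the export
    gluster volume info umShare
    # Confirm the backing brick directory has the expected owner/mode
    stat /bricks/umShare/config
    # Watch the server-side NFS translator log during a failure window
    tail -f /var/log/glusterfs/nfs.log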
05:29 hchiramm_ joined #gluster
05:29 Anjana1 joined #gluster
05:30 lalatenduM joined #gluster
05:30 jiffin1 joined #gluster
05:32 pppp joined #gluster
05:36 Manikandan joined #gluster
05:36 Philambdo joined #gluster
05:40 kotreshhr1 joined #gluster
05:44 soumya joined #gluster
05:44 kkeithley1 joined #gluster
05:44 ppai joined #gluster
05:45 anil joined #gluster
05:45 karnan joined #gluster
05:47 atinmu joined #gluster
05:47 ndarshan joined #gluster
05:49 nbalacha joined #gluster
05:52 poornimag joined #gluster
06:06 hagarth joined #gluster
06:07 bala joined #gluster
06:10 atalur joined #gluster
06:10 kdhananjay joined #gluster
06:15 karnan_ joined #gluster
06:15 T3 joined #gluster
06:27 Bhaskarakiran joined #gluster
06:30 overclk joined #gluster
06:32 kdhananjay joined #gluster
06:37 kotreshhr joined #gluster
06:39 jiffin joined #gluster
06:39 spandit joined #gluster
06:39 mator joined #gluster
06:48 jtux joined #gluster
06:48 ricky-ticky joined #gluster
06:51 ricky-ticky2 joined #gluster
06:52 ppai joined #gluster
06:52 atinmu joined #gluster
06:52 lexi2 joined #gluster
06:53 stickyboy joined #gluster
06:53 DV_ joined #gluster
06:54 soumya joined #gluster
06:54 nbalacha joined #gluster
06:54 ndarshan joined #gluster
06:55 poornimag joined #gluster
06:57 hellomichibye joined #gluster
06:58 hellomichibye hi. I tried to gluster peer probe yesterday and as a result glusterd crashed. I attach the log entry: https://gist.github.com/michaelwittig/a9e9aabccbe131e0c43b
06:58 hellomichibye any ideas what could be the cause for that?
06:58 hellomichibye now it is working again :)
06:58 kkeithley1 joined #gluster
07:01 nangthang joined #gluster
07:03 atinmu hellomichibye, Could you tell me which version of gluster r u using?
07:04 hellomichibye 2.6.3
07:04 atinmu 2.6.3??
07:04 hellomichibye 3.6.2
07:04 hellomichibye sry :D
07:05 hellomichibye gluster --version
07:05 hellomichibye glusterfs 3.6.2 built on Jan 22 2015 12:58:11
07:05 atinmu hellomichibye, could u raise a bug for the same collecting sosreport with -a option?
07:06 atinmu hellomichibye, we would need all the logs to get to the root cause
07:06 karnan joined #gluster
07:07 atinmu hellomichibye, I saw the backtrace it did crash in peer probe
07:07 hellomichibye okay. the full logs are no longer available. i terminated the aws instances in the meantime.
07:07 hellomichibye if you talk about logs you mean /var/log/glusterfs/etc-glusterfs-glusterd.vol.log ?
07:07 [Enrico] joined #gluster
07:08 hellomichibye so I can save the file the next time this happens
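For reference, the data atinmu is asking for can be captured before a cloud instance is terminated; a minimal sketch, assuming the standard gluster log location:

    # Full sosreport with all plugins/options, as requested above
    sosreport -a
    # Archive every gluster log, including etc-glusterfs-glusterd.vol.log
    tar czf gluster-logs.tgz /var/log/glusterfs/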
07:15 glusterbot News from newglusterbugs: [Bug 1210185] [RFE- SNAPSHOT] : Provide user the option to list the failed jobs <https://bugzilla.redhat.com/show_bug.cgi?id=1210185>
07:15 glusterbot News from newglusterbugs: [Bug 1210182] Brick start fails, if source is compiled with disable-tiering. <https://bugzilla.redhat.com/show_bug.cgi?id=1210182>
07:16 T3 joined #gluster
07:16 Bhaskarakiran_ joined #gluster
07:33 fsimonce joined #gluster
07:33 Anjana joined #gluster
07:34 Slashman joined #gluster
07:40 brianw using ubuntu 14.04 on (2) physical machines to host my glusterfs backup & lxc host for samba ad/dc containers & glusterfs/ctdb/samba DFS server containers. Each physical machine can run the network without the other. When the other comes back online, all is synced...
07:40 hellomichibye joined #gluster
07:43 anrao joined #gluster
07:45 glusterbot News from newglusterbugs: [Bug 1210193] Commands hanging on the client post recovery of failed bricks <https://bugzilla.redhat.com/show_bug.cgi?id=1210193>
07:45 glusterbot News from newglusterbugs: [Bug 1210204] [SNAPSHOT] - Unable to delete scheduled jobs <https://bugzilla.redhat.com/show_bug.cgi?id=1210204>
07:46 glusterbot News from newglusterbugs: [Bug 1210205] 3.4.7 Repo not functional; repomod.xml not found!! <https://bugzilla.redhat.com/show_bug.cgi?id=1210205>
07:55 jxfx left #gluster
07:56 DV joined #gluster
08:08 liquidat joined #gluster
08:08 soumya joined #gluster
08:12 poornimag joined #gluster
08:16 ktosiek joined #gluster
08:17 T3 joined #gluster
08:20 harish_ joined #gluster
08:26 purpleidea fubada: you'll need to patch gluster::mount to support the file attributes as parameters (same as the file object with the same defaults) ... should be a straightforward patch. i'll review when ready. they should default to undef if not specified...
08:30 Norky joined #gluster
08:31 jermudgeon joined #gluster
08:34 aravindavk joined #gluster
08:36 ppai joined #gluster
08:39 smohan joined #gluster
08:41 Anjana joined #gluster
08:48 semoule joined #gluster
08:48 poornimag joined #gluster
08:52 atinmu hellomichibye, yes
09:02 Anjana1 joined #gluster
09:09 nbalacha joined #gluster
09:16 glusterbot News from resolvedglusterbugs: [Bug 1100262] info file missing from /var/lib/glusterd/vols/<vol-name>. Causes crash <https://bugzilla.redhat.com/show_bug.cgi?id=1100262>
09:17 pcaruana joined #gluster
09:18 T3 joined #gluster
09:18 kdhananjay joined #gluster
09:18 kshlm joined #gluster
09:18 kshlm joined #gluster
09:21 twisted` joined #gluster
09:21 twisted` hey, is it required (I can't find a 'must' for it, just everything points at it) to have a separate volume/disk for the Gluster brick?
09:22 twisted` because all servers I got are delivered with a software raid without LVM, so I could spend some time splitting it up but, if possible I'd avoid it and just use a directory.
09:26 Leildin I think you can force it to use the system's disk but it isn't recommended at all :/
09:27 _shaps_ joined #gluster
09:28 twisted` LVM? is that an option?
09:29 rjoseph twisted`: If you want to make use of features like snapshot LVM is a must
09:30 twisted` hmm then I'll try to setup the server again but with LVM, I'll see how that goes
09:30 rjoseph twisted`: and each brick should be on independent LV.
09:30 twisted` fun times ahead
09:30 twisted` LV or VG?
09:30 Leildin it's worth it in the long run to have separate system and bricks
09:31 rjoseph you need to use thin provisioning for LV
09:32 rjoseph Each brick on an independent LV. One or more LV can be there in VG
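rjoseph's layout — one thin LV per brick, thin provisioning being a prerequisite for Gluster snapshots — looks roughly like the following sketch; the device, volume group, and sizes are illustrative:

    pvcreate /dev/sdb
    vgcreate gluster_vg /dev/sdb
    # A thin pool, then one thin LV per brick carved out of it
    lvcreate -L 1.8T -T gluster_vg/thinpool
    lvcreate -V 1.8T -T gluster_vg/thinpool -n brick1_lv
    # XFS with 512-byte inodes is the commonly recommended brick filesystem
    mkfs.xfs -i size=512 /dev/gluster_vg/brick1_lv
    mkdir -p /bricks/brick1
    mount /dev/gluster_vg/brick1_lv /bricks/brick1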
09:41 hellomichibye joined #gluster
09:44 R0ok_ joined #gluster
09:48 nbalacha joined #gluster
09:48 dusmant joined #gluster
09:53 anrao joined #gluster
10:06 twisted` I'll have to see if I can apply LVM after the fact without destroying the system
10:06 twisted` not sure why by default it's not always with LVM actually
10:11 ninkotech_ joined #gluster
10:13 Anjana joined #gluster
10:16 glusterbot News from newglusterbugs: [Bug 1210256] gluster volume info --xml gives back incorrect typrStr in xml <https://bugzilla.redhat.com/show_bug.cgi?id=1210256>
10:17 maveric_amitc_ joined #gluster
10:18 ctria joined #gluster
10:18 T3 joined #gluster
10:22 Anjana joined #gluster
10:32 gildub joined #gluster
10:32 ppai joined #gluster
10:35 ira joined #gluster
10:35 FrankPan joined #gluster
10:51 aravindavk joined #gluster
10:52 harish_ joined #gluster
10:53 schandra joined #gluster
10:53 _shaps_ joined #gluster
10:57 ericm joined #gluster
11:05 nbalacha joined #gluster
11:07 dusmant joined #gluster
11:16 glusterbot News from resolvedglusterbugs: [Bug 826021] Geo-rep ip based access control is broken. <https://bugzilla.redhat.com/show_bug.cgi?id=826021>
11:16 glusterbot News from resolvedglusterbugs: [Bug 820428] [RFE] Geo-replication is not automatically restarted on remaining Masters <https://bugzilla.redhat.com/show_bug.cgi?id=820428>
11:17 hellomichibye joined #gluster
11:17 kumar joined #gluster
11:19 T3 joined #gluster
11:27 Pupeno joined #gluster
11:27 LebedevRI joined #gluster
11:28 ppai joined #gluster
11:30 soumya joined #gluster
11:36 _Bryan_ joined #gluster
11:46 glusterbot News from resolvedglusterbugs: [Bug 1024465] Dist-geo-rep: Crawling + processing for 14 million pre-existing files take very long time <https://bugzilla.redhat.com/show_bug.cgi?id=1024465>
11:47 hchiramm_ joined #gluster
11:49 mator joined #gluster
11:52 soumya joined #gluster
11:55 Debloper joined #gluster
12:03 deniszh joined #gluster
12:05 liquidat joined #gluster
12:12 nshaikh joined #gluster
12:12 anil joined #gluster
12:13 ira joined #gluster
12:15 schandra joined #gluster
12:16 Anjana joined #gluster
12:19 soumya joined #gluster
12:20 gem joined #gluster
12:20 T3 joined #gluster
12:21 hellomichibye joined #gluster
12:22 kanagaraj joined #gluster
12:24 pdrakeweb joined #gluster
12:30 hagarth joined #gluster
12:35 Gill joined #gluster
12:38 rwheeler joined #gluster
12:41 Bhaskarakiran joined #gluster
12:42 Gill left #gluster
12:46 ppai joined #gluster
12:53 Philambdo joined #gluster
12:55 wkf joined #gluster
12:57 B21956 joined #gluster
12:58 chirino joined #gluster
12:59 o5k_ joined #gluster
13:01 halfinhalfout joined #gluster
13:02 nangthang joined #gluster
13:03 bennyturns joined #gluster
13:07 xiu joined #gluster
13:13 Gill joined #gluster
13:16 ppai joined #gluster
13:16 ppai joined #gluster
13:17 glusterbot News from newglusterbugs: [Bug 1207547] BitRot :- If bitrot is not enabled for given volume then scrubber should not crawl bricks of that volume and should not update vol file for that volume <https://bugzilla.redhat.com/show_bug.cgi?id=1207547>
13:17 glusterbot News from resolvedglusterbugs: [Bug 1210205] 3.4.7 Repo not functional; repomod.xml not found!! <https://bugzilla.redhat.com/show_bug.cgi?id=1210205>
13:20 hamiller joined #gluster
13:21 T3 joined #gluster
13:24 hellomichibye joined #gluster
13:29 georgeh-LT2 joined #gluster
13:31 nciardo joined #gluster
13:35 kotreshhr left #gluster
13:37 schandra joined #gluster
13:38 dgandhi joined #gluster
13:39 twisted` hey, when I try to create a lockfile on a gluster mounted volume I get: lockfile creation failed: Value too large for defined data type
13:39 twisted` lockfile-create --retry 20 filename
13:39 twisted` is what I use, on any other volume it works except the gluster mounted one
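That message is EOVERFLOW, which on FUSE mounts often means a 32-bit application received a 64-bit inode number. Whether or not that is the cause here, the native client has a mount option that hashes inodes down to 32 bits; a sketch with illustrative server and volume names:

    # Remount asking the client to present 32-bit inode numbers
    mount -t glusterfs -o enable-ino32 server:/volname /mnt/gluster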
13:40 rafi1 joined #gluster
13:42 bennyturns joined #gluster
13:43 bennyturns joined #gluster
13:47 glusterbot News from newglusterbugs: [Bug 1210338] file copy operation fails on nfs <https://bugzilla.redhat.com/show_bug.cgi?id=1210338>
13:47 glusterbot News from resolvedglusterbugs: [Bug 764655] NetBSD port <https://bugzilla.redhat.com/show_bug.cgi?id=764655>
13:48 hchiramm_ joined #gluster
13:50 plarsen joined #gluster
13:56 halfinhalfout joined #gluster
13:57 xiu joined #gluster
14:02 ghenry joined #gluster
14:07 hagarth joined #gluster
14:12 lalatenduM joined #gluster
14:14 DV_ joined #gluster
14:16 nciardo joined #gluster
14:17 cicero semiosis: sorry to bother you again -- if upgrading from 3.3 is currently not an option for me, is there a way i could rebuild the 3.3 precise deb you had on your ppa? :\
14:18 cicero semiosis: we really are looking to upgrade but right now we're just trying to mitigate the risk of losing a brick and then being SOL :(
14:21 T3 joined #gluster
14:27 nciardo hey guys... can i use 2 servers (2 TB of storage for each server) to build a 4TB-"virtual disk"
14:27 nciardo ^
14:27 nciardo ?
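A plain distributed volume (the default when neither replica nor stripe is given) does exactly this: brick capacities add up, but there is no redundancy, so losing either server loses the files stored on it. A minimal sketch with illustrative names:

    gluster peer probe server2
    gluster volume create bigvol server1:/data/brick1 server2:/data/brick1
    gluster volume start bigvol
    mount -t glusterfs server1:/bigvol /mnt/bigvol   # ~4TB visible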
14:27 _Bryan_ joined #gluster
14:30 wushudoin joined #gluster
14:33 harish joined #gluster
14:34 kshlm joined #gluster
14:37 lpabon joined #gluster
14:43 kdhananjay joined #gluster
14:43 shaunm joined #gluster
14:45 nbalacha joined #gluster
14:47 glusterbot News from newglusterbugs: [Bug 1202218] Disperse volume: Input/output error on nfs mount after the volume start force <https://bugzilla.redhat.com/show_bug.cgi?id=1202218>
14:51 chirino joined #gluster
14:51 semoule joined #gluster
14:52 virusuy joined #gluster
14:52 virusuy joined #gluster
14:54 semiosis cicero: what is the exact package version?  i'll check my old hard drive to see if i still have it
14:55 semiosis cicero: i need the full version 3.3.X-ubuntuY~releaseZ
14:55 T3 joined #gluster
14:55 semiosis cicero: if you promise to always keep a copy of external dependencies from now on ;)
14:58 semoule joined #gluster
14:59 semoule joined #gluster
15:00 semoule joined #gluster
15:05 TealS joined #gluster
15:05 DV joined #gluster
15:09 _Bryan_ joined #gluster
15:13 hamiller joined #gluster
15:15 theron joined #gluster
15:17 bennyturns joined #gluster
15:17 glusterbot News from newglusterbugs: [Bug 892808] [FEAT] Bring subdirectory mount option with native client <https://bugzilla.redhat.com/show_bug.cgi?id=892808>
15:20 anrao joined #gluster
15:30 fubada purpleidea: hi, PR going in
15:30 fubada let me know if theres anyhting else ill need to do please
15:31 wkf joined #gluster
15:44 coredump joined #gluster
15:47 glusterbot News from newglusterbugs: [Bug 1210404] BVT; Selinux throws AVC errors while running DHT automation on Rhel6.6 <https://bugzilla.redhat.com/show_bug.cgi?id=1210404>
16:04 hagarth joined #gluster
16:04 Pupeno joined #gluster
16:05 Asako joined #gluster
16:05 Asako hello, has anybody successfully compiled glusterfs 3.6.2 on FreeBSD 10?
16:06 Asako getting an error when I run make.  /root/glusterfs-3.6.2/rpc/rpc-lib/src/rpc-clnt.c:1152: undefined reference to `xdr_auth_glusterfs_parms_v2'
16:19 corretico joined #gluster
16:21 T3 joined #gluster
16:22 JoeJulian Asako: I know it gets built and tested on some flavor of bsd. I haven't run across anybody here that has claimed to use bsd yet.
16:22 Asako running FreeBSD 10.1
16:23 Asako clang is the default compiler
16:23 JoeJulian manu@netbsd.org does most of the bsd work.
16:23 JoeJulian I would try emailing him directly.
16:24 Asako ok, thanks
16:26 soumya joined #gluster
16:26 deniszh joined #gluster
16:35 nangthang joined #gluster
16:40 T3 joined #gluster
16:49 bene2 joined #gluster
16:50 tg2 is there a way to remount a volume if glusterfs process fails?
16:50 tg2 I have an intermittent crash with lots of i/o on a volume (~5-6gbps) and it just drops the endpoint
16:51 tg2 vs writing a bash script to check the mount and force remount
16:51 cicero semiosis: 3.3.2-ubuntu1~precise2 <3 <3 <3
16:52 cicero semiosis: yes i will carry an external hard drive with such dependencies
16:52 cicero semiosis: please and thank you.
16:53 Asako JoeJulian, looks like it compiles using gcc
16:53 Asako clang errors out
16:53 tg2 anything to be concerned about with excessive "entry <x> missing on subvol storage-client-x" in client logs?
16:54 DV joined #gluster
16:55 JoeJulian tg2: only if it shouldn't be missing I guess.
16:56 JoeJulian tg2: There's no built-in way to force a restart. You could make a systemd service that could do it.
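Pending a proper service unit, the bash-script route tg2 mentions can be as small as the following cron-driven sketch; the mountpoint and volume are illustrative:

    #!/bin/bash
    # Remount watchdog: detect a dead FUSE endpoint and remount it
    MNT=/mnt/gluster
    VOL=server:/volname
    if ! mountpoint -q "$MNT"; then
        umount -l "$MNT" 2>/dev/null   # lazily clear the stale mount
        mount -t glusterfs "$VOL" "$MNT"
    fi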
16:57 theron_ joined #gluster
16:57 semiosis cicero: i found the source package.  can you build the binaries from it or do you need me to do that too?
16:58 tg2 this is the client-side heal, correct?
16:58 tg2 I think the crash is becuase its using 3.6.1 instead of 3.6.2 which the rest of the cluster is using...
16:58 tg2 i'll compile 3.6.2 from source and see if it fixes that
16:58 cicero semiosis: i can try to build from source myself
16:58 cicero semiosis: you are already doing enough
17:00 semiosis if you're not already familiar with the debian build process it would be difficult to learn.  it will take me another hour or two.  i'll let you know
17:01 cicero semiosis: ok, thank you very much
17:01 cicero i've tried to build PPAs from source in the past and i get lost in a chrooted jail of sadness
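For the record, rebuilding binaries from an Ubuntu source package is mechanical once the tooling is in place; a hedged sketch using the version string cicero gave above:

    apt-get install build-essential devscripts
    apt-get build-dep glusterfs          # pull the build dependencies (needs deb-src lines)
    dpkg-source -x glusterfs_3.3.2-ubuntu1~precise2.dsc
    cd glusterfs-3.3.2
    debuild -us -uc                      # unsigned build; .debs land one directory up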
17:04 JoeJulian tg2: No, I'm pretty sure that's just a message that indicates that the client tried to open a file that wasn't there. It was a noisy log message that got added.
17:05 JoeJulian tg2: but yes, there were crash bugs in 3.6.1.
17:06 semiosis cicero: debian could not have made the packaging process harder even if they really tried
17:06 tg2 figured, only the 3.6.1 clients crash
17:06 cicero semiosis: good to know it's not just me
17:07 tg2 no 3.6.2 for precise in the ppa tho
17:07 semiosis tg2: ftbfs
17:07 semiosis @lucky ftbfs
17:07 glusterbot semiosis: http://en.wikipedia.org/wiki/FTBFS
17:07 tg2 precise wont build?
17:07 cicero heh, totally just looked that up
17:07 semiosis tg2: upgreyedd!
17:07 rotbeard joined #gluster
17:07 tg2 GYAD DAMNIT
17:07 tg2 looks like dist upgrade time
17:07 tg2 lol
17:08 semiosis @lucky upgreyedd
17:08 glusterbot semiosis: http://www.youtube.com/watch?v=t7zqwnCRPqQ
17:08 tg2 do I get bonus points for running 14.04 inside a vm inside 12.04
17:08 semiosis tg2: you get all the bonus points
17:08 semiosis tg2++
17:08 semiosis tg2++
17:08 semiosis tg2++
17:08 semiosis tg2++
17:08 glusterbot semiosis: tg2's karma is now 2
17:08 semiosis tg2++
17:08 glusterbot semiosis: tg2's karma is now 3
17:08 glusterbot semiosis: tg2's karma is now 4
17:08 glusterbot semiosis: tg2's karma is now 5
17:08 glusterbot semiosis: tg2's karma is now 6
17:08 tg2 aw giggedy
17:08 cicero haha
17:08 tg2 what if i get it to compile
17:08 tg2 how many bonus points
17:09 tg2 I noticed no libcmocka-dev
17:09 tg2 in 12.04
17:10 tg2 semiosis, https://bugzilla.redhat.com/show_bug.cgi?id=1206744 ?
17:10 glusterbot Bug 1206744: high, high, ---, rkavunga, POST , current glusterfs fails to build on Ubuntu Precise: 'RDMA_OPTION_ID_REUSEADDR' undeclared
17:10 alpha01_ joined #gluster
17:11 semiosis heh, neat
17:11 tg2 will it build with that patch?
17:11 semiosis i'll have to try that
17:11 tg2 I'll try
17:11 semiosis ok great
17:12 semiosis please update the bz with your results.  let me know if you need any guidance.  it will be a day or more before i have a chance to try it
17:13 Asako should I file a bug about builds failing using clang?
17:13 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
17:13 tg2 lol that bug might be big
17:14 JoeJulian Asako: you should, yes.
17:14 Asako I installed gcc and it works
17:14 tg2 ya that is the target compiler
17:14 JoeJulian I know there's been some interest in getting it to compile with other compilers, but not a lot of user interest.
17:15 Asako I'd love to use gluster instead of csync2
17:15 JoeJulian eek
17:15 Asako not a lot of clustered FS options in BSD unfortunately
17:16 tg2 not many
17:16 tg2 zfs
17:16 tg2 lol
17:17 tg2 what you can do if you feel like being particularly hacky is export your zvols as iscsi to a leenux host and run gluster server on that across all the "bricks"
17:17 tg2 might be cpu/network bound tho
17:18 tg2 as for a client, you can run centos inside beehive I think
17:18 Asako I'm trying to make the wordpress node autonomous
17:18 Asako besides the database at least
17:18 roost joined #gluster
17:18 tg2 ?
17:18 Asako HA wordpress
17:19 Asako the web content needs to stay in sync
17:19 tg2 right
17:19 Asako I could just make life easier and run centos :D
17:21 tg2 I have not seen the bsd guys make it run with clang yet
17:21 Asako glusterfs-3.6.2/rpc/rpc-lib/src/rpc-clnt.c:1152: undefined reference to `xdr_auth_glusterfs_parms_v2'
17:21 tg2 you on 10?
17:21 Asako if I knew how to fix that it would probably work
17:21 Asako 10.1
17:22 tdasilva joined #gluster
17:22 tg2 read: > I tried out the tarball. I had to install the following packages to get glusterfs to configure and compile: autogen, automake and bison. With those packages on the system, I was able to compile glusterfs using the Clang compiler on FreeBSD 10.0. The make install script worked and I was able to run glusterfs --version.
17:22 tg2 on bsd forums
17:22 Asako yeah, I've been there
17:23 tg2 doesn't work with 3.6.2 tho?
17:23 Asako I'm not sure if make is honoring my CC variable
17:24 Asako just built it using gcc on the other node
17:25 tg2 @ semiosis, Version: 1.0.14.1-2 is what is in repo for 12.04, as per patch requires at least 1.0.15 - i'll keep digging
17:25 Rapture joined #gluster
17:25 Asako make shows a lot of warnings
17:28 Asako hmm wtf
17:28 Asako how can it compile on one server and fail on the other?
17:28 Rapture joined #gluster
17:28 cicero what is the sound of one compile failing?
17:29 Asako same compiler too
17:30 hchiramm joined #gluster
17:31 Asako export CC=/usr/local/bin/gcc48; make
17:31 Asako that's all I did
17:32 tg2 semiosis, http://www.fpaste.org/209050/
17:32 tg2 :\
17:33 semiosis tg2: do you have pkg-config installed?
17:33 semiosis tg2: missing that causes weird config errors
17:33 tg2 ya
17:33 tg2 purge it?
17:33 semiosis tg2: if you do have that already, then what is the command not found?
17:33 semiosis you need pkg-config, dont purge it
17:34 tg2 wow solid retardation sorry
17:34 tg2 pasted the line numbers out of the patch
17:34 tg2 LOL
17:34 tg2 its been a long week
17:34 cicero hear you on that one
17:34 tg2 "those gotos look very weird"
17:34 cicero *raptor*
17:35 tg2 round 10
17:36 tg2 looks good configure going through
17:36 DV joined #gluster
17:36 chirino joined #gluster
17:37 tg2 so there are a few .debs you need to pull from quantal for the librdmacm 1.0.15 but its looking good so far
17:37 tg2 i'll fpaste up a compile guide
17:37 cicero good ole queasy quantal
17:38 tg2 http://i.imgur.com/ijr3s7J.png
17:38 cicero ;_;
17:39 TealS joined #gluster
17:39 tg2 maybe can put those two debs in the gluster ppa for precise so they get pulled if a user has the gluster ppa installed
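In case it helps someone replaying this, manually installing the two quantal packages looks like the sketch below; the pool paths and version suffixes are illustrative and should be checked against the archive:

    wget http://archive.ubuntu.com/ubuntu/pool/main/libr/librdmacm/librdmacm1_1.0.15-1_amd64.deb
    wget http://archive.ubuntu.com/ubuntu/pool/main/libr/librdmacm/librdmacm-dev_1.0.15-1_amd64.deb
    sudo dpkg -i librdmacm1_*.deb librdmacm-dev_*.deb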
17:40 dusmant joined #gluster
17:40 tg2 Asako, has anybody made that HA setup in docker?
17:41 Asako I'm sure somebody has
17:42 TealS left #gluster
17:43 Asako I'm just using plain old droplets
17:44 jermudgeon joined #gluster
17:45 tg2 fun fact, i know ben (and his brother moisey, owners/founders of digitalocean) since they started realitycheck hosting like a long ass time ago
17:45 tg2 now its called serverstack
17:46 Asako meh, just keeps giving me the same error
17:46 tg2 semiosis, works
17:53 Asako I've been in the hosting industry for 12 years
17:54 semiosis cicero: http://download.gluster.org/pub/gluster/glusterfs/3.3/3.3.2/Ubuntu/glusterfs-3.3.2-precise2.tar.gz
17:55 cicero semiosis: thank you very very much
17:55 semiosis you're welcome
17:55 theron joined #gluster
17:55 cicero also i just curl'ed that into stdout like an idiot
17:55 JoeJulian Would you like to download this language plugin for your terminal?
17:56 cicero haha
17:56 Asako hmm, I think the other server actually built it with clang
17:56 cicero yes because
17:56 cicero Length: 12710493 (12M) Æapplication/x-gzipÅ
17:56 cicero this tab is now crazy
17:57 cicero ok, gotten
17:57 cicero once again, #gluster is the best open source channel
17:57 * cicero bows
17:58 Asako hmm, got it
17:58 JoeJulian Thanks cicero. We try.
17:58 Asako do variables set using "set" in tcsh not apply globally?
17:59 Asako ran export CC=/usr/local/bin/gcc48 in bash and then ./configure found gcc
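The answer to Asako's question: in tcsh, set creates a shell-local variable that child processes such as configure and make never see; setenv is the equivalent of bash's export:

    # tcsh: exported to child processes
    setenv CC /usr/local/bin/gcc48
    # tcsh: shell-local only -- configure will NOT see this
    set CC=/usr/local/bin/gcc48
    # bash equivalent (what worked above)
    export CC=/usr/local/bin/gcc48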
17:59 JoeJulian I've never used tcsh
17:59 cicero i thought tcsh died out with disco in the 70s
17:59 * cicero shrugs
18:00 JoeJulian disco hasn't died... ;)
18:00 cicero haha
18:00 semiosis ahh the good ol' days.  my first exposure to *nix was an sgi indy, with tcsh as the default shell :)
18:01 cicero nice
18:01 Asako tcsh is the default shell in FreeBSD
18:01 cicero no way
18:01 cicero when did that happen
18:01 JoeJulian Oh, wait... now that you mentioned it... I did use tcsh on an old AIX box.
18:01 Asako since always
18:01 cicero damn
18:01 Asako maybe it used to be csh
18:02 cicero i always thought it was just /bin/sh
18:02 cicero and /usr/local/bin/bash
18:02 cicero was always in userland
18:02 Asako yeah, I always install bash
18:02 Asako usually just end up changing root's default shell to it
18:02 _Bryan_ joined #gluster
18:02 JoeJulian zsh++
18:02 glusterbot JoeJulian: zsh's karma is now 1
18:02 semiosis ha
18:02 Asako my job is all CentOS, bash is all I know
18:03 semiosis JoeJulian: icymi, i recently switched to mac(!) and have been using a tricked out zsh+oh-my-zsh shell.  it's awesome
18:03 JoeJulian yum -y install zsh
18:03 JoeJulian zsh, for a developer, is practically a necessity.
18:03 cicero i'm afraid if i get used to something like zsh, i'm not gonna know how to compute in a plain ole bash
18:03 * Asako gives up
18:03 semiosis git integration, command completion for all the things, yes!
18:04 Asako there's no reason the same source code compiles on one server and doesn't compile on the other
18:04 Asako same OS, same source file, same compiler
18:04 Asako /usr/src/glusterfs-3.6.2/rpc/rpc-lib/src/rpc-clnt.c:1152: undefined reference to `xdr_auth_glusterfs_parms_v2'
18:04 Asako collect2: error: ld returned 1 exit status
18:05 Asako somehow it magically worked on the first node :D
18:05 lalatenduM joined #gluster
18:05 atrius_ joined #gluster
18:07 Asako it's an error in the serialize function
18:07 Asako In function `xdr_serialize_glusterfs_auth':
18:09 tg2 semi
18:09 tg2 http://www.fpaste.org/209067/
18:09 tg2 amen zhs command completion
18:09 tg2 zsh *
18:09 semiosis tg2: i only get notified when you use my full nick, for future reference
18:10 tg2 yeah tab completion fail ;D
18:13 Rapture joined #gluster
18:14 DV joined #gluster
18:15 ekuric joined #gluster
18:15 sage joined #gluster
18:21 theron_ joined #gluster
18:24 Rapture joined #gluster
18:25 Asako does gluster require gmake to build?
18:26 Asako seems like make runs cc1 no matter what I set CC to
18:28 Asako and then the compile fails because clang doesn't like the code
18:30 coredump joined #gluster
18:31 DV joined #gluster
18:32 dbruhn joined #gluster
18:37 rafi joined #gluster
18:37 JoeJulian There might be some folks in #gluster-dev that might know better.
18:38 JoeJulian Since most of us have only built in Linux, I'm afraid we're not going to be of any help.
18:40 rafi1 joined #gluster
18:46 Asako ok
18:46 ProT-0-TypE joined #gluster
18:47 Asako maybe I'll just stick with linux
18:50 JoeJulian ^^^ and that right there is why I use Linux instead of BSD. It's just easier and I've got shit to do.
18:51 DV joined #gluster
18:52 theron joined #gluster
18:52 Asako JoeJulian, yeah
18:52 atinmu joined #gluster
18:53 jermudgeon joined #gluster
18:53 Asako BSD is like the red-headed stepchild
18:54 theron joined #gluster
18:54 Prilly joined #gluster
19:01 theron_ joined #gluster
19:09 Asako [root@wrdp2 ~]# strings -a /usr/local/sbin/gluster | grep -i clang
19:09 Asako FreeBSD clang version 3.4.1 (tags/RELEASE_34/dot1-final 208032) 20140512
19:09 Asako hmm
19:09 Asako so why can't I build it on my 2nd server?  It's the same code!
19:10 cicero have you tried ktracing on both boxes and then diffing those transcripts?
19:11 cicero or truss
19:11 cicero i dunno which one
19:12 Asako first one gave the same error and then worked after a few build attempts
19:14 Asako time to nuke it
19:19 theron joined #gluster
19:20 theron_ joined #gluster
19:23 JoeJulian Asako: sounds like maybe there's a race condition doing parallel builds?
19:23 Asako seems like it's just an undefined reference
19:23 Asako but I don't know how to fix it
19:28 JoeJulian but it's not always undefined. Since it's defined in the source, that suggests to me that when the part that fails tries to reference it, it's not there at that moment.
19:29 JoeJulian Subsequent builds race to that point differently and, occasionally, the reference does exist at that moment in time so the build succeeds.
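If that race theory is right, forcing a serial build should make the result deterministic; a sketch for FreeBSD, where autotools-generated Makefiles generally need GNU make (gmake) rather than BSD make:

    gmake clean
    gmake -j1 2>&1 | tee build.log   # single job: no parallel-make ordering races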
19:33 Asako hmm
19:34 Asako doesn't seem to matter which compiler I use
19:35 papamoose JoeJulian: I wish I had seen this yesterday: https://github.com/joejulian/python-gluster
19:35 papamoose JoeJulian: you are about to save me a ton of time. :)
19:42 JoeJulian papamoose: Sorry, but not a ton.
19:43 JoeJulian That was a start, but then I had to go a different direction.
19:44 JoeJulian If you want to continue it, though, I'd be happy to help.
19:46 tg2 Asako, probably dirty build directory
19:46 tg2 on first box
20:00 Asako could be
20:00 Asako I ran make clean before every build
20:03 JoeJulian Asako:  fyi, manu's answering emails right now. It might be a good time to hit him up.
20:03 Asako ok
20:04 Asako I'll send him a message
20:06 DV joined #gluster
20:07 roost joined #gluster
20:10 _ndevos joined #gluster
20:13 shaunm joined #gluster
20:23 Asako hmm, it works between freebsd and centos 7
20:43 Asako will mess with it later, thanks
20:54 georgeh-LT2 joined #gluster
21:06 badone_ joined #gluster
21:18 DV joined #gluster
21:33 wkf joined #gluster
21:44 plarsen joined #gluster
21:53 tessier Ugh....building a new gluster cluster. The mount -t glusterfs 10.0.2.143:/export/diska/brick /gluster/ command just hangs. And if I try to ls /gluster that hangs forever too. The box has to be rebooted. :(
21:56 JoeJulian tessier: Check your client logs. That does sound unexpected.
21:56 JoeJulian tessier: You can avoid the reboot, though, by just killing the glusterfs client.
21:56 tessier [2015-04-09 21:51:05.377025] I [client-handshake.c:1210:client_setvolume_cbk] 0-diska-client-1: Server and Client lk-version numbers are not same, reopening the fds
21:56 glusterbot tessier: This is normal behavior and can safely be ignored.
21:57 JoeJulian If you need a second pair of eyes, paste it to fpaste.org.
21:58 tessier http://fpaste.org/209164/42861667/
21:59 JoeJulian Odd that it should lock up fuse. It seems to fail quite reasonably.
21:59 JoeJulian "gluster volume list"
21:59 JoeJulian What's the name of your volume?
22:00 tessier # gluster volume list
22:00 tessier diska
22:00 tessier That's it. diska.
22:00 JoeJulian Then use "diska" instead of "/export/diska/brick" in your mount command.
22:00 JoeJulian "bricks" are for glusterfs' use only. You now use the volume.
22:01 tessier Ah.
22:01 tessier That's got to be the problem.
22:01 JoeJulian I'm sure.
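To spell out the fix: the mount source is server:/<volume-name>, never the brick directory. Against the volume above:

    mount -t glusterfs 10.0.2.143:/diska /gluster
    # fstab equivalent (_netdev delays the mount until networking is up)
    10.0.2.143:/diska  /gluster  glusterfs  defaults,_netdev  0 0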
22:02 tessier However, the gluster client now seems unkillable.
22:02 JoeJulian What instructions are you following? That really needs to be made more clear.
22:02 tessier Leave it to me to find an edge case or unexpected user behavior. :)
22:02 JoeJulian hehe
22:02 tessier https://www.gluster.org/documentation/Getting_started_rrqsg/ which is probably clear enough, I just didn't read carefully. I think putting multiple commands with ; is probably not a good idea.
22:03 JoeJulian I agree.
22:04 JoeJulian "Note: This example assumes Fedora 16" wtf? Might as well be expecting slackware...
22:04 coredump joined #gluster
22:04 JoeJulian Man... that whole thing is old.
22:04 tessier Really looking forward to getting gluster all working properly to replace my overly complicated iscsi/mdadm setup.
22:05 JoeJulian yikes. yeah. I thought about doing that once, before I found gluster.
22:05 tessier I've been running iscsi/mdadm for years and years.
22:11 ttkg joined #gluster
22:13 DV joined #gluster
22:15 theron joined #gluster
22:16 tessier joined #gluster
22:16 theron joined #gluster
22:19 DV joined #gluster
22:20 papamoose1 joined #gluster
22:22 dockbram joined #gluster
22:23 dockbram joined #gluster
22:27 bene2 joined #gluster
22:29 Larsen_ joined #gluster
22:30 mrEriksson joined #gluster
22:45 hchiramm_ joined #gluster
22:52 mrEriksson joined #gluster
22:53 Larsen_ joined #gluster
23:02 dastar joined #gluster
23:04 Larsen_ joined #gluster
23:35 jermudgeon joined #gluster
23:35 ninkotech_ joined #gluster
23:49 gildub joined #gluster
