IRC log for #gluster, 2014-08-20

All times shown according to UTC.

Time Nick Message
00:10 neofob joined #gluster
00:10 PeterA keep getting more and more
00:10 PeterA ] E [server-rpc-fops.c:1535:server_open_cbk
00:10 PeterA what does this means?
00:12 m0zes joined #gluster
00:16 PeterA is that safe to ignore?
00:16 sputnik13 joined #gluster
00:59 topshare joined #gluster
00:59 topshare_ joined #gluster
01:01 PeterA seems like there is a bigger prob i found on gluster
01:02 PeterA the git fsync doesn't work on glusterfs :(
01:02 vimal joined #gluster
01:07 pradeepto_ joined #gluster
01:07 pradeepto_ joined #gluster
01:08 PeterA _dist u here?
01:12 PeterA anyone here used glusterfs client for NFS proxy?
01:13 PeterA i am NFS sharing gfs with glusterfs client and getting error on lockf and fsync
01:13 PeterA with permission denied
01:20 pradeepto Hi
01:20 glusterbot pradeepto: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
01:21 lyang0 joined #gluster
01:21 pradeepto Any body here can help me with a single Gluster server setup on AWS EC2/EBS?
01:23 pradeepto After installing various packages etc, when I try to create a volume, I get the following error :
01:23 pradeepto sudo gluster volume create media-volume transport tcp x.x.x.x:/opt/gluster volume create: media-volume: failed: Host x.x.x.x is not in 'Peer in Cluster' state
01:24 pradeepto SELinux is in permissive mode and I have opened all necessary ports on the security group.
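For reference, the "is not in 'Peer in Cluster' state" failure above usually means the address handed to `volume create` does not resolve back to the local glusterd. A minimal sketch of checking and retrying on a single EC2 server; the volume name and brick path are taken from the log, and using the instance's own hostname is an assumption:

```shell
# Sketch only: on a lone server there are no peers, so the brick address
# must be one glusterd recognizes as itself. An elastic/public EC2 IP
# often is not; the internal hostname usually is.
sudo gluster peer status          # expect "Number of Peers: 0"
sudo gluster volume create media-volume transport tcp \
    "$(hostname -f)":/opt/gluster
sudo gluster volume start media-volume
sudo gluster volume info media-volume
```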
01:40 jjahns joined #gluster
01:41 jjahns running latest gluster and i'm seeing a problem reading from striped volume (2 bricks 2 servers)
01:41 jjahns write is fine
01:41 jjahns just testing things out
01:41 jjahns i get error message "transport endpoint not connected"
01:45 jjahns any place i can check to troubleshoot (kinda new at Gluster)
01:58 evanjfraser joined #gluster
02:30 julim joined #gluster
03:10 pradeepto_ joined #gluster
03:10 pradeepto_ joined #gluster
03:11 haomaiwa_ joined #gluster
03:13 haomaiw__ joined #gluster
03:15 _Bryan_ joined #gluster
03:19 plarsen joined #gluster
03:31 haomaiwa_ joined #gluster
03:37 plarsen joined #gluster
03:45 hagarth joined #gluster
03:47 agen7seven joined #gluster
03:52 haomaiw__ joined #gluster
03:53 RameshN joined #gluster
03:55 jjahns what kind of performance are people getting on 10 GbE?
03:56 kdhananjay joined #gluster
03:58 kanagaraj joined #gluster
03:59 shubhendu joined #gluster
04:03 itisravi joined #gluster
04:07 Humble joined #gluster
04:08 itisravi joined #gluster
04:10 ppai joined #gluster
04:11 hchiramm_ joined #gluster
04:14 prasanth_ joined #gluster
04:14 kanagaraj joined #gluster
04:18 lalatenduM joined #gluster
04:21 meghanam joined #gluster
04:24 rastar joined #gluster
04:28 nullck joined #gluster
04:29 Rafi joined #gluster
04:32 anoopcs joined #gluster
04:32 anoopcs joined #gluster
04:33 nbalachandran joined #gluster
04:38 nishanth joined #gluster
04:38 jjahns anyone?
04:41 PeterA yes
04:41 kshlm joined #gluster
04:41 PeterA i m having issue with gluster too :)
04:43 spandit joined #gluster
04:43 atalur joined #gluster
04:47 ramteid joined #gluster
04:49 atinmu joined #gluster
04:52 kanagaraj joined #gluster
04:55 jiffin joined #gluster
04:57 atinmu joined #gluster
04:59 kanagaraj joined #gluster
04:59 dusmantkp_ joined #gluster
04:59 shubhendu_ joined #gluster
05:01 kumar joined #gluster
05:10 nshaikh joined #gluster
05:13 hchiramm__ joined #gluster
05:17 topshare joined #gluster
05:25 hagarth joined #gluster
05:29 kdhananjay joined #gluster
05:29 latha joined #gluster
05:32 topshare joined #gluster
05:35 haomaiwa_ joined #gluster
05:44 cristov_mac joined #gluster
05:44 agen7seven Is there anyone here running Gluster as a storage backend for asterisk?
05:49 bala joined #gluster
05:50 cristov_mac hi~ #gluster. i'm just trying to find kernel tunning point for glusterfs. so i surfing gluster community and find somthing usefull document. but there are missing page http://community.gluster.org/a/linux-kernel-tuning-for-glusterfs/ where is this document? anyone know?
05:51 atinmu joined #gluster
05:56 karnan joined #gluster
05:58 Lee- joined #gluster
05:58 anoopcs cristov_mac: http://www.gluster.org/community/documentation/index.php/Linux_Kernel_Tuning
05:58 glusterbot Title: Linux Kernel Tuning - GlusterDocumentation (at www.gluster.org)
05:58 karnan joined #gluster
05:59 anoopcs cristov_mac: Was that helpful?
06:03 cristov_mac #anopcs great!! :very good!! thank you~ ^^
06:10 kanagaraj_ joined #gluster
06:12 kanagaraj_ joined #gluster
06:15 anoopcs cristov_mac: wc
06:29 pradeepto For running glusterfs on AWS EC2+EBS, is tcp the correct transport while creating volumes?
06:31 kanagaraj joined #gluster
06:32 pradeepto I have created a volume with 2 replicas and I can see the status seems to OK. But I don't see the files I have created in the volume (i.e mounted folder) on the first brick, when I look for it in the second brick
06:32 pradeepto What am I doing wrong here?
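A likely explanation for the missing files: with a replica volume, replication is performed by the glusterfs client, so files only land on both bricks when written through a mount of the volume, never by writing into (or checking) a brick directory directly. A sketch with placeholder names ("media-volume" and /opt/gluster are from earlier in the log; /mnt/gluster and server1 are assumptions):

```shell
# Mount the volume and write through the mount, not into the brick path.
sudo mkdir -p /mnt/gluster
sudo mount -t glusterfs server1:/media-volume /mnt/gluster
echo hello | sudo tee /mnt/gluster/test.txt

# Written this way, the file should show up in the brick directory
# (/opt/gluster/test.txt) on BOTH replica servers.
```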
06:35 kanagaraj joined #gluster
06:36 atinmu joined #gluster
06:39 kanagaraj_ joined #gluster
06:39 kanagaraj_ joined #gluster
06:45 kanagaraj__ joined #gluster
06:49 kke is there some way i could mount volumes from two different glusterfs versions on a single host? (old one doesn't accept new clients, new one doesn't accept old clients)
06:53 saurabh joined #gluster
06:55 Jamoflaw in the docs it mentions when a file is expected on a brick but, due to fix-layout or similar, the file is actually on another node. In the docs it suggests that the server broadcasts to the other nodes. Can anyone confirm if this is udp broadcast traffic or if it is targetted traffic to all nodes?
06:58 topshare joined #gluster
06:59 topshare joined #gluster
07:01 ctria joined #gluster
07:01 jjahns i think its targeted - i'm looking at tcpdump on my testing right now
07:01 jjahns not 100%
07:04 Jamoflaw Kk I did think targetted would be more sensible
07:05 Jamoflaw will make a difference in vlan placement of the cluster if its broadcast
07:05 topshare joined #gluster
07:07 Jamoflaw Thx btw :)
07:12 glusterbot New news from newglusterbugs: [Bug 1131846] remove-brick - once you stop remove-brick using stop command, status says ' failed: remove-brick not started.' <https://bugzilla.redhat.com/show_bug.cgi?id=1131846>
07:13 kanagaraj joined #gluster
07:18 haomaiwa_ joined #gluster
07:26 prasanth_ joined #gluster
07:28 andreask joined #gluster
07:34 itisravi_ joined #gluster
07:35 hagarth joined #gluster
07:43 topshare joined #gluster
07:44 vimal joined #gluster
07:47 ricky-ti1 joined #gluster
07:50 raghu` joined #gluster
07:53 lalatenduM joined #gluster
07:54 hchiramm_ joined #gluster
07:56 RameshN joined #gluster
07:57 nishanth joined #gluster
07:58 liquidat joined #gluster
08:15 hagarth joined #gluster
08:22 Lock-Aze joined #gluster
08:22 pdrakeweb joined #gluster
08:31 prasanth_ joined #gluster
08:31 topshare_ joined #gluster
08:35 karnan joined #gluster
08:40 kshlm joined #gluster
08:43 RameshN joined #gluster
08:48 ppai joined #gluster
08:50 Guest4988 joined #gluster
08:56 topshare joined #gluster
09:08 shubhendu_ joined #gluster
09:12 Slashman joined #gluster
09:25 Humble joined #gluster
09:33 Rafi_kc joined #gluster
09:39 ProT-0-TypE joined #gluster
09:41 LebedevRI joined #gluster
10:02 stickyboy joined #gluster
10:07 kanagaraj joined #gluster
10:12 deepakcs joined #gluster
10:13 rolfb joined #gluster
10:15 kanagaraj joined #gluster
10:16 spandit joined #gluster
10:19 kdhananjay joined #gluster
10:28 rastar joined #gluster
10:34 hagarth joined #gluster
10:40 kdhananjay joined #gluster
10:54 shubhendu_ joined #gluster
10:56 rastar joined #gluster
10:59 [LINKEDINLOGSRZA joined #gluster
11:08 liquidat joined #gluster
11:23 chirino joined #gluster
11:23 rastar joined #gluster
11:26 chirino_m joined #gluster
11:35 shubhendu_ joined #gluster
11:37 Guest4988 joined #gluster
11:41 chirino joined #gluster
11:42 kshlm joined #gluster
11:44 prasanth_ joined #gluster
11:44 ppai joined #gluster
11:47 chirino joined #gluster
11:50 diegows joined #gluster
11:50 chirino joined #gluster
11:51 chirino joined #gluster
11:52 ctria joined #gluster
11:54 chirino_m joined #gluster
11:58 ira joined #gluster
11:59 kkeithley gluster community meeting in 2 (two) minutes in #gluster-meeting@freenode
12:01 Slashman_ joined #gluster
12:06 Norky joined #gluster
12:09 rastar joined #gluster
12:10 todakure joined #gluster
12:11 ninthBit joined #gluster
12:12 ninthBit JoJulian, semiosis, and everyone else that helped along the way our production Gluster steup is up and running smoothly again.
12:12 ninthBit thank you guys!
12:13 ninthBit I have some development work to knock out but I do plan to document the issue we had with Gluster which was caused by different UIDs on each server :)  a mistake we caused but I think something in cluster could be done to help identify this issue when files are reported in the heal status.
12:13 kshlm joined #gluster
12:15 hchiramm_ ninthBit,  its awesome if u can open a bug on the same :)
12:17 vkoppad joined #gluster
12:17 kanagaraj joined #gluster
12:17 qdk joined #gluster
12:18 ninthBit hchiramm_: yes that will be the plan when i can lock down the exact method to replicate it.  we were able to replicate it in production but i would like to have a script that can replicate it on demand for you guys :)
12:19 hchiramm_ ninthBit, oh.ok.. that would be great!
12:19 hchiramm_ ninthBit++
12:19 glusterbot hchiramm_: ninthBit's karma is now 1
12:20 edward1 joined #gluster
12:21 atinmu joined #gluster
12:21 ctria joined #gluster
12:24 ninthBit semiosis: again, i would like to thank you for pushing the update to 3.4 for 3.4.5 that fixed the stability issue we had with our cluster. (B)
12:25 meghanam joined #gluster
12:27 itisravi joined #gluster
12:40 kumar joined #gluster
12:44 kanagaraj joined #gluster
12:46 kshlm joined #gluster
12:46 kshlm joined #gluster
12:47 necrogami joined #gluster
13:04 todakure joined #gluster
13:09 julim joined #gluster
13:14 plarsen joined #gluster
13:25 topshare joined #gluster
13:31 todakure i need to install glusterfs 3.4.5 on ubuntu, but how I get the ppa to install ? the default version on 14.04 is 3.4.2
13:32 B21956 joined #gluster
13:32 ninthBit todakure: if you are using semiosis's repository he has instructions on that here https://launchpad.net/~semiosis/+archive/ubuntu/ubuntu-glusterfs-3.4
13:32 glusterbot Title: ubuntu-glusterfs-3.4 : Louis Zuckerman (at launchpad.net)
13:33 ninthBit adding this ppa to your system
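The PPA route ninthBit points at boils down to a few commands; a sketch for Ubuntu 14.04, with the PPA name taken from the launchpad link above:

```shell
# add-apt-repository lives in software-properties-common on 14.04
sudo apt-get install -y software-properties-common
sudo add-apt-repository ppa:semiosis/ubuntu-glusterfs-3.4
sudo apt-get update
sudo apt-get install -y glusterfs-server
glusterfs --version   # should now report 3.4.5 instead of the stock 3.4.2
```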
13:34 todakure ninthBit: :D
13:38 DV__ joined #gluster
13:42 tdasilva joined #gluster
13:50 gmcwhistler joined #gluster
13:53 nshaikh joined #gluster
13:56 kanagaraj joined #gluster
14:05 theron joined #gluster
14:18 andreask joined #gluster
14:21 wushudoin joined #gluster
14:23 plarsen joined #gluster
14:23 mojibake joined #gluster
14:32 cristov joined #gluster
14:32 topshare joined #gluster
14:41 m0zes joined #gluster
14:42 SteveCooling joined #gluster
14:50 Thilam|work I would like to tune the cache settings in my gluster cluster
14:51 Thilam|work I have seen there is 3 parameters : performance.cache-max-file-size, performance.cache-min-file-size and performance.cache-size
14:51 Thilam|work to set the cache size
14:52 Thilam|work the two first is about write cache and the last for read cache (as I understand it)
14:52 Thilam|work I would like to understand how does it work
14:53 Thilam|work cache uses the RAM of the servers
14:54 Thilam|work If I put 4GB for performance.cache-size and I have 2 servers, I need 2GB of dedicated memory on each server ?
14:54 Thilam|work (it is set on the volume, not on the bricks)
14:54 Thilam|work (same question for cache-max-file, cache-min-file)
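For what it's worth, all three options belong to the io-cache translator, which (as far as I can tell) runs on the client side of the graph, so the RAM for performance.cache-size would come from each machine mounting the volume rather than being split across the brick servers; that reading is an assumption worth verifying. Setting them is per volume:

```shell
# Sketch; "myvol" is a placeholder volume name and the sizes are examples.
sudo gluster volume set myvol performance.cache-size 4GB
sudo gluster volume set myvol performance.cache-max-file-size 2MB
sudo gluster volume set myvol performance.cache-min-file-size 0
sudo gluster volume info myvol   # lists any options changed from defaults
```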
14:54 plarsen joined #gluster
14:57 Thilam|work is there some more doc than this page : http://gluster.org/community/documentation/index.php/Gluster_3.2:_Setting_Volume_Options ?
14:57 glusterbot Title: Gluster 3.2: Setting Volume Options - GlusterDocumentation (at gluster.org)
14:57 mojibake http://gluster.org/community/documentation/index.php/Gluster_3.2:_Tuning_Volume_Options
14:57 glusterbot Title: Gluster 3.2: Tuning Volume Options - GlusterDocumentation (at gluster.org)
14:58 mojibake I can't answer the question about if the cache is in the ram or not..
14:58 mojibake But the max lifetime according to the tuning parameters is only 60seconds. Default 1.
14:59 bene2 joined #gluster
14:59 mojibake But the min-file-size and max-file-size, the link i posted has good enough explanation.
15:00 mojibake I also had question previously myself if direct-io=false was needed for the performance cache to be use.
15:00 jobewan joined #gluster
15:01 mojibake Thilam|work: JoeJulian is your man. Wait for him to respond to your question if he is available.
15:04 Thilam|work ok thx
15:04 rotbeard joined #gluster
15:05 Thilam|work your link is the same I've posted just earlier
15:06 Thilam|work bug my real question is "what should I do the hardware of the server if I change these values"
15:06 Thilam|work in order changes to be realistic
15:17 hchiramm__ joined #gluster
15:20 plarsen joined #gluster
15:28 LHinson1 joined #gluster
15:38 daMaestro joined #gluster
15:39 hchiramm__ joined #gluster
15:53 sputnik13 joined #gluster
15:57 semiosis todakure: re: unable to add PPA, try the new PPA, ppa:gluster/glusterfs-3.4
15:57 hagarth joined #gluster
15:59 todakure semiosis: ok. fix in the web page :D
15:59 semiosis soon
16:00 semiosis the old PPA should still work.  no idea why it did not
16:00 todakure I try now but: root@node-01:~# add-apt-repository ppa:gluster/glusterfs-3.4 Cannot add PPA: 'ppa:gluster/glusterfs-3.4'. Please check that the PPA name or format is correct.
16:00 semiosis something is wrong with your system
16:00 semiosis hmm
16:01 todakure i need to go now, I back more later
16:01 semiosis ok
16:01 todakure 1h later
16:01 semiosis that command works fine for me
16:01 todakure :P
16:05 PeterA joined #gluster
16:06 PeterA i have issue on non-root user not able to cp or create read-only files on glusterfs over rpc/nfs
16:16 PeterA http://pastie.org/9489298
16:16 glusterbot Title: #9489298 - Pastie (at pastie.org)
16:16 PeterA keep throwing out error when try to do a cp on a 444 file
16:16 PeterA permission denied on OPEN
16:21 dtrainor joined #gluster
16:26 dtrainor joined #gluster
16:33 PeterA seems like i am hitting a bug 2046 and 2058 that failure to copy read-only file to gluster volume as non-root user
16:33 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=2046 medium, medium, ---, dkl, CLOSED CURRENTRELEASE, pam_console does not take arguments
16:33 PeterA which suppose to be fix on 3.1.1
16:34 PeterA seems like this bug is happening on 3.5.2?
16:34 semiosis oldbug 2046
16:34 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=763778 medium, low, 3.1.1, raghavendra, CLOSED DUPLICATE, tar xf operates different on gluster than other filesystems
16:34 semiosis oldbug 2058
16:34 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=763790 medium, high, 3.1.1, aavati, CLOSED CURRENTRELEASE, posix permission compliance error
16:34 PeterA ya
16:36 ndevos PeterA: how many groups does your non-root user have?
16:36 kumar joined #gluster
16:36 PeterA one
16:36 ndevos ah, that should be an easy case :)
16:37 PeterA to make this clear, i am having issue on cp read-only file to and in gluster volume over NFS
16:37 PeterA and i am mounting gfs on the nfs server
16:37 PeterA not using gluster nfs
16:37 PeterA and i wonder where should i set the access control on gluster volume
16:38 ndevos uh, you mount a gluster volume over glusterfs-fuse, and export that directory again?
16:38 PeterA export that over NFS
16:38 ndevos the kernel NFS server, or a userspace one?
16:38 PeterA userspace
16:38 PeterA ubuntu one
16:39 semiosis ??!!!
16:39 PeterA nfs-kernel-server
16:39 PeterA what? :)
16:39 semiosis yeah, that's not userspace
16:39 semiosis that's kernel
16:39 semiosis says right in the name
16:39 PeterA ya kernel
16:39 PeterA sorry
16:39 bene2 joined #gluster
16:39 ndevos well, the kernel nfs server and fuse do not really work together very well
16:39 semiosis i was ??!!! at the idea of a userspace ubuntu nfs server :)
16:39 PeterA haha sorry
16:40 PeterA i have been exhausted after a long day
16:40 PeterA long story short is the userspace on 3.5.1 been buggy
16:40 PeterA and i am trying to use the kernel one to proxy glusterfs to user
16:40 ndevos the fuse package contains a README.nfs with its limitation related to the kernel nfs server
16:41 PeterA hmm where?
16:41 ndevos the gluster/nfs server on 3.5.1 has nfs.drc enabled by default, that is indeed problematic and you should have a better experience if you disable that
16:41 PeterA i already did
16:41 PeterA for nfs.drv off
16:41 PeterA on all volume
16:41 PeterA but still problematic
16:41 ndevos /usr/share/doc/fuse-2.8.3/README.NFS on my RHEL6
16:42 semiosis ndevos: no fuse package for debs.  where is that file in the src tree?
16:42 semiosis oh you mean the system fuse package, not glsuterfs-fuse
16:42 semiosis ok
16:42 ndevos http://sourceforge.net/p/fuse/fuse/ci/master/tree/README.NFS
16:42 glusterbot Title: Filesystem in Userspace / fuse / [e3b7d4] /README.NFS (at sourceforge.net)
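The relevant limitation in that README.NFS is that a FUSE filesystem has no stable device number, so re-exporting a glusterfs-fuse mount through the kernel NFS server needs an explicit fsid= in the export. A sketch of the export entry; the path, network, and fsid value are all placeholders:

```shell
# Sketch: append a hypothetical export of the fuse mount, with an explicit
# fsid since FUSE filesystems have no stable device number to derive one.
echo '/mnt/gluster 192.168.0.0/24(rw,sync,no_subtree_check,fsid=7005)' \
    | sudo tee -a /etc/exports
sudo exportfs -ra   # reload the export table
```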
16:43 _dist joined #gluster
16:44 _dist I was wondering if anyone here can verify for me that it's essential for qemu to be compiled as a libgfapi client on the same source as the gluster version being used. I'm running into some odd stuff on a 1.7.1 compile of qemu (for libgfapi) but compile over 3.4.2 running on a 3.5.2 server
16:44 PeterA README.nfs is really not much…
16:44 ndevos PeterA: did you file a bug about the troubles you have with gluster/nfs, or is there an other existing one?
16:44 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
16:45 PeterA the problem is it seems works on the userspace nfs
16:45 PeterA altho is has other issue like the quota and setattr issue
16:45 PeterA that's why i am trying to work on the kernal nfs
16:45 PeterA and hitting this "old" bug with kernel nfs and glusterfs
16:47 ndevos PeterA: what setattr issue? that should not be a problem for gluster/nfs...
16:47 dtrainor joined #gluster
16:47 ndevos I'm not sure about quota, there were many bug-fixes with that
16:48 PeterA E [marker.c:2482:marker_setattr_cbk] 0-sas03-marker: Operation not permitted occurred during setattr of /TrafficPrDataFc01/TrafficReportsAgency/bse/trdtl_bse_20140820030000_613.bcp.gz
16:49 ndevos _dist: yes, it is highly recommended to use the same version of libgfapi where you compiled qemu against - a newer version of libgfapi when compiling against an older may work, but I doubt anyone tested that
16:50 plarsen joined #gluster
16:50 ndevos _dist: in future, there is a more strict dependency, the libgfapi.so will be versioned so that it is not possible to mix versions
16:50 PeterA i think _dist is also using NFS proxy with the kernel nfs?
16:52 ndevos PeterA: hmm, that setattr does not really show *why* the permission was denied, you would need to check the owner/uid/gid of file on the brick and the uid/gid on the client side
16:52 PeterA the file and owner are pretty generic like 644 and regular
16:52 ndevos PeterA: but that seems to be an issue on the brick side, so I would expect to see it when mounting over glusterfs-fuse and with nfs
16:53 PeterA the bigger problem is why the kernel NFS doesn't allow read-only for non-root user
16:53 zerick joined #gluster
17:13 plarsen joined #gluster
17:14 glusterbot New news from newglusterbugs: [Bug 1132105] Outdated glusterfs-hadoop Install Instructions <https://bugzilla.redhat.com/show_bug.cgi?id=1132105>
17:16 todakure semiosis: I'm back
17:19 LHinson joined #gluster
17:21 kshlm joined #gluster
17:31 LHinson1 joined #gluster
17:37 semiosis todakure: ok, not sure how to help though.  some problem with your system :(
17:37 clyons joined #gluster
17:38 todakure semiosis: the name of ppa is correct ? [ ppa:gluster/glusterfs-3.4 ]
17:38 jjahns thats not right
17:38 jjahns one sec
17:39 jjahns if you're doing ubuntu
17:39 jjahns its ppa:semiosis/ubuntu-glusterfs-3.4
17:40 jjahns on a side note - anyone getting libvirt file locking issues when trying to launch VMs onto gluster fuse?
17:43 kkeithley ,,(ppas)
17:43 glusterbot I do not know about 'ppas', but I do know about these similar topics: 'ppa'
17:43 kkeithley ,,(ppa)
17:43 glusterbot The official glusterfs packages for Ubuntu are available here: 3.4 stable: http://goo.gl/u33hy -- 3.5 stable: http://goo.gl/cVPqEH -- introducing QEMU with GlusterFS 3.4 support: http://goo.gl/7I8WN4
17:44 glusterbot New news from newglusterbugs: [Bug 1132116] cli: -fsanitize heap-use-after-free error <https://bugzilla.redhat.com/show_bug.cgi?id=1132116>
17:45 todakure Cannot add PPA: 'ppa:semiosis/ubuntu-glusterfs-3.4'. Please check that the PPA name or format is correct.
17:45 todakure Cannot add PPA: 'ppa:gluster/glusterfs-3.4'. Please check that the PPA name or format is correct.
18:01 ctria joined #gluster
18:08 ira joined #gluster
18:09 nbalachandran joined #gluster
18:11 semiosis jjahns: thats the new PPA
18:11 semiosis jjahns: i'll make the announcement soon, still a few things i want to do, but the pacakges are the same
18:11 semiosis todakure: i tried the same command and it works here
18:12 todakure =O.o=
18:13 semiosis todakure: what version of ubuntu are you using?
18:13 todakure 14.04
18:14 semiosis hmm strange, i tried with that & 12.04 and both worked for me
18:14 semiosis todakure: can you try copying & pasting this, ppa:gluster/glusterfs-3.4 instead of retyping?
18:14 PsionTheory joined #gluster
18:18 todakure wait. I check my network config
18:23 PeterA how do i enable debug on gluster volume?
18:23 semiosis set the ,,(options)
18:23 glusterbot See config options and their defaults with 'gluster volume set help'; you can see the current value of an option, if it has been modified, with 'gluster volume info'; see also this page about undocumented options: http://goo.gl/mIAe4E
18:24 semiosis the diagnostics options
18:25 PeterA key = "diagnostics.count-fop-hits" ??
18:25 semiosis no, the log level ones
18:26 PeterA i m sorry….may i know which one?
18:26 PeterA i cant find it on the page...
18:26 PeterA i tried diagnostics.brick-log-level
18:27 PeterA diagnostics.brick-sys-log-level
18:27 PeterA and all those are set at INFO
18:27 semiosis see the first part of that ,,(options) message, 'gluster volume set help'
18:27 glusterbot See config options and their defaults with 'gluster volume set help'; you can see the current value of an option, if it has been modified, with 'gluster volume info'; see also this page about undocumented options: http://goo.gl/mIAe4E
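The knobs semiosis is pointing at are the per-volume diagnostics log-level options; a sketch of turning them up and back down (the volume name is a placeholder, and DEBUG is very verbose):

```shell
sudo gluster volume set myvol diagnostics.brick-log-level DEBUG
sudo gluster volume set myvol diagnostics.client-log-level DEBUG
# ...reproduce the problem, read the logs under /var/log/glusterfs/ on
# the bricks and clients, then drop back to the default level:
sudo gluster volume set myvol diagnostics.brick-log-level INFO
sudo gluster volume set myvol diagnostics.client-log-level INFO
```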
18:28 mbukatov joined #gluster
18:28 nbalachandran joined #gluster
18:33 jobewan joined #gluster
18:33 andreask joined #gluster
18:35 LHinson joined #gluster
18:37 semiosis todakure: did you solve it?
18:37 todakure my DNS error
18:38 semiosis ok glad you figured it out
18:38 todakure my too :D
18:39 LHinson joined #gluster
18:41 LHinson1 joined #gluster
18:42 jjahns i wish i could figure out why i have these connection issue with virsh
18:44 ndevos jjahns: have you read http://www.gluster.org/community/documentation/index.php/Libgfapi_with_qemu_libvirt ?
18:44 glusterbot Title: Libgfapi with qemu libvirt - GlusterDocumentation (at www.gluster.org)
18:44 jjahns would like to set that up - but i think my problems are related to something else
18:45 ndevos jjahns: it explains how to allow unprivileged ports to connect to glusterd and the bricks, that is the most important part
18:50 jjahns right now i have fuse and that appears to be working just fine - but the problem i'm running into is something else
18:50 jjahns i think
18:55 _Bryan_ joined #gluster
18:55 plarsen joined #gluster
19:03 hchiramm__ joined #gluster
19:20 plarsen joined #gluster
19:22 _dist does gluster 3.5x run on a different port or prot by default from 3.4.2 ?
19:23 semiosis same port, same prot
19:23 _dist semiosis: thanks, I'm trying to get to the bottom of why qm-server thinks it's conncetions are timing out when the VM starts and works fine
19:30 _dist @seen JoeJulian
19:30 glusterbot _dist: JoeJulian was last seen in #gluster 4 days, 0 hours, 5 minutes, and 27 seconds ago: <JoeJulian> tobias-_: It's still the same with 3.4. Just make sure the populated brick is the left-hand brick when defining the volume.
19:47 LHinson joined #gluster
20:05 sputnik13 joined #gluster
20:10 JoeJulian boo
20:18 JoeJulian @lucky qm-server
20:18 glusterbot JoeJulian: http://www.cisco.com/c/dam/en/us/td/docs/voice_ip_comm/cust_contact/contact_center/workforce_optimization/qm_9x/installation/guide/qm90-installation-user-guide-cisco.pdf
20:19 JoeJulian _dist: no clue... doubt that link is of any value either...
20:19 semiosis probably this, https://pve.proxmox.com/wiki/Qm_manual
20:19 glusterbot Title: Qm manual - Proxmox VE (at pve.proxmox.com)
20:20 JoeJulian Oh, well there's the problem...
20:21 JoeJulian left #gluster
20:21 JoeJulian joined #gluster
20:27 andreask joined #gluster
20:31 aronwp joined #gluster
20:36 aronwp hey guys, Need some help. Upon reboot nginx fails to start because the gluster mount is not fully loaded even though gluster is set to start before nginx
20:37 aronwp how can i start gluster a little earlier or delay nginx a little
20:39 aronwp here is my boot log http://pastie.org/9489866
20:39 glusterbot Title: #9489866 - Pastie (at pastie.org)
20:40 aronwp even though nginx fails on boot it starts right up if I wait a little and start it manually.
20:40 semiosis aronwp: what distro?
20:40 aronwp Ubuntu 14.04.1 LTS
20:41 semiosis mounting glusterfs from localhost or remote server?
20:44 aronwp localhost on both machines in front of a load balancer
20:44 daMaestro joined #gluster
20:44 semiosis this is going to be tricky
20:45 semiosis what do you mean 'nginx fails to start'?
20:45 semiosis does it just die completely?  or does it start up but serve errors?
20:46 aronwp upon starting nginx the file/directory is not found since from what I believe the storage pool is not fully mounted so it fails to start
20:47 aronwp I'm using gluster for file storage for WordPress websites
20:48 aronwp so when nginx fails to find the directory it fails to start.
20:48 semiosis aronwp: can you just make the directory on your root partition, so nginx can start, then when the mount happens (it just takes some time after boot) it will override the root dir
20:49 semiosis if that works, probably the best option
20:53 aronwp not sure I understand. My mount is /storage-pool/ in the root directory. inside I have my /www/ folder with about 35 individual site folders.
20:53 todakure left #gluster
20:53 systemonkey joined #gluster
20:53 aronwp My nginx file points to /var/www/domain.com/
20:54 semiosis and your gluster mount is /var/www right?
20:54 semiosis so unmount the gluster mount, then mkdir /var/www/domain.com
20:54 semiosis so now domain.com exists both on your root partition & on the gluster volume
20:54 semiosis so nginx can start up whether gluster is mounted or not
20:55 aronwp no, I have a www symlink in /var/ pointing to /storage-pool/www/
20:55 semiosis ok fine, same deal
20:56 semiosis do you get what i'm saying?
20:58 aronwp ya i understand but how would the mount take over after it mounts
20:58 semiosis it just works that way
20:59 semiosis whatever is under a mount point on your root partition gets hidden by the new filesystem when you mount
21:00 semiosis try it
21:01 aronwp ok I see that working if the actual mount is /var/www/ but I have a symlink in that directory to the mount. can i have a www folder and a www symlink in the same directory
21:02 semiosis no
21:02 aronwp I gues what I can do is change all the nginx files to to go to the storage pool and do what your saying
21:02 semiosis no
21:02 aronwp or am i completely missing your point =(
21:03 semiosis if your gluster mount is /storage-pool, then unmount the gluster mount and do mkdir -p /storage-pool/www/domain.com or whatever
21:03 aronwp ah
21:03 semiosis that's regardless of any nginx config or symlinks in var
21:03 aronwp i feel very slow, now that makes more sence
21:03 semiosis hah np
21:03 plarsen joined #gluster
21:03 semiosis try it & let me know
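semiosis's suggestion in shell form; a sketch assuming the mount point described above and a placeholder volume name ("myvol"):

```shell
# With the volume unmounted, create shadow copies of the docroots on the
# root filesystem, directly under the mount point:
sudo umount /storage-pool
sudo mkdir -p /storage-pool/www/domain.com

# Remounting hides the shadow directories behind the real gluster contents;
# nginx can now start whether or not the mount has landed yet.
sudo mount -t glusterfs localhost:/myvol /storage-pool
```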
21:04 aronwp trying it now
21:11 _dist JoeJulian: I agree with your "there's the problem" :) but honestly, the only alternative is CLI. However, I'm certain now it's purely a qm problem
21:13 _dist so because I can't hot add bricks, I've got an 8 step plan to get my data on a new replicate volume that will be my prod _until_ I can get proxmox to get 3.5.2 working with their qm
21:15 _dist (which in the end will leave me with the healing issue, that afaik is only cosmetic and I use getfattr to see the truth
21:19 semiosis aronwp: progress?
21:20 aronwp few more min
21:22 aronwp I need to do it for all domains
21:25 aronwp along with directories for sites with ssl and log files all called in the nginx files but this should defiantly work. I tested with a file and it disapears when the mount is mounted and comes back when unmounted
21:25 aronwp pretty cool
21:26 semiosis it's not the prettiest solution, but better than the alternatives imo
21:27 semiosis you're storing config files (ssl) and log files in the same gluster volume as your web content?
21:28 semiosis that seems unusual
21:28 marmalodak joined #gluster
21:31 plarsen joined #gluster
21:31 aronwp ya /storage-pool/domain.com/logs   /storage-pool/domain.com/ssl (if needed) /storage-pool/domain.com/public_html/
21:33 aronwp the only issue I se is every time I create a new site I have to unmount on each server and create the domain then mount again
21:33 aronwp kinda sucks, what would the alternative be?
21:34 aronwp I was thinking of switching from gluster to just a basic nfs file server but I think gluster is a better choice if it works better
21:37 semiosis ideally nginx could be made to not die
21:38 semiosis just serve errors when a file/dir is not found
21:38 semiosis idk nginx tho
21:45 agen7seven joined #gluster
21:59 sputnik13 joined #gluster
22:09 sputnik13 joined #gluster
22:19 semiosis aronwp: for example, on apache if DocumentRoot doesn't exist, server logs a warning & serves 404 errors to any reqs for that site.  later if the dir appears, requests magically start working OK
22:20 semiosis i'd hope nginx can do the same
22:21 aronwp that would be ideal
22:21 aronwp I found another workaround too
22:22 aronwp i added sleep 30 && service nginx start to the rc.local file and even though nginx fails the 1st time at boot it starts after the 30 second delay
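A slightly sturdier variant of that rc.local hack is to poll for the mount instead of sleeping a fixed 30 seconds; a sketch that checks /proc/mounts (the path and timeout are assumptions):

```shell
#!/bin/sh
# Return 0 iff $1 is currently a mount point, per /proc/mounts (field 2).
is_mounted() {
    awk -v m="$1" '$2 == m { found = 1 } END { exit !found }' /proc/mounts
}

# Poll for the mount point ($1) for up to $2 seconds.
wait_for_mount() {
    i=0
    while [ "$i" -lt "$2" ]; do
        is_mounted "$1" && return 0
        i=$((i + 1))
        sleep 1
    done
    return 1
}

# rc.local usage sketch, using the paths from the conversation:
# wait_for_mount /storage-pool 60 && service nginx start
```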
22:24 semiosis well there you go
22:25 aronwp still seams hacky but i think this will be easier to maintain in the long run
22:25 semiosis makes sense
22:43 semiosis i'll be afk most of the time until next week.  leave me a message with glusterbot or on twitter.
22:53 aronwp semiosis thanks for your help
22:59 aronwp semiosis: thanks
23:10 plarsen joined #gluster
23:12 atrius joined #gluster
23:40 portante joined #gluster
23:43 DJCl34n joined #gluster
23:44 DJClean joined #gluster
23:45 georgeh|workstat joined #gluster
23:46 MacWinner joined #gluster
23:47 ndevos joined #gluster
23:47 ndevos joined #gluster
23:52 Rydekull_ joined #gluster
23:52 wushudoin joined #gluster
23:52 Rydekull joined #gluster
23:53 T0aD joined #gluster
23:54 Jamoflaw joined #gluster