
IRC log for #gluster, 2014-07-18


All times are shown in UTC.

Time Nick Message
00:17 JoeJulian gEEbusT: yes
00:22 anotheral after I stop the glusterfs service on a replicated volume, is it safe/OK to kill any leftover glusterfs/glusterfsd processes?
00:29 penglish left #gluster
00:43 cjanbanan joined #gluster
00:53 rotbeard joined #gluster
00:56 hchiramm__ joined #gluster
00:57 JoeJulian anotheral: yes
00:58 JoeJulian @processes
00:58 glusterbot JoeJulian: The GlusterFS core uses three process names: glusterd (management daemon, one per server); glusterfsd (brick export daemon, one per brick); glusterfs (FUSE client, one per client mount point; also NFS daemon, one per server). There are also two auxiliary processes: gsyncd (for geo-replication) and glustershd (for automatic self-heal).
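For reference, a minimal sketch of listing those processes on a server (standard tools only; on older distributions pgrep -a may need to be pgrep -l):

    pgrep -a glusterd      # management daemon, one per server
    pgrep -a glusterfsd    # brick export daemons, one per brick
    pgrep -a glusterfs     # FUSE mounts, the NFS server and glustershd all run under this name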
01:02 gEEbusT JoeJulian: it looks like user xattrs are working for me, but security ones aren't. I'm trying to get samba + acl_xattr working, which uses security.NTACL - whenever I try writing to this with setfattr I get an "Operation not supported" error
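A small sketch for comparing the two xattr namespaces on a mount (the mount point and file name are placeholders, and the 0s... value is placeholder base64, not a real NT ACL):

    touch /mnt/gluster/testfile                                   # placeholder path
    setfattr -n user.test -v hello /mnt/gluster/testfile          # user namespace, reported working above
    setfattr -n security.NTACL -v 0sAAAA /mnt/gluster/testfile    # security namespace, the call that fails
    getfattr -d -m - /mnt/gluster/testfile                        # dump all visible xattrs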
01:04 anotheral also JoeJulian: gluster is a fantastic tool, and I'm really grateful for the work you're doing.
01:05 elyograg joined #gluster
01:05 elyograg I cannot get gluster NFS mounted on Solaris. In the past we've been able to get it working by doing an NFS mount on a Linux client, but that isn't helping.
01:10 elyograg http://fpaste.org/118896/14056458/
01:10 glusterbot Title: #118896 Fedora Project Pastebin (at fpaste.org)
01:11 elyograg no matter how I try to mount, I get output like this:
01:12 elyograg nfs mount: slc01nas-nc.REDACTED.com: : RPC: Program not registered
01:12 elyograg nfs mount: retrying: /mdfs
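Gluster's built-in NFS server only speaks NFSv3 over TCP, and "RPC: Program not registered" usually means the NFS program is not registered with the server's portmapper (for instance because the gluster NFS daemon is not running). A sketch of the usual checks from the Solaris client, using the hostname and export from the paste above:

    rpcinfo -p slc01nas-nc.REDACTED.com    # should list nfs (program 100003) v3 and mountd (100005)
    # force v3 over TCP explicitly when mounting
    mount -F nfs -o vers=3,proto=tcp slc01nas-nc.REDACTED.com:/mdfs /mdfs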
01:13 harish joined #gluster
01:31 sputnik1_ joined #gluster
01:35 rotbeard joined #gluster
01:35 sputnik1_ joined #gluster
01:39 sputnik1_ joined #gluster
01:49 _Bryan_ joined #gluster
01:54 wgao joined #gluster
02:06 haomaiwa_ joined #gluster
02:09 sputnik1_ joined #gluster
02:11 samkottler joined #gluster
02:23 nishanth joined #gluster
02:57 haomaiw__ joined #gluster
02:57 hagarth joined #gluster
03:01 coredump joined #gluster
03:03 bharata-rao joined #gluster
03:05 haomaiwa_ joined #gluster
03:19 RioS2 joined #gluster
03:31 bala joined #gluster
03:34 gildub joined #gluster
03:38 shubhendu joined #gluster
03:43 itisravi joined #gluster
03:49 atalur joined #gluster
03:55 RioS2 joined #gluster
03:56 hchiramm__ joined #gluster
04:07 RioS2 joined #gluster
04:09 RameshN joined #gluster
04:12 recidive joined #gluster
04:12 haomaiw__ joined #gluster
04:15 kanagaraj joined #gluster
04:15 spandit joined #gluster
04:15 cjanbanan joined #gluster
04:17 ndarshan joined #gluster
04:17 nishanth joined #gluster
04:18 rjoseph joined #gluster
04:20 kshlm joined #gluster
04:24 gildub Hi, I'm testing glusterfs 3.5.1 on RHEL 7 for cinder. It seems to be working fine, or at least that's what I'm trying to find out. Anyone able to help?
04:26 gildub Basically I've created volumes and snapshots but I can't see the disk usage growing on either the cinder server or the gluster servers
04:29 Rafi_kc joined #gluster
04:29 kumar joined #gluster
04:31 gildub gildub, well, df -h doesn't reflect the real sizes, but inside the mounted cinder volume, the volumes are there with the right size.
04:31 gildub gildub, on the gluster server side, though, I don't see the fs growing, even when using glusterfs-tools with glusterdf
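If the Cinder volumes are created as sparse files (common with the GlusterFS Cinder driver), df on the bricks only grows as blocks are actually written, which would explain the observation above. A quick check, with a placeholder brick path:

    # /export/brick1 is a placeholder for the real brick directory
    du -h /export/brick1/volume-*                    # blocks actually allocated, which is what df reflects
    du -h --apparent-size /export/brick1/volume-*    # nominal size of the (possibly sparse) volume files
    df -h /export/brick1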
04:32 anoopcs joined #gluster
04:32 jiffin joined #gluster
04:33 anoopcs joined #gluster
04:38 anoopcs joined #gluster
04:43 anoopcs joined #gluster
04:44 rejy joined #gluster
04:44 ppai joined #gluster
04:46 anoopcs left #gluster
04:49 anoopcs joined #gluster
04:52 anoopcs joined #gluster
04:56 ramteid joined #gluster
04:56 jiffin left #gluster
04:57 raghu joined #gluster
05:00 atinmu joined #gluster
05:06 RameshN joined #gluster
05:07 anoopcs joined #gluster
05:11 kdhananjay joined #gluster
05:16 haomaiwa_ joined #gluster
05:20 aravindavk joined #gluster
05:22 7JTAAEF84 joined #gluster
05:22 karnan joined #gluster
05:23 RioS2 joined #gluster
05:25 prasanth_ joined #gluster
05:34 nbalachandran joined #gluster
05:38 sahina joined #gluster
05:40 hagarth joined #gluster
05:43 psharma joined #gluster
05:43 cjanbanan joined #gluster
05:53 RioS2 joined #gluster
05:53 RioS2 joined #gluster
05:56 lalatenduM joined #gluster
05:57 atalur joined #gluster
06:06 vpshastry joined #gluster
06:13 jiffin joined #gluster
06:14 rastar joined #gluster
06:15 cjanbanan joined #gluster
06:18 saurabh joined #gluster
06:21 LebedevRI joined #gluster
06:32 meghanam joined #gluster
06:40 rastar joined #gluster
06:44 xavih joined #gluster
06:50 harish__ joined #gluster
06:53 ricky-ti1 joined #gluster
07:01 hybrid512 joined #gluster
07:04 ctria joined #gluster
07:07 keytab joined #gluster
07:11 LebedevRI joined #gluster
07:14 andreask joined #gluster
07:15 cjanbanan joined #gluster
07:19 aravindavk joined #gluster
07:24 glusterbot New news from newglusterbugs: [Bug 1121014] Spurious failures observed in few of the test cases <https://bugzilla.redhat.com/show_bug.cgi?id=1121014>
07:32 TvL2386 joined #gluster
07:38 hybrid512 joined #gluster
07:45 rjoseph joined #gluster
07:57 vu joined #gluster
08:02 aravindavk joined #gluster
08:09 vu joined #gluster
08:16 cjanbanan joined #gluster
08:21 andreask joined #gluster
08:22 Pupeno joined #gluster
08:27 liquidat joined #gluster
08:32 Slashman joined #gluster
08:41 vu joined #gluster
08:42 anoopcs joined #gluster
08:48 rastar joined #gluster
08:54 RameshN joined #gluster
09:09 Philambdo joined #gluster
09:15 haomaiwa_ joined #gluster
09:21 ndarshan joined #gluster
09:22 karnan joined #gluster
09:22 sahina joined #gluster
09:23 shubhendu joined #gluster
09:23 nishanth joined #gluster
09:32 R0ok_ joined #gluster
09:33 giannello joined #gluster
09:34 glustu joined #gluster
09:34 glustu hi
09:34 glusterbot glustu: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
09:34 glustu will gluster support a dynamic stripe count?
09:35 wjp left #gluster
09:35 glustu basically, can I change the stripe count as I add subvolumes/bricks to the volume?
09:36 ppai joined #gluster
09:36 LebedevRI joined #gluster
09:40 aravindavk joined #gluster
09:43 eseyman joined #gluster
09:48 ekuric joined #gluster
09:49 glustu can I change the stripe count as I add subvolumes/bricks to the volume
09:49 glustu ?
09:49 glustu help would be appreciated
09:56 rjoseph joined #gluster
09:58 xavih joined #gluster
10:08 stickyboy No. of entries healed: 10415340
10:08 stickyboy Yay?
10:08 stickyboy Though sometimes I wonder if it will ever finish...
10:14 haomaiw__ joined #gluster
10:15 JoeJulian ~stripe | glustu
10:15 glusterbot glustu: Please see http://joejulian.name/blog/should-i-use-stripe-on-glusterfs/ about stripe volumes.
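The blog post above recommends against stripe; the stripe count of an existing striped volume cannot be changed, bricks can only be added in whole multiples of it. A sketch of the usual alternative, growing a plain distributed volume and rebalancing (volume and brick names are placeholders, not from this channel):

    gluster volume add-brick myvol server3:/export/brick1    # "myvol" and the brick path are placeholders
    gluster volume rebalance myvol start
    gluster volume rebalance myvol status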
10:19 cjanbanan joined #gluster
10:24 glusterbot New news from newglusterbugs: [Bug 1100262] info file missing from /var/lib/glusterd/vols/ . Causes crash <https://bugzilla.redhat.com/show_bug.cgi?id=1100262> || [Bug 1121072] [Dist-geo-rep] : In a cascaded setup, after hardlink sync, slave level 2 volume has sticky bit files found on mount-point. <https://bugzilla.redhat.com/show_bug.cgi?id=1121072>
10:25 sputnik13 joined #gluster
10:35 shubhendu joined #gluster
10:35 nishanth joined #gluster
10:36 ndarshan joined #gluster
10:40 xavih joined #gluster
10:40 cookednoodles joined #gluster
10:45 cjanbanan joined #gluster
10:46 glustu left #gluster
10:47 andreask left #gluster
10:53 hagarth joined #gluster
11:16 cjanbanan joined #gluster
11:26 gildub joined #gluster
11:32 diegows joined #gluster
11:43 morse joined #gluster
11:46 rjoseph joined #gluster
11:49 deepakcs joined #gluster
11:54 bharata-rao joined #gluster
12:00 andreask joined #gluster
12:01 Slashman_ joined #gluster
12:06 andreask joined #gluster
12:17 edward1 joined #gluster
12:29 chirino joined #gluster
12:30 bene2 joined #gluster
12:30 harish__ joined #gluster
12:41 Pupeno_ joined #gluster
12:46 xavih joined #gluster
12:52 eclectic joined #gluster
12:52 kanagaraj joined #gluster
13:14 bit4man joined #gluster
13:24 xavih joined #gluster
13:32 tdasilva joined #gluster
13:33 sahina joined #gluster
13:41 plarsen joined #gluster
13:47 atalur joined #gluster
13:52 chirino joined #gluster
14:00 nueces joined #gluster
14:09 wushudoin joined #gluster
14:12 mortuar joined #gluster
14:19 recidive joined #gluster
14:33 Pupeno joined #gluster
14:34 TheSov joined #gluster
14:34 cjanbanan joined #gluster
14:43 rotbeard joined #gluster
14:43 cjanbanan joined #gluster
15:00 hagarth joined #gluster
15:01 chirino joined #gluster
15:15 lmickh joined #gluster
15:16 jobewan joined #gluster
15:16 cjanbanan joined #gluster
15:29 vu joined #gluster
15:44 nbalachandran joined #gluster
15:51 MacWinner joined #gluster
15:58 ndk joined #gluster
16:00 vu joined #gluster
16:01 andreask joined #gluster
16:02 daMaestro joined #gluster
16:06 psharma joined #gluster
16:11 harish__ joined #gluster
16:13 anoopcs joined #gluster
16:20 daMaestro joined #gluster
16:27 Kakila joined #gluster
16:28 Kakila Hi! I have this apparently recurrent problem with glusterFS 3.4.2 (server) and glusterFS 3.3 (client). I can mount the volume on one server but I cannot on the others.
16:29 Kakila I have set auth.allow to allow all the IPs of the other servers
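For reference, a hedged sketch of how auth.allow is typically set and verified (the volume name is a placeholder; the addresses mirror the 157.193.* range discussed below):

    # "myvol" is a placeholder; wildcards and comma-separated address lists are accepted
    gluster volume set myvol auth.allow "157.193.*"
    gluster volume info myvol | grep auth.allow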
16:32 nage joined #gluster
16:34 japuzzo joined #gluster
16:48 vimal joined #gluster
16:54 plarsen joined #gluster
16:56 Kakila here is the log file
16:56 Kakila http://paste.debian.net/110427/
16:56 glusterbot Title: debian Pastezone (at paste.debian.net)
16:57 bit4man joined #gluster
16:58 Peter3 joined #gluster
16:58 Kakila does anyone have an idea?
16:59 Kakila I can mount it if I go to a machine that is not a server in the volume
17:00 vpshastry joined #gluster
17:02 cultavix joined #gluster
17:03 sijis left #gluster
17:07 anoopcs joined #gluster
17:16 Peter3 I always get a lock message when multiple volume quota list commands run. any way to prevent that?
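Those lock messages typically come from concurrent gluster CLI commands competing for glusterd's cluster-wide lock; one workaround, sketched here with placeholder volume names and lock path, is simply to serialize the commands, e.g. with flock:

    # vol1/vol2/vol3 and the lock file path are placeholders
    lock=/var/tmp/gluster-cli.lock
    touch "$lock"
    for vol in vol1 vol2 vol3; do
        flock "$lock" gluster volume quota "$vol" list
    done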
17:21 bit4man joined #gluster
17:27 plarsen joined #gluster
17:28 sputnik1_ joined #gluster
17:47 Kakila I can mount from any computer that is not in the trusted pool and is not the machine where I run the peer probe command
18:20 nueces joined #gluster
18:28 edwardm61 joined #gluster
18:29 Kakila joined #gluster
18:36 samppah joined #gluster
18:42 Kakila any idea?
18:45 mkent joined #gluster
18:57 hagarth joined #gluster
19:02 cultavix joined #gluster
19:08 stickyboy joined #gluster
19:09 zerick joined #gluster
19:11 diegows joined #gluster
19:30 Kakila Hi! I have this apparently recurrent problem with glusterFS 3.4.2 (server) and glusterFS 3.3 (client). I can mount the volume on one server but I cannot on the others.
19:30 Kakila I have set auth.allow to allow all the IPs of the other servers
19:30 Kakila logs: http://paste.debian.net/110427/
19:30 glusterbot Title: debian Pastezone (at paste.debian.net)
19:36 coredump joined #gluster
19:40 ndk joined #gluster
19:41 ysh joined #gluster
19:46 diegows joined #gluster
19:50 ysh read a blog post about using gluster instead of nfs for a load-balanced website. does it have any advantages over nfs in such an environment?
19:55 wushudoin joined #gluster
19:55 eclectic joined #gluster
19:55 Peter1 joined #gluster
19:55 mortuar joined #gluster
19:55 japuzzo joined #gluster
19:55 nage joined #gluster
19:56 elico joined #gluster
19:56 chirino joined #gluster
19:57 jobewan joined #gluster
20:06 jrcresawn joined #gluster
20:13 cultavix joined #gluster
20:16 JoeJulian ~pasteinfo | Kakila
20:17 glusterbot Kakila: Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
20:17 Kakila JoeJulian: ok
20:19 Kakila JoeJulian: http://fpaste.org/119148/
20:19 glusterbot Title: #119148 Fedora Project Pastebin (at fpaste.org)
20:19 Kakila all clsnn* machines have IPs 157.193.*
20:20 JoeJulian Odd. Are you sure all the servers are running the same version?
20:21 Kakila yes, they were all installed from Ubuntu 14.04 packages
20:21 Kakila thing is
20:21 JoeJulian I thought they were 3.4.2?
20:21 Kakila from any machine in 157.193.* that is not one in the trusted pool, I can mount
20:22 Kakila yes, that is what I get when I do glusterfs --version
20:23 JoeJulian Maybe you should just make sure they're all on the same version from the ,,(ppa). Last I knew Ubuntu was shipping 3.2.
20:23 glusterbot The official glusterfs packages for Ubuntu are available here: 3.4 stable: http://goo.gl/u33hy -- 3.5 stable: http://goo.gl/cVPqEH -- introducing QEMU with GlusterFS 3.4 support: http://goo.gl/7I8WN4
20:24 Kakila I can run the version check on all the machines. I will give you the output
20:29 Kakila JoeJulian: glusterfs 3.4.2 built on Jan 14 2014 18:05:35
20:29 Kakila on every one of them
20:33 Kakila the point is that everything works fine except when mounting on the servers, apart from thepiratebay (where peer probe was run), which works fine too. The logs I pasted before show an authentication issue
20:42 semiosis JoeJulian: trusty has 3.4.2 in universe
20:43 harish__ joined #gluster
20:44 semiosis ,,(brick naming)
20:44 glusterbot http://gluster.helpshiftcrm.com/q/how-should-i-name-bricks-what-does-server-ext1-mean/
20:45 sputnik1_ joined #gluster
20:47 JoeJulian yeah, I saw that auth issue. I've never seen that before.
20:50 JoeJulian I'm about to start a 6 hour drive, so I don't think I'll have time to dig through the source and figure that out today.
20:57 Peter1 any idea why I got this error: 0-cli: Failed to get quota limits for abd01245-73e1-4ef6-aba6-dc087cf0bccd
20:57 Peter1 and what would be abd01245-73e1-4ef6-aba6-dc087cf0bccd ??
20:57 Peter1 it matches none of the volume IDs
21:00 mAd-1 Hi folks, anyone running 3.5.1 on Solaris 10?
21:02 Kakila joined #gluster
21:04 Kakila JoeJulian: ok, no problem. Shall I comment in some forum in case anybody wants to check what the issue is?
21:25 Kakila a simple question: when should the permissions to write to the volume (for example for the users group) be set?
21:25 Kakila and how
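One common approach, sketched here with placeholder server, volume and mount-point names: set ownership and permissions once on the root of the volume through any client mount, after which every client sees them:

    # server1:/myvol and /mnt/myvol are placeholders; "users" is the group from the question
    mount -t glusterfs server1:/myvol /mnt/myvol
    chgrp users /mnt/myvol
    chmod 2775 /mnt/myvol    # group write plus setgid so new files inherit the group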
21:26 Peter1 how do I know who is trying to access a file or directory that no longer exists?
21:26 Peter1 http://pastie.org/9403564
21:26 glusterbot Title: #9403564 - Pastie (at pastie.org)
21:27 bala joined #gluster
21:29 bala1 joined #gluster
21:31 semiosis Peter1: lsof?
21:31 semiosis well, never mind
21:31 Peter1 huh?
21:31 Peter1 the problem is the file is already gone?
21:31 semiosis right
21:31 Peter1 and how do I translate the gfid to the file that is being accessed?
21:32 semiosis if the file exists, you can use the ,,(gfid resolver)
21:32 glusterbot https://gist.github.com/4392640
21:32 semiosis but if it's gone, it's gone
21:32 semiosis pretty sure gfid are random
21:32 semiosis randomish
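A minimal by-hand sketch of the same idea as the linked resolver (an inode lookup on the brick), using the brick path and gfid from this conversation; it only works while the file still exists, and for directories the .glusterfs entry is a symlink that can simply be read with readlink:

    gfid=abd01245-73e1-4ef6-aba6-dc087cf0bccd
    brick=/brick02
    # for regular files the .glusterfs entry is a hardlink, so find the other link by inode number
    inum=$(stat -c %i "$brick/.glusterfs/${gfid:0:2}/${gfid:2:2}/$gfid")
    find "$brick" -inum "$inum" -not -path "$brick/.glusterfs/*"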
21:33 Peter1 I'm SOL again?
21:33 Peter1 the error message keeps popping up... :(
21:33 Peter1 how can I tell which remote client is trying to access it?
21:33 Peter1 i only have NFS
21:35 Peter1 ./gfid-resolver.sh /brick02 abd01245-73e1-4ef6-aba6-dc087cf0bccd
21:35 Peter1 abd01245-73e1-4ef6-aba6-dc087cf0bccd==File:ls: cannot access /brick02/.glusterfs/ab/d0/abd01245-73e1-4ef6-aba6-dc087cf0bccd: No such file or directory
21:35 Peter1 find: invalid argument `!' to `-inum'
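There is no direct way to map that log message to a particular NFS client, but a best-effort sketch of narrowing down the candidates from the gluster server (assumes the built-in gluster NFS server is in use):

    showmount -a localhost       # hosts currently holding NFS mounts, if the mount rmtab is maintained
    ss -tnp | grep -i gluster    # established TCP connections into the gluster processes, including NFS clients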
21:57 Peter5 joined #gluster
22:02 n0de joined #gluster
22:02 sputnik1_ joined #gluster
22:06 recidive joined #gluster
22:14 MacWinner joined #gluster
22:17 jtdavies joined #gluster
22:24 siel joined #gluster
22:28 bala joined #gluster
22:44 bala joined #gluster
23:11 plarsen joined #gluster
