
IRC log for #gluster, 2013-05-20


All times shown according to UTC.

Time Nick Message
00:08 yinyin joined #gluster
00:14 dustint joined #gluster
01:04 nightwalk joined #gluster
01:10 majeff joined #gluster
01:26 alexturner joined #gluster
01:26 alexturner Howdy all
01:27 robos joined #gluster
01:43 atrius_ joined #gluster
02:06 zwu joined #gluster
02:15 alexturner I'm getting this consistently on all my machines and i feel like i've tried everything
02:15 alexturner [2013-05-20 06:14:51.381516] W [socket.c:1494:__socket_proto_state_machine] 0-socket.management: reading from socket failed. Error (Transport endpoint is not connected), peer (10.2.12.1:956)
02:15 glusterbot alexturner: That's just a spurious message which can be safely ignored.
02:16 alexturner Haha ^ clever. Though I'm unable to account any mount
02:17 alexturner http://hastebin.com/bufetewaho.hs
02:17 glusterbot Title: hastebin (at hastebin.com)
02:19 harish joined #gluster
02:54 bharata joined #gluster
02:55 LLckfan joined #gluster
02:55 LLckfan Does any1 know how to stop Shockwave flash from crashing? I have uninstalled both Flash and my browser (Chrome), installed both from a fresh download, and scanned my computer (come up clean). Everything is updated
03:23 MrNaviPacho joined #gluster
03:41 majeff joined #gluster
03:42 edong23 joined #gluster
03:44 kshlm joined #gluster
04:05 shylesh joined #gluster
04:11 majeff joined #gluster
04:14 edong23 joined #gluster
04:17 sgowda joined #gluster
04:30 saurabh joined #gluster
04:32 anands joined #gluster
04:45 ngoswami joined #gluster
04:46 hagarth joined #gluster
04:50 aravindavk joined #gluster
04:54 vpshastry joined #gluster
04:55 deepakcs joined #gluster
05:11 vimal joined #gluster
05:29 mohankumar joined #gluster
05:38 rastar joined #gluster
05:41 bulde joined #gluster
05:42 jclift_ joined #gluster
05:44 guigui3 joined #gluster
05:58 bala joined #gluster
06:02 vshankar joined #gluster
06:11 lalatenduM joined #gluster
06:11 glusterbot New news from newglusterbugs: [Bug 961856] [FEAT] Add Glupy, a python bindings meta xlator, to GlusterFS project <http://goo.gl/yCNTu> || [Bug 962226] 'prove' tests failures <http://goo.gl/J2qCz> || [Bug 953694] Requirements of Samba VFS plugin for glusterfs <http://goo.gl/v7g29>
06:14 yinyin joined #gluster
06:20 raghu joined #gluster
06:22 jclift_ Hmmm, I'm definitely going to need to backport that to the external Glupy project then
06:29 bala joined #gluster
06:29 majeff joined #gluster
06:31 ricky-ticky joined #gluster
06:36 rgustafs joined #gluster
06:36 satheesh joined #gluster
06:38 ollivera_ joined #gluster
06:55 satheesh joined #gluster
07:00 ekuric joined #gluster
07:07 vpshastry joined #gluster
07:10 sgowda joined #gluster
07:13 ctria joined #gluster
07:25 alexturner joined #gluster
07:33 rotbeard joined #gluster
07:35 andrei__ joined #gluster
07:36 m0zes joined #gluster
07:37 hybrid5121 joined #gluster
07:43 glusterbot New news from newglusterbugs: [Bug 959887] clang static src analysis of glusterfs <http://goo.gl/gf6Vy> || [Bug 961856] [FEAT] Add Glupy, a python bindings meta xlator, to GlusterFS project <http://goo.gl/yCNTu> || [Bug 962226] 'prove' tests failures <http://goo.gl/J2qCz> || [Bug 953694] Requirements of Samba VFS plugin for glusterfs <http://goo.gl/v7g29>
07:48 vpshastry1 joined #gluster
07:49 badone joined #gluster
07:52 sgowda joined #gluster
08:03 andrei__ joined #gluster
08:17 dobber_ joined #gluster
08:28 Guest79483 joined #gluster
08:31 Oneiroi joined #gluster
08:45 ricky-ticky joined #gluster
08:49 anands joined #gluster
08:52 nickw joined #gluster
09:03 majeff joined #gluster
09:05 kshlm joined #gluster
09:07 rgustafs joined #gluster
09:08 ngoswami joined #gluster
09:10 jbrooks joined #gluster
09:14 morse joined #gluster
09:16 hybrid512 joined #gluster
09:19 rgustafs joined #gluster
09:28 bulde joined #gluster
09:28 guigui1 joined #gluster
09:30 bulde1 joined #gluster
09:32 rgustafs joined #gluster
09:36 codex joined #gluster
09:38 andrei__ joined #gluster
09:44 rgustafs joined #gluster
09:57 kshlm joined #gluster
10:06 hagarth joined #gluster
10:07 harish joined #gluster
10:12 bulde joined #gluster
10:14 ekuric joined #gluster
10:23 manik joined #gluster
10:27 xavih joined #gluster
10:33 spider_fingers joined #gluster
10:44 glusterbot New news from newglusterbugs: [Bug 960141] NFS no longer responds, get "Reply submission failed" errors <http://goo.gl/RpzTG> || [Bug 965025] Out-of-date options used in help of nfs.addr-name-lookup option <http://goo.gl/rNCOU>
10:45 sgowda joined #gluster
10:46 Chr1z joined #gluster
10:49 Chr1z If I'm wanting to set up gluster on 2 servers, how would I set it up so that if one of the servers goes down or reboots, data is still intact and available?  Also… say I have 2 servers running at 192.168.0.1 and .2, and a gluster client is connected to say 192.168.0.1 when it reboots: does that client (assuming mount.glusterfs, not NFS) lose the connection and freeze, or does it reconnect automatically to the remaining nodes in the cluster?
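For context, a minimal two-server replicated setup of the kind being asked about would typically be created along the following lines; the volume name, brick paths and mount point are made up for illustration. With a replica 2 volume the FUSE client only uses the server named in the mount command to fetch the volume file, after which it connects to both bricks, so it should keep working if that server later reboots.
    # on 192.168.0.1, with glusterd already running on both servers
    gluster peer probe 192.168.0.2
    gluster volume create testvol replica 2 192.168.0.1:/export/brick1 192.168.0.2:/export/brick1
    gluster volume start testvol
    # on the client
    mount -t glusterfs 192.168.0.1:/testvol /mnt/testvol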
10:52 edward1 joined #gluster
10:56 badone joined #gluster
11:16 anands joined #gluster
11:24 tshm joined #gluster
11:25 hagarth joined #gluster
11:36 rwheeler joined #gluster
12:03 ctria joined #gluster
12:11 chirino joined #gluster
12:11 vpshastry joined #gluster
12:12 vpshastry left #gluster
12:12 vshankar joined #gluster
12:13 GabrieleV joined #gluster
12:14 vimal joined #gluster
12:20 piotrektt_ joined #gluster
12:27 aliguori joined #gluster
12:37 vimal joined #gluster
12:42 vshankar joined #gluster
12:48 dustint joined #gluster
12:50 guigui1 joined #gluster
12:50 alexturner joined #gluster
12:55 rastar joined #gluster
12:59 balunasj joined #gluster
13:02 dblack joined #gluster
13:07 ctria joined #gluster
13:08 robos joined #gluster
13:11 majeff joined #gluster
13:15 glusterbot New news from newglusterbugs: [Bug 959887] clang static src analysis of glusterfs <http://goo.gl/gf6Vy> || [Bug 962226] 'prove' tests failures <http://goo.gl/J2qCz> || [Bug 960141] NFS no longer responds, get "Reply submission failed" errors <http://goo.gl/RpzTG> || [Bug 953694] Requirements of Samba VFS plugin for glusterfs <http://goo.gl/v7g29>
13:15 duerF joined #gluster
13:30 andrewjsledge joined #gluster
13:32 bennyturns joined #gluster
13:38 rb2k joined #gluster
13:40 rb2k hm, is it still true that there are no replicated volumes with "replica 1" ?
13:42 kkeithley you need  "replica 2" to have replication
13:44 rb2k oh sorry
13:44 rb2k that's the allergy meds talking
13:44 rb2k no, I basically want a fully replicated volume
13:44 rb2k as in "each server gets the same copy"
13:44 rb2k so n = the number of servers I have
13:44 kkeithley "replica X", X==$number_of_copies
13:44 yongtaof joined #gluster
13:44 rb2k Does gluster support adding / removing servers from that constellation
13:45 kkeithley yes
13:45 rb2k e.g. I have 3 bricks backing that volume
13:45 rb2k so I created it with replica 3
13:45 rb2k and now I want to add one
13:45 rb2k so I'd end up with 4 bricks backing it
13:45 Keawman joined #gluster
13:45 yongtaof Dear glusterfs experts? What's the best HA solution for glusterfs nfs client?
13:45 rb2k gluster volume add-brick wouldn't change the replica factor. So how would I tell gluster to change it?
13:46 kkeithley you'd either have to go to "replica 4" or "replica 2".  with four bricks "replica 2" will give you a 2×2 distribute-replicate volume.
13:46 kkeithley remove one of the bricks from the volume first
13:47 yongtaof with more replica you'll get lower write performance
13:47 rb2k I always need a FULL replication over ALL bricks
13:47 Keawman yongtaof, I have read that as well
13:47 rb2k Performance is secondary
13:47 yongtaof ok
13:48 yongtaof but I think 3 is enough
13:48 rb2k nope :)
13:48 rb2k kkeithley: how would I be able to change the factor though?
13:48 rb2k e.g. the path from a "replica 3" volume with 3 bricks to a "replica 4" volume with 4 bricks
13:48 yongtaof can anybody help tell me what's the best HA solution for a glusterfs nfs client?
13:49 Keawman yongtaof, maybe pacemaker/corosync?
13:50 kkeithley gluster volume add-brick replica 4 $path-to-new-brick
13:50 Keawman I'm currently using that for KVM HA
13:50 rb2k kkeithley: oh, so add brick will be able to change the replica count of the volume?
13:50 yongtaof ok is ucarp or lvs suitable for nfs client?
13:51 rwheeler joined #gluster
13:51 kkeithley with glusters-3.3.x.  IIRC that wasn't the case with 3.2.x, but someone will correct me if I'm wrong. ;-)
13:51 lpabon joined #gluster
13:52 rb2k that's fine, I'm moving to 3.3.x
13:52 rb2k :)
13:52 MrNaviPacho joined #gluster
13:59 vshankar joined #gluster
14:00 rb2k kkeithley: weird, I don't see any mention of the replica option for add-brick in http://www.gluster.org/wp-content/uploads/2012/05/Gluster_File_System-3.3.0-Administration_Guide-en-US.pdf
14:00 glusterbot <http://goo.gl/bzF5B> (at www.gluster.org)
14:00 rb2k is that new?
14:02 rb2k it also isn't mentioned on http://gluster.org/community/documentation/index.php/Gluster_3.1:_Expanding_Volumes
14:02 glusterbot <http://goo.gl/XNLfR> (at gluster.org)
14:05 kkeithley % glusterd volume help ...
14:05 kkeithley volume add-brick <VOLNAME> [<stripe|replica> <COUNT>] <NEW-BRICK> ...
14:05 kkeithley that's 3.3.1
14:05 rb2k ok, in that case the documentation is just outdated
14:05 kkeithley yup
14:05 rb2k thanks
14:06 rb2k (still waiting for my ec2 instances to launch)
14:06 kkeithley gah, I keep typing glusterd
14:08 portante|ltp joined #gluster
14:12 kkeithley It ought to be in the 3.3.0 admin guide and should not be in the 3.1(.x) docs.
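To make the command kkeithley quotes concrete, growing an existing replica 3 volume to replica 4 on 3.3.x would look roughly like this; the volume name and brick path are illustrative, not taken from the log:
    # existing volume: replica 3 across three bricks
    gluster volume add-brick myvol replica 4 server4:/export/brick1
    gluster volume info myvol   # should now list four bricks with a replica count of 4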
14:17 jthorne joined #gluster
14:25 majeff joined #gluster
14:27 majeff1 joined #gluster
14:28 rb2k Is it possible to downsize a replicated volume to just 1 node (so non replicated)
14:28 rb2k or should I just leave it replicated and at 2 nodes
14:28 rb2k just that one of them won't ever show up
14:29 majeff2 joined #gluster
14:30 lbalbalba joined #gluster
14:30 bugs_ joined #gluster
14:33 lpabon_ joined #gluster
14:34 bfoster joined #gluster
14:35 lpabon_ joined #gluster
14:38 lpabon joined #gluster
14:39 majeff joined #gluster
14:42 spider_fingers left #gluster
14:47 maple joined #gluster
14:50 saurabh joined #gluster
14:52 manik joined #gluster
14:56 ricky-ticky joined #gluster
15:02 daMaestro joined #gluster
15:02 semiosis JoeJulian: pong
15:03 sprachgenerator joined #gluster
15:05 linwt_ joined #gluster
15:07 isomorphic joined #gluster
15:10 portante|ltp joined #gluster
15:20 rb2k Can I convert a replicated volume (2 bricks) into a non replicated one (1 brick)?
15:21 ricky-ticky joined #gluster
15:21 anands joined #gluster
15:21 daMaestro joined #gluster
15:23 tshm You should be able to detach one of your replicas by just removing the replication translator.
15:23 tshm and one of the bricks, obviously
15:24 kkeithley I'm pretty sure that "gluster volume delete-brick $volname replica 1 $path-to-brick" should work. It makes sense (to me) that it should work.
15:27 kkeithley s/delete-brick/remove-brick/
15:28 glusterbot kkeithley: Error: I couldn't find a message matching that criteria in my history of 1000 messages.
15:50 rb2k kkeithley: ok, this DOES seem to work
15:50 rb2k BUT that brick will never disappear
15:50 rb2k I remove it, I rebalance
15:50 rb2k and "volume info" still shows it
15:50 rb2k if I use "force" instead of "start" on remove-brick, it will delete the brick
15:51 rb2k BUT it will show an interactive prompt
15:51 rb2k which I can't have since this is all 100% automated
15:52 kkeithley That kinda sounds like a bug to me — worth filing a bug-report for.
15:52 kkeithley @bugs
15:52 kkeithley c'mon glusterbot I know you're there
15:52 kkeithley @bugzilla
15:52 paratai joined #gluster
15:58 rb2k :)
15:58 rb2k I'm double checking to make sure
16:05 portante` joined #gluster
16:18 Keawman does anyone know why glusterfs-fuse 3.4.0-0.4beta1 also installs fuse 2.8.3-4 as a dependency when previous versions of glusterfs did not?
16:22 kkeithley because someone filed a bug-report and we "fixed" it.
16:23 semiosis kkeithley: file a bug
16:23 glusterbot http://goo.gl/UUuCq
16:23 semiosis ^^
16:30 vpshastry joined #gluster
16:30 vpshastry left #gluster
16:30 gmcwhistler joined #gluster
16:34 Mo___ joined #gluster
16:37 rb2k when removing a brick
16:37 rb2k what does "commit" do?
16:37 rb2k I have to start it first and then commit it later?
16:37 devoid joined #gluster
16:39 semiosis rb2k: why would you want to turn off replication and leave a 1-brick volume in the first place?
16:39 rb2k I don't, but we have that kind of setup for some staging sites
16:39 rb2k But I think I might just have been missing the "commit" option
16:39 semiosis perhaps
16:40 rb2k remove-brick […] start ===> loop in remove-brick […] status ====> remove-brick […] commit ?
16:40 rb2k is that the way to do it?
16:40 semiosis remove-brick originally was to change the number of distribution subvolumes, then gluster overloaded that command to also change the replica count, which imho was a bad move... confusing two already sophisticated operations
16:40 semiosis people have been getting confused by that ever since
16:41 rb2k I certainly am
16:41 semiosis yes thats the way to do it, at least it was when remove-brick was for distribution changes
16:41 semiosis probably the same for replication changes
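Put together, the remove-brick sequence being discussed, applied to dropping a replica 2 volume down to a single brick, would look something like the sketch below (names are illustrative; on a pure replica there is no data to migrate, so the start/status steps matter mainly for the distribute case):
    gluster volume remove-brick myvol replica 1 server2:/export/brick1 start
    gluster volume remove-brick myvol replica 1 server2:/export/brick1 status   # repeat until complete
    gluster volume remove-brick myvol replica 1 server2:/export/brick1 commit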
16:41 duerF joined #gluster
16:42 rb2k ah, ok
16:42 rb2k that makes sense
16:42 rb2k well, almost
16:42 rb2k I'd rather not have to loop manually, but I can get that to work :)
16:42 rb2k oh wait. does commit require interactive input?
16:44 semiosis idk
16:44 semiosis maybe you can force it
16:47 rb2k hmm, do I have to check the status?
16:48 rb2k seeing as it's replicated, there is no data migration going on
16:48 semiosis makes sense
16:50 vpshastry joined #gluster
16:52 lbalbalba joined #gluster
17:00 hchiramm__ joined #gluster
17:03 rb2k ok, so commit and force seem to require interactive input
17:03 rb2k how does anyone use this in production
17:07 bfoster joined #gluster
17:11 semiosis rb2k: --xml
17:11 rb2k isn't in 3.3.1 yet, is it?
17:11 semiosis hm
17:11 rb2k I saw that, but it looked like it was a 3.4 patch
17:12 semiosis pretty sure it was in 3.3
17:12 semiosis you could try it & see
17:12 semiosis gluster volume info --xml
17:12 vpshastry joined #gluster
17:16 rb2k waiting for 2 instances to boot :)
17:17 rb2k but that would be sweet
17:17 rb2k still wouldn't change the problem with the interactive input I suppose
17:25 vpshastry left #gluster
17:27 rb2k semiosis: nope, doesn't work
17:31 rb2k (glusterfs 3.3.2qa1 )
17:34 rb2k ok, so the "commit" command results in this
17:34 rb2k "Removing brick(s) can result in data loss. Do you want to Continue? (y/n) "
17:36 kkeithley 3.3.2qa1 built from source or installed from rpms from bits.gluster.org?
17:37 cfeller joined #gluster
17:39 kaptk2 joined #gluster
17:40 majeff1 joined #gluster
17:40 rb2k build from source
17:40 rb2k we needed deb files
17:40 rb2k for the qa branch
17:41 rb2k ok, so I need the "commit" for removing a brick from a volume
17:41 rb2k now I've gotta work around the fact that commit or force both require user input
17:41 rb2k *sigh*
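As a generic shell workaround for the confirmation prompt described here (not something the log goes on to discuss), the answer can simply be piped into the CLI; the volume name and brick path are hypothetical:
    # feed "y" to the "Do you want to Continue? (y/n)" prompt
    echo y | gluster volume remove-brick myvol replica 1 server2:/export/brick1 force
Depending on the version, the gluster CLI may also offer a non-interactive script mode (gluster --mode=script ...) that suppresses such prompts.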
17:42 rb2k and replace-brick has an optional "force" parameter
17:42 zaitcev joined #gluster
17:42 rb2k so does rebalance and reset
17:42 rb2k remove brick however does not
17:43 kkeithley when building from source make sure you have the libxml2-devel (or the debian/ubuntu equivalent) installed. The configure script is kinda stupid about it, if you don't have it installed you won't get any of the --xml and it won't warn you about it.
17:44 kkeithley something else we ought to fix
17:44 rb2k kkeithley: oh, that's a good comment
17:45 portante|ltp joined #gluster
17:45 rb2k kkeithley: you don't happen to have any information on the manual 'y' confirmation for commit when removing bricks?
17:45 majeff joined #gluster
17:46 rb2k although, I did apt-get build-dep -y glusterfs
17:46 rb2k but maybe that wasn't in there back in the day
17:46 kkeithley (For Fedora the .spec file used to build the rpms has a BuildRequires to ensure it's there.)
17:46 kkeithley no, don't know about the manual 'y' confirmation
17:47 majeff1 joined #gluster
17:47 vpshastry joined #gluster
17:47 vpshastry left #gluster
17:47 rb2k kkeithley: ok, the configure script seems to be happy
17:47 rb2k checking for LIBXML2... yes
17:47 rb2k 'yes (features requiring libxml2 enabled)'
17:48 rb2k but I think --xml was in 3.4
17:50 kkeithley %rpm -q glusterfs
17:50 kkeithley glusterfs-3.3.1-14.fc18.x86_64
17:50 kkeithley % gluster volume info --xml
17:50 kkeithley <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
17:50 kkeithley <cliOutput><opRet>0</opRet><opErrno>0</opErrno><opErrstr></opErrstr><volInfo><volumes><volume><name>volXX</name><id>6156e673-daf6-4f9b-af0a-f2059c42c119</id><type>0</type><status>1</status><brickCount>1</brickCount><distCount>1</distCount><stripeCount>1</stripeCount><replicaCount>1</replicaCount><transport>0</transport><bricks><brick>f18node1:/var/tmp/bricks/volX/X</brick></bricks><optCount>1</optCount><options><option><name>diagnostics.
17:50 kkeithley it's in 3.3.1
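As a small illustration of consuming that XML in a script (xmllint from libxml2-utils is assumed to be installed; it is not mentioned in the log):
    # pull a single field out of the output, e.g. the replica count
    gluster volume info myvol --xml | xmllint --xpath 'string(//volume/replicaCount)' -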
17:51 rb2k O_o
17:51 kkeithley if it's not in 3.3.2qaX then there's been a regression somehow
17:52 rb2k is it maybe a separate configure switch?
17:53 rb2k there are a few warnings
17:53 rb2k ../../rpc/xdr/src/cli1-xdr.h:25: warning: unknown option after '#pragma GCC diagnostic' kind
17:53 rb2k but that seems fine
17:53 kkeithley those are benign
17:56 hchiramm__ joined #gluster
17:58 ralfonso joined #gluster
17:58 ralfonso hello. I restarted a server and now when clients attempt to connect, I'm receiving: no authentication module is interested in accepting remote-client (null)
17:59 ralfonso server ver: 3.2.5
17:59 chirino joined #gluster
18:06 al joined #gluster
18:14 rwheeler joined #gluster
18:35 majeff joined #gluster
18:42 semiosis ralfonso: ,,(pasteinfo)
18:42 glusterbot ralfonso: Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
18:43 ralfonso semiosis: I _just_ solved it. my auth.allow was incorrectly formatted, which didn't appear until glusterfsd was started on cold boot
18:43 ralfonso thanks
18:43 semiosis yw, glad to hear you worked it out
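For anyone hitting the same error: auth.allow is a per-volume option that takes a comma-separated list of client addresses, with wildcards allowed; the value below is only an example:
    gluster volume set myvol auth.allow 192.168.0.*,10.2.12.10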
18:46 ralfonso left #gluster
18:52 ctria joined #gluster
18:59 andrei__ joined #gluster
19:02 flrichar joined #gluster
19:05 larsks joined #gluster
19:12 lpabon joined #gluster
19:13 thomaslee joined #gluster
19:14 devoid joined #gluster
19:14 al joined #gluster
19:17 Keawman semiosis, would you happen to know why fuse is a dependency in 3.4beta1 but not in 3.4alpha?
19:18 Keawman it seems to be causing issues with libvirt and direct i/o cache=none
19:19 semiosis ???
19:20 Keawman semiosis, standard fuse package not glusterfs-fuse
19:20 semiosis afaik fuse has been a requirement for a while already
19:21 Keawman on one test system i have glusterfs-fuse-3.4.0alpha-2.el6.x86_64 only and it works fine with libvirt...on another it required fuse, and anything libvirt does drops the gluster mount
19:22 Keawman so one has glusterfs-fuse-3.4.0-0.4.beta1.el6.x86_64
19:22 Keawman fuse-2.8.3-4.el6.x86_64
19:23 Keawman and the other has glusterfs-fuse-3.4.0alpha-2.el6.x86_64
19:24 Keawman and the other only has the gluster-fuse package
19:24 Keawman oops
19:24 Keawman sorry for double post didn't see
19:26 kkeithley BZ 947830, it's to install the kernel fuse module if it's not already installed.
19:26 * kkeithley answered this already
19:26 kkeithley feel free to add to the BZ
19:29 Keawman kkeithley, should i add that it works fine without fuse
19:29 kkeithley yes,
19:30 Keawman ok thanks for confirming my issues
19:40 mrfsl joined #gluster
19:41 mrfsl Got an issue with a cluster I inherited.
19:41 mrfsl gluster version 3.2.7-1
19:41 mrfsl 3 peers with one distributed volume
19:42 mrfsl one peer went down due to an OS crash. When it came back online the files on that peer are no longer present to the cluster
19:42 mrfsl How do I resolve this?
19:42 majeff1 joined #gluster
19:43 lbalbalba backups ?
19:44 mrfsl no
19:44 semiosis mrfsl: ,,(pasteinfo)
19:44 glusterbot mrfsl: Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
19:47 mrfsl http://fpaste.org/13269/13690792/
19:47 glusterbot Title: #13269 Fedora Project Pastebin (at fpaste.org)
19:52 mrfsl In testing in the lab with the latest version of gluster I created files on the nodes which were then not present in the cluster. Running a 'find' or 'ls' on the mounted cluster seemed to resolve this and pull in the files.
19:52 mrfsl Is this the correct course of action in my case?
19:53 Guest79483 joined #gluster
19:54 semiosis mrfsl: unlikely
19:54 semiosis but seems like it couldnt hurt to try
19:55 mrfsl it's a large amount of data to iterate over. Is there a more correct course of action to resolve "files present on peer but not present on the cluster?"
19:55 semiosis gluster volume start $vol force
19:56 semiosis will attempt to start any missing glusterfsd ,,(processes)
19:56 glusterbot the GlusterFS core uses three process names: glusterd (management daemon, one per server); glusterfsd (brick export daemon, one per brick); glusterfs (FUSE client, one per client mount point; also NFS daemon, one per server). There are also two auxiliary processes: gsyncd (for geo-replication) and glustershd (for automatic self-heal). See http://goo.gl/hJBvL for more information.
19:56 devoid left #gluster
19:56 semiosis then you'll have to go to each server, and verify that all the glusterfsd processes are running, and inspect the log file of any that are not
19:56 semiosis once you have confirmed they are all running, then go to your clients, check to make sure they are each connected to all bricks
19:57 semiosis log files again will help
19:57 mrfsl I have checked the logs and processes - they are all running on all peers -  I will check the client
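A rough sketch of the checks semiosis describes, using example names (note that 'gluster volume status' only exists in 3.3 and later, not in the 3.2.7 install mentioned above):
    gluster volume start myvol force         # restarts any brick daemons that are not running
    ps -ef | grep glusterfsd                 # one glusterfsd per brick, on every server
    gluster volume status myvol              # 3.3+: shows per-brick online status and ports
    less /var/log/glusterfs/bricks/*.log     # brick logs, default location on most installs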
20:00 lbalbalba glusterbot points you to http://goo.gl/hJBvL, which results in a 404 page not found
20:00 larsks joined #gluster
20:19 semiosis @forget processes
20:19 LLckfan joined #gluster
20:19 glusterbot semiosis: The operation succeeded.
20:19 semiosis @learn processes as The GlusterFS core uses three process names: glusterd (management daemon, one per server); glusterfsd (brick export daemon, one per brick); glusterfs (FUSE client, one per client mount point; also NFS daemon, one per server). There are also two auxiliary processes: gsyncd (for geo-replication) and glustershd (for automatic self-heal). See http://goo.gl/F6jqx for more information.
20:19 glusterbot semiosis: The operation succeeded.
20:21 lbalbalba cool thx
20:28 mrfsl left #gluster
20:32 lbalbalba question: at what point do *.vol files get committed to disk? im running sed over all vol files in a loop after creating volumes, but the files dont always (cant reproduce exactly) get modified every time. i tried stopping/starting the glusterd service, start/stop the volume, and i cant figure it out. even tried sync to commit stuff to disk
20:33 lbalbalba after each volume create, i run this :  find /var/lib/glusterd/ -name \*.vol | xargs sed -i 's/.*type protocol.*/&\n    option transport.socket.own-thread on/'
20:36 lbalbalba im adding 'transport.socket.own-thread on' on a new line after each 'type protocol' line
20:43 lbalbalba tryin to run the 'prove' test suite, with 'option transport.socket.own-thread on' for all the volumes
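For what it's worth, the generated .vol files live under /var/lib/glusterd/vols/<volname>/ and glusterd rewrites them whenever a volume is reconfigured, which is one plausible reason hand edits do not always survive. Assuming the sed above took effect, the client volfile snippet would end up looking something like this (server and volume names are invented):
    volume testvol-client-0
        type protocol/client
        option transport.socket.own-thread on
        option remote-host server1
        option remote-subvolume /export/brick1
        option transport-type tcp
    end-volume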
20:44 premera joined #gluster
20:48 badone joined #gluster
20:58 LLckfan left #gluster
21:33 kaptk2 joined #gluster
21:49 ctria joined #gluster
22:04 Guest79483 joined #gluster
22:04 rb2k joined #gluster
22:17 glusterbot New news from newglusterbugs: [Bug 953694] Requirements of Samba VFS plugin for glusterfs <http://goo.gl/v7g29>
22:28 piotrektt_ joined #gluster
22:35 sprachgenerator joined #gluster
22:37 Guest79483 joined #gluster
23:47 glusterbot New news from newglusterbugs: [Bug 963223] Re-inserting a server in a v3.3.2qa2 distributed-replicate volume DOSes the volume <http://goo.gl/LqgL8>
