
IRC log for #gluster, 2013-06-17


All times shown according to UTC.

Time Nick Message
00:02 forest joined #gluster
00:03 jiffe1 joined #gluster
00:06 masterzen joined #gluster
00:12 forest joined #gluster
00:15 forest joined #gluster
00:23 forest joined #gluster
00:59 NeonLicht joined #gluster
01:13 Norky joined #gluster
01:13 Deformative joined #gluster
01:13 Deformative Can glusterfs be used as / on diskless nodes?
01:13 Deformative I currently use nfs.
01:13 Deformative And it is pretty annoying.
01:21 Norky joined #gluster
01:26 foxban joined #gluster
01:42 forest joined #gluster
01:42 kevein joined #gluster
01:45 forest joined #gluster
01:58 satheesh joined #gluster
02:01 jag3773 joined #gluster
02:03 foxban_ joined #gluster
02:09 ccha joined #gluster
02:14 bala joined #gluster
02:25 foxban_ Could the replica count be changed after the volume has been created?
02:26 foxban_ googled around and found nothing usable...
02:27 avati joined #gluster
02:29 foxban_ http://www.gluster.org/community/documentation/index.php/WhatsNew3.3
02:30 glusterbot <http://goo.gl/V8umy> (at www.gluster.org)
02:39 forest joined #gluster
02:45 lpabon joined #gluster
03:02 bharata joined #gluster
03:09 glusterbot New news from newglusterbugs: [Bug 974886] timestamps of brick1 and brick2 is not the same. <http://goo.gl/11ZiR>
03:23 mohankumar__ joined #gluster
03:33 Deformative joined #gluster
03:41 mooperd joined #gluster
03:44 Deformative Hi.
03:44 glusterbot Deformative: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
03:44 Deformative I have an asymmetric HPC cluster, and I plan to use glusterfs for its /home.
03:45 Deformative All the nodes have disks, but many are not the same size, and some have 2 disks, etc.
03:45 Deformative The machines are also diskless, booting from an NFS root.
03:45 Deformative So they all have the same OS.
03:45 Deformative Is there a way for me to make them automatically detect their disks and add themselves to the gluster pool?
04:03 saurabh joined #gluster
04:44 CheRi joined #gluster
04:53 shireesh joined #gluster
04:55 mooperd joined #gluster
04:57 hagarth joined #gluster
05:06 vpshastry1 joined #gluster
05:10 CheRi joined #gluster
05:11 aravindavk joined #gluster
05:13 sgowda joined #gluster
05:31 hchiramm_ joined #gluster
05:36 kshlm joined #gluster
05:37 vpshastry joined #gluster
05:38 aravindavk joined #gluster
05:39 vrturbo joined #gluster
05:50 hchiramm_ joined #gluster
05:53 CheRi joined #gluster
05:57 sgowda joined #gluster
06:00 hchiramm_ joined #gluster
06:05 vpshastry1 joined #gluster
06:06 vimal joined #gluster
06:07 raghu joined #gluster
06:21 bala joined #gluster
06:22 hchiramm__ joined #gluster
06:23 bulde joined #gluster
06:24 jtux joined #gluster
06:29 vpshastry1 joined #gluster
06:29 CheRi joined #gluster
06:30 vshankar joined #gluster
06:30 lalatenduM joined #gluster
06:31 aravindavk joined #gluster
06:33 bala joined #gluster
06:34 bulde joined #gluster
06:35 ctria joined #gluster
06:35 ricky-ticky joined #gluster
06:36 ramkrsna joined #gluster
06:36 ramkrsna joined #gluster
06:40 hchiramm_ joined #gluster
06:43 jtux joined #gluster
06:43 ollivera joined #gluster
06:45 vshankar joined #gluster
06:53 vpshastry1 joined #gluster
06:57 ngoswami joined #gluster
06:57 psharma joined #gluster
07:03 vshankar joined #gluster
07:03 CheRi joined #gluster
07:04 _ndevos Deformative: I think you're looking for something like this: http://lists.nongnu.org/archive/html/gluster-devel/2013-05/msg00251.html
07:04 glusterbot <http://goo.gl/jcLlw> (at lists.nongnu.org)
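The approach in that link can be sketched in a few lines of shell; everything here (volume name, device pattern, brick paths) is a hypothetical illustration, and it assumes the node is already in the trusted pool and its disks are pre-formatted:

    VOL=homevol                          # hypothetical volume name
    for dev in /dev/sd[b-z]; do          # assumed device naming
        [ -b "$dev" ] || continue
        mnt=/bricks/$(basename "$dev")
        mkdir -p "$mnt"
        mount "$dev" "$mnt" || continue  # skip disks that fail to mount
        gluster volume add-brick "$VOL" "$(hostname):$mnt"
    done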
07:05 andreask joined #gluster
07:06 StarBeast joined #gluster
07:07 hchiramm_ joined #gluster
07:07 jbrooks joined #gluster
07:10 aravindavk joined #gluster
07:12 rb2k joined #gluster
07:15 vshankar joined #gluster
07:16 hchiramm_ joined #gluster
07:17 jtux joined #gluster
07:19 tjikkun_work joined #gluster
07:22 vshankar joined #gluster
07:24 vshankar joined #gluster
07:26 ndevos joined #gluster
07:28 vpshastry joined #gluster
07:34 aravindavk joined #gluster
07:35 CheRi joined #gluster
07:37 Koma_ joined #gluster
07:43 morse joined #gluster
07:46 vshankar joined #gluster
07:47 hchiramm__ joined #gluster
07:52 vshankar joined #gluster
07:58 kevein joined #gluster
07:59 aravindavk joined #gluster
08:01 vshankar joined #gluster
08:04 vpshastry1 joined #gluster
08:06 isomorphic joined #gluster
08:07 vshankar joined #gluster
08:10 glusterbot New news from newglusterbugs: [Bug 962619] glusterd crashes on volume-stop <http://goo.gl/XXzSY>
08:15 vshankar joined #gluster
08:22 CheRi joined #gluster
08:22 aravindavk joined #gluster
08:26 spider_fingers joined #gluster
08:26 jbrooks joined #gluster
08:27 vshankar joined #gluster
08:27 raghu joined #gluster
08:33 vshankar joined #gluster
08:33 Guest93223 joined #gluster
08:40 ricky-ticky joined #gluster
08:48 X3NQ joined #gluster
08:51 X3NQ joined #gluster
08:53 aravindavk joined #gluster
08:55 CheRi joined #gluster
09:00 VSpike left #gluster
09:10 glusterbot New news from newglusterbugs: [Bug 959069] A single brick down of a dist-rep volume results in geo-rep session "faulty" <http://goo.gl/eaoet>
09:18 vrturbo joined #gluster
09:22 ramkrsna joined #gluster
09:35 ctria joined #gluster
09:36 sgowda joined #gluster
09:40 glusterbot New news from newglusterbugs: [Bug 952029] Allow an auxiliary mount which lets users access files using only gfids <http://goo.gl/x5z1R>
09:51 rmcintosh_ joined #gluster
09:53 rmcintosh_ I'm using gluster 3.2 and want to upgrade to gluster 3.3. However, I've got a split-brain situation with two replicating gluster nodes. Can I upgrade in this state? Will it be able to heal after the upgrade?
09:57 jbrooks joined #gluster
09:59 Guest93223 joined #gluster
10:09 deepakcs joined #gluster
10:21 y4m4 joined #gluster
10:24 psharma joined #gluster
10:28 psharma joined #gluster
10:30 JoeJulian Deformative: I've heard of one guy doing that. You would have to roll your own initrd with fuse and the client.
10:30 JoeJulian foxban_: Yes, you can specify the replica count during add-brick or remove-brick.
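For example, converting a two-brick distribute volume into a replica 2 volume might look like this (volume and brick names are hypothetical):

    gluster volume add-brick myvol replica 2 server3:/brick server4:/brick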
10:31 JoeJulian Deformative: Was referring to gluster root..
10:31 foxban_ JoeJulian: Thanks, I've got the answer from the mailing list; thanks anyway
10:31 JoeJulian rmcintosh_: There's nothing in 3.3 that will help nor hinder your split-brain situation.
10:32 tru_tru joined #gluster
10:32 JoeJulian foxban_: excellent. :)
10:32 * JoeJulian goes back to his vacation.
10:33 rmcintosh_ JoeJulian, it's more that I need the granular locking of 3.3, as my VMs keep stopping when healing because the file is locked
10:33 rmcintosh_ JoeJulian, also thanks
10:38 JoeJulian rmcintosh_: ah, yes. That is a nice enhancement. Shouldn't be any reason that I can think of not to.
10:41 Norky joined #gluster
10:53 CheRi joined #gluster
10:54 yinyin joined #gluster
11:01 kshlm joined #gluster
11:06 tziOm joined #gluster
11:11 CheRi joined #gluster
11:13 portante joined #gluster
11:14 kkeithley joined #gluster
11:17 Chombly joined #gluster
11:17 lpabon joined #gluster
11:20 bfoster joined #gluster
11:21 rwheeler joined #gluster
11:24 edward1 joined #gluster
11:27 ramkrsna joined #gluster
11:29 ollivera_ joined #gluster
11:30 realdannys1 joined #gluster
11:42 yinyin joined #gluster
11:45 stickyboy left #gluster
11:45 deepakcs joined #gluster
11:45 stickyboy joined #gluster
11:48 partner getting back to my previous question, any idea why the self-heal daemon logs millions of these?
11:48 partner E [afr-self-heald.c:685:_link_inode_update_loc] 0-classy-replicate-0: inode link failed on the inode (00000000-0000-0000-0000-000000000000)
11:48 partner what I did was add another brick to a single-brick dist setup and make it replica 2
11:50 partner 3.3.1 running on debian wheezy on the storage server end, clients being squeeze (2 clients)
11:51 partner I no longer have any clients connected but it keeps flooding; perhaps stop/start the volume or kick shd?
11:52 partner it's been syncing a whopping 45 GB in, hmm, 4 days...
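One commonly suggested way to kick the self-heal daemon on 3.3 is a forced volume start, which respawns it; no guarantee it stops the flood (the volume name is inferred from the log prefix above):

    gluster volume start classy force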
11:52 neofob what changes in beta3 from beta2?
12:00 Chombly Hi, anyone know when version 3.4 will be released?
12:01 andreask joined #gluster
12:01 dblack joined #gluster
12:04 aliguori joined #gluster
12:08 plarsen joined #gluster
12:08 kkeithley If you have the source cloned in git you can do `git log release-3.4beta2 release-3.4beta3` to see what changed
12:09 kkeithley 3.4beta3 was released on 6 June. 3.4ga will be released _soon_.
12:14 Chombly Thank you kkeithley
12:22 DarkestMatter joined #gluster
12:23 NcA^ joined #gluster
12:24 samppah joined #gluster
12:24 SteveCooling joined #gluster
12:25 hybrid512 joined #gluster
12:25 ccha joined #gluster
12:26 isomorphic joined #gluster
12:39 balunasj joined #gluster
12:44 kkeithley correction: you can do `git log release-3.4beta2..release-3.4beta3` to see what changed
12:48 jthorne joined #gluster
12:48 kkeithley http://paste.fedoraproject.org/19117/13714733
12:48 glusterbot Title: #19117 Fedora Project Pastebin (at paste.fedoraproject.org)
12:55 bennyturns joined #gluster
12:58 vpshastry joined #gluster
13:00 ctria joined #gluster
13:06 mmalesa joined #gluster
13:08 * neofob reading paste.fedoraproject.org...
13:10 mmalesa hi everyone
13:10 mmalesa i have some hard times adding new peers to the running cluster
13:10 jthorne joined #gluster
13:11 mmalesa i have 4 nodes in the cluster up and running
13:11 mmalesa all running glusterfs 3.3.1 built on Oct 11 2012 21:49:37
13:11 mmalesa i wanted to add another two nodes
13:11 mmalesa gluster peer probe node1 (and node 2)
13:12 mmalesa after this operation nodes are rejected with information that checksums are wrong
13:12 sgowda joined #gluster
13:13 joelwallis joined #gluster
13:14 mmalesa i followed comment 3 at https://bugzilla.redhat.com/show_bug.cgi?id=949625
13:14 glusterbot <http://goo.gl/HtYU5> (at bugzilla.redhat.com)
13:14 glusterbot Bug 949625: unspecified, medium, ---, kaushal, ASSIGNED , Peer rejected after upgrading
13:14 mmalesa but that did not help
13:15 MacRM joined #gluster
13:15 mmalesa does anyone know if there are bugs that prevent adding new peers?
13:15 mmalesa known bugs i meant
13:17 chirino joined #gluster
13:21 mmalesa joined #gluster
13:24 icemax joined #gluster
13:26 icemax Hi guys. Can someone help me deal with a gluster volume of 900 GB of data?
13:28 Deformative Where does gluster store its persistent space?
13:28 Deformative /etc/glusterd?
13:28 Deformative Anywhere else?
13:29 plarsen joined #gluster
13:29 icemax you mean, where is my data?
13:29 kkeithley /var/lib/glusterd
13:30 Deformative kkeithley, Both?  I have stateless machines and I am trying to figure out where I need to put a disk so that the configuration is saved across boot.
13:31 kkeithley both what?  The bricks? They're wherever you put them
13:31 yinyin joined #gluster
13:31 kkeithley where ever you put them when you created the volume.
13:31 Deformative kkeithley, No, I mean the configuration files and stuff.
13:32 kkeithley config files and stuff are in /var/lib/glusterd
13:32 Deformative I don't have that directory, I think it is /etc/glusterd on mine.
13:32 kkeithley how old is the glusterfs that you're running?
13:33 Deformative 3.2.7
13:33 kkeithley On what, ubuntu or debian? The fedora/epel 3.2.7 used /var/lib
13:35 Deformative ubuntu
13:36 MrNaviPa_ joined #gluster
13:36 Deformative I am wondering if this will work. All of / gets flashed from a read-only NFS on boot, but I am thinking if I put /etc/glusterd on a local persistent disk, then my glusterfs config will survive across resets.
13:36 Deformative I might also need to put /etc/fstab or something there.
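A sketch of that idea as fstab entries on the diskless node (the device and mount point are hypothetical; /etc/glusterd matches the 3.2.x layout discussed above):

    /dev/sda1          /persist       ext4  defaults  0 2
    /persist/glusterd  /etc/glusterd  none  bind      0 0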
13:40 starheaven joined #gluster
13:44 tziOm joined #gluster
13:50 theron joined #gluster
13:50 nightwalk joined #gluster
13:55 icemax Can you help me with a big gluster setup between 2 servers (900 GB of data) please? :)
13:55 jamesbravo joined #gluster
13:56 roo9 left #gluster
13:59 mohankumar__ joined #gluster
14:00 vpshastry left #gluster
14:00 mmalesa i'm still getting "State: Peer Rejected (Connected)" after probing peers into the running cluster. The only trace in the logs is errors like "Cksums of volume vms differ. local cksum = -1819242818, remote cksum = -2096877370". Gluster version 3.3.1, any ideas?
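The workaround usually cited for a rejected peer with mismatched cksums is to reset the rejected peer's local state and re-probe it; a rough sketch (destructive to local config, so back it up first, and paths assume the 3.3 layout):

    # on the rejected peer
    service glusterd stop
    cd /var/lib/glusterd
    rm -rf vols peers nfs     # keep glusterd.info
    service glusterd start
    # then, from a healthy cluster member:
    gluster peer probe <rejected-node>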
14:04 Kins joined #gluster
14:06 isomorphic joined #gluster
14:12 forest joined #gluster
14:13 rb2k anybody seen that before: https://gist.github.com/rb2k/1ae48eb7b196d4079308/raw/9902d64fbdc6498914e44335139c80bca6778ba0/gistfile1.txt
14:13 glusterbot <http://goo.gl/qXJvQ> (at gist.github.com)
14:13 rb2k gluster saying that a host is not connected
14:13 rb2k but it shows up in the peer list as connected
14:14 lpabon joined #gluster
14:14 lpabon joined #gluster
14:24 spider_fingers left #gluster
14:25 chirino joined #gluster
14:29 joelwallis joined #gluster
14:31 rwheeler joined #gluster
14:32 portante joined #gluster
14:33 balunasj joined #gluster
14:35 jamesbravo Sorry if this is not the correct place to ask but I've been asked to investigate problems that our operations team are having with gluster. Ok to describe problem?
14:38 Norky go for it, jamesbravo
14:38 Norky for any multi-line copy'n'pastes, use a web-based paste bin
14:40 jamesbravo I've described the problem here: http://padfly.com/gluster-testing
14:40 glusterbot Title: gluster-testing PADFLY - Free Online Web Scratchpad/Notepad/Clipboard. No Login. (at padfly.com)
14:41 deepakcs joined #gluster
14:41 forest joined #gluster
14:43 yinyin joined #gluster
14:44 lalatenduM joined #gluster
14:47 Deformative joined #gluster
14:52 bsaggy joined #gluster
14:58 andreask is it possible to create a distributed volume from a filesystem with data on it and a fresh filesystem? ... should a full rebalance then redistribute over both volumes?
14:58 andreask gluster 3.3 btw
14:58 MrNaviPa_ joined #gluster
14:59 harold[MTV] joined #gluster
15:00 jamesbravo No rush, just wanted to make sure the padfly link I sent makes sense?
15:09 bennyturns joined #gluster
15:15 bugs_ joined #gluster
15:21 portante joined #gluster
15:27 Norky it does. I'm not experienced enough to help, I'm afraid; hopefully someone who is will be along soon
15:28 Norky I made some minor punctuation fixes to make it a bit more readable - I hope you don't mind
15:28 chouchins joined #gluster
15:34 jamesbravo no problem, thanks for looking
15:38 realdannys1 Anybody here able to help me find out why I need to give my GlusterFS-mounted folder 777 permissions for Wordpress to write to it, while Wordpress can write to my FUSE-mounted S3 bucket on the server (in the same folder as gluster) with just 755 permissions on that folder?
15:41 manik joined #gluster
15:50 bala joined #gluster
15:53 tziOm joined #gluster
15:57 zaitcev joined #gluster
16:09 portante_ joined #gluster
16:13 andreask joined #gluster
16:14 Joe__ joined #gluster
16:16 Joe__ joined #gluster
16:18 saurabh joined #gluster
16:22 manik joined #gluster
16:28 forest_ joined #gluster
16:29 MrNaviPa_ joined #gluster
16:40 harold[MTV] joined #gluster
17:02 jruggiero left #gluster
17:04 MrNaviPacho joined #gluster
17:13 satheesh joined #gluster
17:14 kkeithley @yum
17:14 glusterbot kkeithley: The official community glusterfs packages for RHEL (including CentOS, SL, etc), Fedora 17 and earlier, and Fedora 18 arm/armhfp are available at http://goo.gl/s077x. The official community glusterfs packages for Fedora 18 and later are in the Fedora yum updates repository.
17:23 clutchk Hey anyone know how to get glusterfs to mount read only client side?
17:28 MrNaviPacho joined #gluster
17:34 ricky-ticky joined #gluster
17:53 RicardoSSP joined #gluster
17:53 RicardoSSP joined #gluster
17:55 chirino joined #gluster
17:56 lalatenduM joined #gluster
17:59 semiosis clutchk: see bug 853895 -- read only mount feature broke in 3.3.0, the fix should be in 3.4.0 when it's released
17:59 glusterbot Bug http://goo.gl/xCkfr medium, medium, ---, csaba, ON_QA , CLI: read only glusterfs mount fails
18:00 semiosis afk
18:01 clutchk thx semiosis.
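For reference, the read-only client mount that the fix restores on 3.4.0 looks like this (server and volume names are hypothetical; in fstab the equivalent is the ro option):

    mount -t glusterfs -o ro server1:/myvol /mnt/myvol-ro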
18:02 portante_ joined #gluster
18:02 rwheeler joined #gluster
18:09 andreask joined #gluster
18:18 chirino joined #gluster
18:20 neofob left #gluster
18:21 dewey joined #gluster
18:24 jclift joined #gluster
18:30 andrewjsledge joined #gluster
18:33 forest joined #gluster
18:37 ramkrsna joined #gluster
18:40 MrNaviPa_ joined #gluster
18:53 turf212 joined #gluster
18:59 isomorphic joined #gluster
19:00 madd joined #gluster
19:07 sonne joined #gluster
19:14 dblack joined #gluster
19:15 neofob joined #gluster
19:16 DEac- joined #gluster
19:32 Rhomber joined #gluster
19:47 realdannys1 joined #gluster
19:56 turf212 joined #gluster
19:58 turf212 Hi.  I'm looking for some help with a new installation.
19:58 turf212 I'm testing this on a couple of Amazon EC2 servers
19:59 turf212 I have the two servers set up as peers and they report that they are OK
20:00 turf212 The issue is that after I create the gluster volume the server reports that there are no volumes present
20:00 semiosis turf212: ,,(pasteinfo)
20:00 glusterbot turf212: Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
20:01 semiosis also, what linux distro?  what version of glusterfs?
20:01 turf212 the volume create command does take some time to come back and the volume can start without errors
20:01 turf212 just nothing being reported
20:01 turf212 Ubuntu 12.04 LTS
20:02 turf212 GLusterfs version 3.3.1
20:02 semiosis could you please re-state "nothing being reported" in terms of "i ran this command ... and I got this output ... "
20:02 semiosis just so we're clear
20:02 turf212 yep.
20:02 turf212 just pasting command/output now :)
20:03 turf212 http://ur1.ca/ecrj6
20:03 glusterbot Title: #19224 Fedora Project Pastebin (at ur1.ca)
20:03 semiosis sure enough
20:04 turf212 I was following the instructions from the startup guide
20:04 semiosis how about 'gluster peer status' also on machine ip-10-34-194-155
20:05 turf212 had some firewall (amazon) settings which needed to be changed to allow the 24007 - 24019 port access
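Outside EC2 security groups, the same holes could be punched with iptables; a sketch for the range turf212 mentions (3.3 allocates one port per brick starting at 24009, plus 24007 for glusterd):

    iptables -A INPUT -p tcp --dport 24007:24019 -j ACCEPT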
20:05 turf212 here you go - http://ur1.ca/ecrjk
20:05 glusterbot Title: #19225 Fedora Project Pastebin (at ur1.ca)
20:08 turf212 servers are using entries in the hosts file with an elastic IP associated with each server.  Hosts file entries are the same on both servers.
20:09 turf212 I have read/understand that this may not be exactly correct, but it's a test instance to see how it could sit on our infrastructure.
20:10 turf212 I have tried creating the volume using IP address and also with the names in the hosts file.
20:10 turf212 both suffer the same issue.
20:14 turf212 The volume create command returns code 110
20:15 turf212 i.e. http://ur1.ca/ecrm1
20:15 glusterbot Title: #19227 Fedora Project Pastebin (at ur1.ca)
20:17 forest joined #gluster
20:20 turf212 any thoughts/suggestions?
20:24 tjikkun joined #gluster
20:28 MrNaviPa_ joined #gluster
20:30 rb2k joined #gluster
20:44 badone joined #gluster
20:48 lkoranda_ joined #gluster
20:57 JoeJulian_ joined #gluster
20:58 Deformative Hi.  My gluster server also runs an nfs server.
20:58 Deformative But they don't seem to be both accessible at the same time.
20:58 Deformative When my gluster volume is started, the clients cannot mount the nfs exports.
20:58 Deformative Anyone know why?
21:06 Deformative No one?
21:06 Deformative This is really a problem.
21:09 x4rlos joined #gluster
21:10 semiosis Deformative: you can't have two nfs servers on the same host, at least, not when one of them is gluster-nfs
21:10 semiosis sorry
21:10 Deformative Is  there a way to turn off the nfs part of gluster?
21:10 semiosis yes, see nfs.disable volume option in 'gluster volume set help'
21:10 Deformative So then it should work?
21:11 semiosis you'll need to set nfs.disable to 'on' for all volumes
21:11 semiosis yes
21:11 Deformative Awesome.
21:11 Deformative Thanks.
21:11 semiosis yw
21:12 Deformative I can't seem to figure out the command line for it.
21:12 semiosis gluster volume set $volname nfs.disable on
21:12 Deformative Awesome, thanks.
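Applied to every volume, as semiosis advises above, a sketch (assuming gluster volume list is available on this build):

    for vol in $(gluster volume list); do
        gluster volume set "$vol" nfs.disable on
    done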
21:14 Deformative If I stop a volume, how do I restart it?
21:14 Deformative It says volume does not exist when I try the naive way.
21:14 semiosis thats odd
21:14 semiosis should just be 'gluster volume start $volname'
21:14 Deformative gluster volume info
21:14 Deformative Volume Name: home
21:14 Deformative Type: Replicate
21:14 Deformative Status: Stopped
21:14 Deformative Number of Bricks: 2
21:14 Deformative Transport-type: tcp
21:14 Deformative Bricks:
21:15 semiosis stop
21:15 Deformative Brick1: m60-001:/gluster/brick
21:15 Deformative Brick2: m60-002:/gluster/brick
21:15 Deformative root@simpool:/home/pooladmin# gluster volume start home
21:15 Deformative Volume home does not exist
21:15 Deformative Sorry for spam.
21:15 semiosis please use pastie.org or similar
21:15 Deformative Should have pastebinned
21:15 semiosis or glusterbot will kick you
21:16 Deformative http://pastie.org/8053318
21:16 glusterbot Title: #8053318 - Pastie (at pastie.org)
21:17 Deformative semiosis, Any ideas?
21:19 jthorne joined #gluster
21:20 turf212 semiosis, any ideas on my issue?  Still stuck unable to create a volume.  It looks like there is a locking issue.  http://ur1.ca/ecs77
21:20 glusterbot Title: #19239 Fedora Project Pastebin (at ur1.ca)
21:25 cjh_ joined #gluster
21:25 cjh_ if i want to get the glusterd process to reload it's config files will sending SIGHUP to it do the job?
21:26 forest joined #gluster
21:30 semiosis cjh_: you should not be editing config files since glusterfs 3.1.0, what version are you using?
21:30 semiosis Deformative, turf212, did either of you clone servers?
21:31 Deformative No, I succeeded in messing everything up though so I am restarting.
21:31 semiosis Deformative: better luck this time
21:32 cjh_ semiosis: 3.3.1 :)
21:32 turf212 no.  Started with 2 Amazon EC2 instances and installed packages from ppa
21:32 semiosis cjh_: then you shouldn't need to edit config files
21:33 cjh_ semiosis: ok thanks :)
21:33 semiosis cjh_: what are you trying to accomplish?
21:34 badone joined #gluster
21:37 rb2k joined #gluster
21:41 cjh_ semiosis: just trying to manually add a peer to the peers file and get glusterd to reload it. I noticed it caches it; I can blow away the files and peer status still shows the same stuff
21:42 MrNaviPa_ joined #gluster
21:42 cjh_ it looks like SIGHUP calls reincarnate on glusterfsd so that's probably the thing i'm looking for
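A sketch of the experiment cjh_ describes, with semiosis's caveat standing (hand-editing state files isn't the supported path on 3.3):

    kill -HUP $(pidof glusterd)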
21:42 Deformative semiosis, I think that nfs.disable may have worked, but now I get mountall: disconnected from plymouth
21:42 Deformative Because I tried to mount the glusterfs from fstab, I guess
21:42 semiosis cjh_: why don't you just use peer probe?
21:43 semiosis Deformative: never seen that before
21:43 Deformative It doesn't seem to keep anything from working.
21:43 Deformative Just a weird error.
21:44 cjh_ semiosis: i'm experimenting with using different systems to manage the cluster instead of glusterd
21:44 semiosis cjh_: have fun :)
21:44 semiosis what other systems?
21:45 cjh_ lol
21:46 cjh_ semiosis: maybe chef or something else.  i'm still looking around
21:47 semiosis how about zookeeper?
21:47 cjh_ that would be another option
21:47 cjh_ some central source of truth
21:49 Deformative Hmm, frustrating.
21:49 Deformative It won't let me add more bricks to my volume.
21:51 Deformative Also, is there a way to make a non-replica drive into a replica later?
21:53 semiosis you can change the replica count *for a volume* using add-brick replica N
21:53 semiosis or remove-brick
21:53 semiosis similarly
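Using the volume from earlier in the log, the two directions might look roughly like this (m60-003 is a hypothetical new node, and remove-brick may ask for confirmation):

    # shrink to plain distribute:
    gluster volume remove-brick home replica 1 m60-002:/gluster/brick
    # or grow to replica 3 with a new node:
    gluster volume add-brick home replica 3 m60-003:/gluster/brick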
21:59 Kins joined #gluster
22:00 yinyin joined #gluster
22:01 turf212 seniosis, still not getting anything.  I've started the daemon in debug mode and rerun the command.  the output is here - http://fpaste.org/19249/
22:01 glusterbot Title: #19249 Fedora Project Pastebin (at fpaste.org)
22:02 turf212 *semiosis, even
22:03 Deformative Is there a way to make gluster mount on the machine that is serving it at boot?
22:03 Deformative IT doesn't seem to work in /etc/fstab
22:03 Deformative Is there somewhere else i can put the command?
22:05 turf212 /etc/rc.local ?
22:06 turf212 or you could create a startup script in /etc/init.d and link to it from /etc/rc directoru
22:06 turf212 directories, even.
22:06 Deformative Ok, I will try it.
22:08 semiosis Deformative: can you please provide the client log file showing the failed mount attempt at boot?  (via pastie.org)
22:08 semiosis also, what is the fstab line?
22:08 semiosis can you paste it here?
22:10 Deformative 10.11.0.1:/home /home glusterfs defaults,_netdev 0 0 is the line.
22:10 Deformative I need to reboot real fast and I will get the log.
22:13 Deformative semiosis, Which file are you interested in?
22:13 forest joined #gluster
22:14 semiosis it's probably /var/log/glusterfs/home.log or similar
22:14 semiosis named for the mount point
22:14 Deformative http://pastie.org/8053487
22:15 glusterbot Title: #8053487 - Pastie (at pastie.org)
22:15 semiosis Deformative: fstab notes: 1. _netdev doesnt do anything on ubuntu.  2. 'defaults' is a placeholder when you have no other options, if you have any other options, you can get rid of defaults.  3. you should use 'nobootwait' (an ubuntu fstab option) for network mounts like glusterfs
22:16 Deformative Oh, I got that line from the gluster docs.
22:17 semiosis those are for redhat distros (rhel/cent/fedora)
22:17 semiosis usually
22:17 Deformative Oh.
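Putting those notes together, the fstab line from above might become (a sketch; nobootwait replaces both defaults and _netdev on Ubuntu):

    10.11.0.1:/home /home glusterfs nobootwait 0 0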
22:17 semiosis Deformative: log line: reading from socket failed. Error (Connection reset by peer), peer (10.11.0.1:24007)
22:17 semiosis that means the server rejected your client
22:17 Deformative The server and client are the same machine.
22:18 semiosis hmmmmmm
22:18 lkoranda joined #gluster
22:19 Deformative Adding nobootwait didn't seem to fix it
22:23 Deformative semiosis, On boot I get Starting GlusterFS Management Daemon fail, btw.
22:24 semiosis adding nobootwait wont fix your problems, but it will prevent problems from blocking the system boot
22:24 semiosis it's a safety valve
22:24 semiosis not a fix
22:25 semiosis Deformative: pastie.org the glusterd log file, /var/log/glusterfs/etc-glusterfs-glusterd.log
22:25 semiosis or similar
22:27 Deformative http://pastie.org/8053512
22:27 glusterbot Title: #8053512 - Pastie (at pastie.org)
22:28 Deformative df says Transport endpoint is not connected btw.
22:35 Deformative semiosis, It won't let me start the server unless I disconnect all the peers.
22:35 Deformative Erm.
22:36 Deformative Stop the volume I mean
22:36 Deformative Not start the server
22:41 Chombly joined #gluster
22:42 Deformative semiosis, Are you still there?
22:43 semiosis yes but busy
22:43 Deformative Ok.
22:44 semiosis Deformative: can you try starting over using my ,,(ppa) packages?
22:44 glusterbot Deformative: The official glusterfs 3.3 packages for Ubuntu are available here: 3.3 stable: http://goo.gl/7ZTNY -- 3.3 QA: http://goo.gl/5fnXN -- and 3.4 QA: http://goo.gl/u33hy
22:44 semiosis try the 3.3 or 3.3 QA repo
22:45 Deformative Ok.
22:46 StarBeast joined #gluster
22:47 Deformative semiosis, What is the difference between qa and non-qa?
22:50 Deformati joined #gluster
22:58 tg2 joined #gluster
23:00 yinyin joined #gluster
23:00 fcami joined #gluster
23:17 semiosis qa is a future release
23:17 semiosis 3.3.2 will be out sooner or later
23:18 semiosis 3.3.1 is the latest generally available glusterfs release
23:18 semiosis Deformati:
23:19 Deformati I can't seem to install that on my chrooted image.
23:20 jbrooks joined #gluster
23:30 Chombly joined #gluster
23:33 semiosis why not?
23:33 Deformati It fails to install nfs-common
23:34 Chombly joined #gluster
23:38 semiosis that's odd
23:38 semiosis why does that fail?  it's a main package
23:38 Deformati Fails to start something.
23:38 Deformati I don't think it likes being in chroot.
23:38 Chombly joined #gluster
23:39 Deformati I guess i will be back tomorrow.
23:39 Deformati I am tired.
23:49 yinyin joined #gluster
