
IRC log for #gluster, 2013-09-05


All times shown according to UTC.

Time Nick Message
00:05 Harlock_ joined #gluster
00:05 Harlock_ Hi there
00:08 MugginsM yo
00:11 Harlock_ Hi Muggins
00:12 Harlock_ I'm working hard to understand glusterfs better
00:12 Harlock_ So... I thought to come here
00:12 Harlock_ To learn from those who know more about it than I do.
00:14 sprachgenerator joined #gluster
00:14 Harlock_ hi sprach...
00:15 Harlock_ well... I will come back again, actually the channel seems quite empty....
00:26 _pol joined #gluster
00:34 zerick joined #gluster
00:39 asias joined #gluster
00:42 hagarth joined #gluster
00:50 andrewklau left #gluster
01:10 jporterfield joined #gluster
01:18 jporterfield joined #gluster
01:19 kevein joined #gluster
01:46 jporterfield joined #gluster
01:58 an joined #gluster
01:58 mohankumar joined #gluster
02:11 _pol joined #gluster
02:13 RameshN joined #gluster
02:25 bharata-rao joined #gluster
02:29 asias joined #gluster
02:54 Chr1z joined #gluster
02:54 glusterbot New news from newglusterbugs: [Bug 990900] Dist-geo-rep : imaster in cascaded geo-rep fails to do first xsync crawl and consequently fail to sync files to level2 slave <http://goo.gl/rGVCXz>
02:56 an joined #gluster
02:56 Chr1z If I add say 2x 40gb bricks from 2 hosts… and want to add 2 more… will that still give me 40gb total usable?
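For context, a hedged sketch of how that expansion usually goes; the volume name and brick paths here are made up. With two 40GB bricks in a replica-2 pair you get roughly 40GB usable; adding a second pair turns the volume into distributed-replicate, so usable space roughly doubles to about 80GB rather than staying at 40GB.

    # assumes an existing "replica 2" volume across host1/host2 (names hypothetical)
    gluster volume add-brick myvol host3:/export/brick1 host4:/export/brick1
    gluster volume rebalance myvol start     # spread existing files across both pairs
    gluster volume rebalance myvol status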
02:56 rjoseph joined #gluster
03:00 kshlm joined #gluster
03:07 nightwalk joined #gluster
03:13 vshankar joined #gluster
03:18 shubhendu joined #gluster
03:25 nightwalk joined #gluster
03:30 ajha joined #gluster
03:42 tryggvil joined #gluster
03:46 vshankar joined #gluster
03:48 itisravi joined #gluster
03:55 glusterbot New news from newglusterbugs: [Bug 994353] Dist-geo-rep: Worker in one of the master node keeps crashing and the session in that node is always faulty. <http://goo.gl/tOscXf> || [Bug 994461] Remove options that are deprecated in Big Bend (Geo-replication commands in particular) <http://goo.gl/ykSnXk> || [Bug 994462] Dist-geo-rep : geo-rep failover-failback is broken : special-sync-mode blind results in faulty state. <http:
04:07 jporterfield joined #gluster
04:15 kanagaraj joined #gluster
04:17 davinder joined #gluster
04:17 jporterfield joined #gluster
04:18 shylesh joined #gluster
04:25 glusterbot New news from newglusterbugs: [Bug 1003805] Dist-geo-rep : geo-rep failed to sync few of the hardlinks to one of the slaves, when there are many. <http://goo.gl/gZZ9Li> || [Bug 1003807] Dist-geo-rep: If a single slave node to which all the master nodes do aux mount goes down , all geo-rep status goes to Faulty. <http://goo.gl/fIhFwV>
04:30 kPb_in_ joined #gluster
04:31 johnbot11111 joined #gluster
04:36 johnbot11111 test
04:39 johnbot11 joined #gluster
04:39 dusmant joined #gluster
04:41 nueces joined #gluster
04:44 ndarshan joined #gluster
04:46 vpshastry joined #gluster
04:48 bala joined #gluster
04:49 lalatenduM joined #gluster
04:51 lanning grr...
04:51 lanning I am only getting about 200Mb/s on a 10GbE link via the gluster NFS server
04:52 lanning the gluster NFS process has one core pegged at 100% (the machine is 91% idle)
04:53 ppai joined #gluster
04:54 lalatenduM joined #gluster
05:02 arusso joined #gluster
05:08 jporterfield joined #gluster
05:13 tg2 joined #gluster
05:20 psharma joined #gluster
05:21 shruti joined #gluster
05:31 anands joined #gluster
05:34 satheesh1 joined #gluster
05:37 raghu joined #gluster
05:37 bulde joined #gluster
05:42 vshankar joined #gluster
05:42 kPb_in joined #gluster
05:43 spandit joined #gluster
05:51 ababu joined #gluster
05:54 sgowda joined #gluster
05:55 aib_007 joined #gluster
05:57 nshaikh joined #gluster
06:01 shubhendu joined #gluster
06:07 andreask joined #gluster
06:11 hagarth joined #gluster
06:18 shubhendu joined #gluster
06:21 jtux joined #gluster
06:29 ababu joined #gluster
06:34 andreask joined #gluster
06:36 rastar joined #gluster
06:47 kPb_in joined #gluster
06:52 DV__ joined #gluster
06:54 vimal joined #gluster
06:55 dusmant joined #gluster
06:55 ndarshan joined #gluster
06:57 jag3773 joined #gluster
06:58 jtux joined #gluster
06:58 meghanam joined #gluster
06:58 meghanam_ joined #gluster
07:06 ctria joined #gluster
07:26 ngoswami joined #gluster
07:37 andreask joined #gluster
07:40 ProT-0-TypE joined #gluster
07:40 badone joined #gluster
07:41 ProT-0-TypE joined #gluster
07:41 ProT-0-TypE joined #gluster
07:43 ProT-0-TypE joined #gluster
07:44 ProT-0-TypE joined #gluster
07:45 ProT-0-TypE joined #gluster
07:56 andreask joined #gluster
07:56 ProT-0-TypE joined #gluster
07:57 ProT-0-TypE joined #gluster
07:57 andreask joined #gluster
07:58 ProT-0-TypE joined #gluster
07:58 andreask joined #gluster
07:58 ProT-0-TypE joined #gluster
07:59 ProT-0-TypE joined #gluster
08:00 ProT-0-TypE joined #gluster
08:00 dkorzhevin joined #gluster
08:01 ProT-0-TypE joined #gluster
08:01 ProT-0-TypE joined #gluster
08:02 ProT-0-TypE joined #gluster
08:03 dusmant joined #gluster
08:03 ProT-0-TypE joined #gluster
08:03 ndarshan joined #gluster
08:04 ProT-0-TypE joined #gluster
08:04 ProT-0-TypE joined #gluster
08:05 ProT-0-TypE joined #gluster
08:09 ProT-0-TypE joined #gluster
08:15 jporterfield joined #gluster
08:19 puebele1 joined #gluster
08:22 ProT-0-TypE joined #gluster
08:23 aravindavk joined #gluster
08:25 StarBeast joined #gluster
08:27 ProT-0-TypE joined #gluster
08:30 yosafbridge joined #gluster
08:30 kshlm joined #gluster
08:34 ProT-0-TypE joined #gluster
08:34 mooperd_ joined #gluster
08:35 ProT-0-TypE joined #gluster
08:36 ProT-0-TypE joined #gluster
08:39 kshlm joined #gluster
08:41 kshlm joined #gluster
08:42 ProT-O-TypE joined #gluster
08:44 ddp23 left #gluster
08:44 ProT-0-TypE joined #gluster
08:49 jporterfield joined #gluster
08:51 ricky-ticky joined #gluster
08:51 eseyman joined #gluster
08:59 tru_tru joined #gluster
09:06 ababu joined #gluster
09:08 spandit joined #gluster
09:18 spandit joined #gluster
09:23 mbukatov joined #gluster
09:26 dusmant joined #gluster
09:30 manik joined #gluster
09:32 SteveCooling joined #gluster
09:34 satheesh1 joined #gluster
09:43 dusmant joined #gluster
09:46 tryggvil joined #gluster
09:46 spandit joined #gluster
09:51 eseyman joined #gluster
09:52 davinder2 joined #gluster
09:53 rastar joined #gluster
09:56 glusterbot New news from newglusterbugs: [Bug 996379] gsyncd errors out on start <http://goo.gl/Bgv8Ja>
10:00 tziOm joined #gluster
10:00 ahomolya_ joined #gluster
10:12 andreask joined #gluster
10:34 RameshN joined #gluster
10:39 shruti joined #gluster
10:39 saurabh joined #gluster
10:42 glusterbot New news from resolvedglusterbugs: [Bug 965995] quick-read and open-behind xlator: Make options (volume_options ) structure NULL terminated. <http://goo.gl/kOtWms> || [Bug 961691] CLI crash upon executing "gluster peer status " command <http://goo.gl/1QcVzK> || [Bug 846240] [FEAT] quick-read should use anonymous fd framework <http://goo.gl/FDbuE> || [Bug 953694] Requirements of Samba VFS plugin for glusterfs <http://goo.
10:45 haritsu joined #gluster
10:48 haritsu_ joined #gluster
10:49 haritsu joined #gluster
10:53 kanagaraj joined #gluster
10:54 ndarshan joined #gluster
10:56 aravindavk joined #gluster
10:57 shubhendu joined #gluster
10:58 dusmant joined #gluster
10:58 bulde1 joined #gluster
10:59 ababu joined #gluster
10:59 DV__ joined #gluster
11:03 kkeithley1 joined #gluster
11:09 mohankumar joined #gluster
11:11 rastar joined #gluster
11:19 ppai joined #gluster
11:20 eseyman joined #gluster
11:26 anands joined #gluster
11:27 sgowda joined #gluster
11:28 mohankumar joined #gluster
11:31 satheesh1 joined #gluster
11:35 andreask joined #gluster
11:36 aravindavk joined #gluster
11:37 shruti joined #gluster
11:38 kanagaraj joined #gluster
11:40 shubhendu joined #gluster
11:41 jclift_ joined #gluster
11:42 eseyman joined #gluster
11:42 glusterbot New news from resolvedglusterbugs: [Bug 962619] glusterd crashes on volume-stop <http://goo.gl/XXzSY>
11:44 dusmant joined #gluster
11:44 hagarth joined #gluster
11:45 sprachgenerator joined #gluster
11:46 bulde joined #gluster
11:48 manik joined #gluster
11:56 ndarshan joined #gluster
11:58 rastar joined #gluster
11:58 haritsu joined #gluster
11:59 edward1 joined #gluster
12:01 mohankumar joined #gluster
12:10 failshell joined #gluster
12:13 glusterbot New news from resolvedglusterbugs: [Bug 983975] Not all tests call 'cleanup' in the end, causing difficulties with single test runs <http://goo.gl/IZZj1>
12:21 ProT-0-TypE joined #gluster
12:27 glusterbot New news from newglusterbugs: [Bug 1004751] glusterfs-api.pc.in contains an rpath <http://goo.gl/IxuvNt> || [Bug 1004756] Not all tests call 'cleanup' in the end, causing difficulties with single test runs <http://goo.gl/NSdcM7>
12:27 hagarth @channelstats
12:27 glusterbot hagarth: On #gluster there have been 178315 messages, containing 7471740 characters, 1247822 words, 4959 smileys, and 659 frowns; 1085 of those messages were ACTIONs. There have been 69494 joins, 2145 parts, 67323 quits, 23 kicks, 167 mode changes, and 7 topic changes. There are currently 233 users and the channel has peaked at 239 users.
12:29 shubhendu joined #gluster
12:40 ppai joined #gluster
12:45 robo joined #gluster
12:49 vpshastry joined #gluster
12:51 B21956 joined #gluster
12:53 mohankumar joined #gluster
12:57 glusterbot New news from newglusterbugs: [Bug 1004100] smbd crashes in libglusterfs under heavy load <http://goo.gl/XRcTC1>
13:06 chirino joined #gluster
13:08 Cenbe joined #gluster
13:09 shubhendu joined #gluster
13:14 rastar joined #gluster
13:15 mattf joined #gluster
13:15 rwheeler joined #gluster
13:19 haritsu joined #gluster
13:21 hagarth joined #gluster
13:25 bennyturns joined #gluster
13:26 vpshastry joined #gluster
13:29 vpshastry left #gluster
13:29 theron joined #gluster
13:31 sgowda joined #gluster
13:33 anands joined #gluster
13:37 aliguori joined #gluster
13:40 bugs_ joined #gluster
13:41 lkoranda_ joined #gluster
13:41 rcheleguini joined #gluster
13:43 jskinner joined #gluster
13:45 mohankumar joined #gluster
13:51 davinder joined #gluster
13:53 kaptk2 joined #gluster
14:05 lpabon joined #gluster
14:06 dusmant joined #gluster
14:12 jdarcy joined #gluster
14:19 neofob left #gluster
14:20 Alpinist joined #gluster
14:23 DV__ joined #gluster
14:24 haritsu joined #gluster
14:39 Alpinist joined #gluster
14:44 shylesh joined #gluster
14:47 saurabh joined #gluster
14:48 nightwalk joined #gluster
14:50 jporterfield joined #gluster
14:59 glusterbot New news from newglusterbugs: [Bug 1002945] Tracking an effort to convert the listed test cases to standard regression test format. <http://goo.gl/SMLUb1>
15:01 awheeler joined #gluster
15:02 rjoseph joined #gluster
15:04 manik joined #gluster
15:08 daMaestro joined #gluster
15:09 premera joined #gluster
15:10 LoudNoises joined #gluster
15:15 compbio left #gluster
15:15 haritsu joined #gluster
15:23 dewey joined #gluster
15:33 robo joined #gluster
15:34 _pol joined #gluster
15:41 johnbot11 joined #gluster
15:43 glusterbot New news from resolvedglusterbugs: [Bug 913544] stub unwind of ftruncate leads to crash <http://goo.gl/8pvlWX>
15:44 robo joined #gluster
15:45 sprachgenerator joined #gluster
15:47 anands joined #gluster
15:52 bulde joined #gluster
15:55 satheesh joined #gluster
16:01 soukihei joined #gluster
16:03 zerick joined #gluster
16:07 _pol joined #gluster
16:09 zaitcev joined #gluster
16:11 squizzi joined #gluster
16:11 Mo__ joined #gluster
16:14 Harlock_ joined #gluster
16:14 Harlock_ Hi
16:14 glusterbot Harlock_: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
16:16 Harlock_ I wish to build a fully open source solution to serve iSCSI in HA, using (top down): CentOS, ZFS on Linux, GlusterFS, Corosync, CMAN and an iSCSI target (maybe LIO, because it supports ALUA)
16:16 _pol joined #gluster
16:17 Harlock_ Actually I've been working on it and reached some goals; going bottom-up, I stopped at the corosync/cman level...
16:18 Harlock_ I don't know if it is possible to make two separate iSCSI targets work as multipath (to serve VMware, for example)
16:19 Harlock_ So, I hope to read some answers.... Thank you in advance.
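A minimal sketch of just the GlusterFS layer of the stack Harlock_ describes, with hypothetical pool, node, and volume names; the corosync/CMAN and LIO layers are left out here, and a subdirectory of the ZFS dataset is used as the brick path.

    # on each node: a ZFS dataset to hold the brick
    zfs create tank/gluster-brick
    mkdir /tank/gluster-brick/brick
    # from one node: a two-way replicated volume to back the iSCSI LUN files
    gluster peer probe nodeB
    gluster volume create iscsi-store replica 2 nodeA:/tank/gluster-brick/brick nodeB:/tank/gluster-brick/brick
    gluster volume start iscsi-store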
16:21 davinder [socket.c:514:__socket_rwv] 0-glusterfs: readv failed (No data available)
16:21 davinder I am getting this error in the log while starting the gluster daemon
16:22 davinder I have stopped the daemon
16:22 davinder but now it is not starting
16:22 davinder Starting glusterd:                                         [FAILED]
16:29 davinder I want to change the IP address of one gluster server
16:29 davinder after that the second gluster server failed to start
16:30 davinder is there any process we have to follow?
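A hedged checklist for the situation davinder describes, assuming default paths for GlusterFS of this era: glusterd records its peers by the address used at probe time, so after one server's IP changes, the other server's peer definition no longer matches, which can keep glusterd from starting cleanly.

    # see why glusterd refuses to start
    less /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
    glusterd --debug                      # run in the foreground with verbose logging
    # see which address the surviving server has recorded for its peer
    grep -r hostname /var/lib/glusterd/peers/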
16:33 LoofAB joined #gluster
16:34 LoofAB Can gluster provide IOPS as an easily accessible metric? that is, I want to know how much bandwidth users are using. Both instant and peak/avg for a span of time
16:38 LoofAB Basically not just profiling for a short time... but constant, available via API and not as detailed (block breakdown, etc) as profiling.
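What GlusterFS itself offers for this is "volume profile" and "volume top"; a short sketch with a hypothetical volume name. Profiling can be left enabled and polled on a schedule, though the output is per-brick counters and latency buckets rather than a ready-made per-user bandwidth API.

    gluster volume profile vol0 start
    gluster volume profile vol0 info      # cumulative and per-interval stats for each brick
    gluster volume top vol0 read-perf     # also: write-perf, open, read, write
    gluster volume profile vol0 stop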
16:39 xoritor_ joined #gluster
16:39 xoritor_ are there any docs on how to convert from distributed-replicated to just replicated?  ie... mirror all data on all bricks
16:47 ricky-ticky joined #gluster
16:47 crnkyadmin joined #gluster
16:48 crnkyadmin Hello all. Any suggestions for how to make self-heal not monopolize all of the CPU on a server?
16:49 xoritor_ renice it to +20?
16:49 crnkyadmin and then drop glusterfs back to normal once the heal is done?
16:49 crnkyadmin err glusterfsd
16:50 xoritor_ I dunno... just a guess, but I would assume that would work and yea I would put it back to normal once it finishes
16:51 xoritor_ maybe use +15 instead of +20 as it may not get any time at +20
16:51 crnkyadmin but no built-in tuning parameter that can tell gluster to play nice while healing?
16:52 xoritor_ not sure, let me dig in the docs real quick... what version are you running?
16:52 crnkyadmin 3.3.1
16:53 xoritor_ what command are you using?
16:53 xoritor_ rebalance?
16:54 xoritor_ or heal?
16:54 xoritor_ or is it happening automatically?
16:54 crnkyadmin no, it just started chewing up CPU after starting up gluster on one of the nodes. I am assuming it is self-heal
16:55 xoritor_ local lan? or geo replication?
16:55 crnkyadmin local(ish) - they are AWS instances in the same region, but different availability zones
16:56 kPb_in_ joined #gluster
16:56 xoritor_ im not seeing anything in the docs, and I am no expert, I would try the renice
16:57 crnkyadmin that might be worth a try, but seems a bit hackish for a production environment
16:58 anands joined #gluster
16:59 xoritor_ LOL... well i don't know if i would call it hackish as its a core function for that purpose
16:59 xoritor_ it is not automated i will give you that
16:59 xoritor_ another way may be to set cpu limiting via pam
17:00 vpshastry joined #gluster
17:00 xoritor_ or maybe even make a cgroup to deal with it
17:02 crnkyadmin CentOS5... I don't think that has Cgroups, but good idea
17:03 _pol_ joined #gluster
17:03 xoritor_ hmm... it may have some limited cgroup management in the later versions ie... 5.7+
17:04 xoritor_ seems you may be right
17:05 xoritor_ heh... "cgroups was extremely invasive so you'll never see that in RHEL 5," Tim Burke
17:05 Humble joined #gluster
17:06 crnkyadmin but I guess the bottom line is that chewing CPU (200% on a 2 core instance) during self-heal is normal and expected, and there is no tuning built in to gluster to tame that?
17:07 xoritor_ i would expect that to happen... and i have not seen it; again, that does not mean it is not there as I am no expert
17:07 xoritor_ maybe one of the other people here can get a better answer to that for you
17:08 xoritor_ i just popped in to try and get someone to help me change from distributed-replicated to replicated without data loss
17:08 xoritor_ ;-)
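A sketch of the two workarounds floated above, renice and cgroups; the nice value and cgroup names are illustrative, and the cgroup variant needs a newer kernel than CentOS 5 ships.

    # lower the priority of the brick processes while the heal runs
    renice +15 -p $(pidof glusterfsd)
    # or, on a cgroup-capable kernel, cap their CPU share instead
    mkdir -p /sys/fs/cgroup/cpu/gluster
    echo 256 > /sys/fs/cgroup/cpu/gluster/cpu.shares
    for pid in $(pidof glusterfsd); do echo $pid > /sys/fs/cgroup/cpu/gluster/tasks; done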
17:08 StarBeast joined #gluster
17:09 jdarcy joined #gluster
17:15 _pol joined #gluster
17:17 _pol joined #gluster
17:18 xoritor joined #gluster
17:18 vpshastry joined #gluster
17:30 glusterbot New news from newglusterbugs: [Bug 997140] Gluster NFS server dies <http://goo.gl/7aXct0>
17:33 GabrieleV joined #gluster
17:44 _pol joined #gluster
17:46 StarBeast joined #gluster
17:48 lalatenduM joined #gluster
17:48 ricky-ticky joined #gluster
17:53 manik joined #gluster
17:59 xoritor can anyone please tell me how to change a distributed-replicated volume to just replicated?
17:59 xoritor please
18:00 xoritor i am digging through lots of docs, but nothing states how to do that
18:00 tg2 how are the bricks organized
18:01 tg2 servers/bricks + replication/distribution config
18:01 xoritor 4 servers each with 1 brick 2 x 2
18:02 xoritor i have some data on the second 2 that needs to sync back first though
18:02 tg2 how large are the bricks
18:02 tg2 and how much total space do you have per replicaset
18:02 xoritor 2TB
18:02 xoritor each
18:02 tg2 (i'm assuming you're replicating then distributing to both replicas?)
18:02 tg2 ie raid 1 * 2 + raid 0 across them
18:02 tg2 type setup
18:02 xoritor yea
18:02 tg2 two mirrors and then a distributed across them?
18:03 xoritor exactly
18:03 tg2 so you just want to remove the two mirrors
18:03 xoritor i have to make sure that the data syncs back first though
18:03 xoritor :-/
18:03 tg2 i'd like to say that remove-brick should be smart enough to do this
18:03 tg2 but i've had issues with it in only a distributed setup, so I can't vouch for that
18:03 xoritor LOL
18:04 xoritor ok... well its not mission critical data, so i can restore it if i blow it all away... i would just like to NOT have to do that
18:05 xoritor where should i run the remove-brick?
18:10 xoritor well it removed the bricks... but no data got synced back
18:10 xoritor lol
18:10 xoritor rsync work for this?
18:12 crnkyadmin1 joined #gluster
18:13 crnkyadmin left #gluster
18:16 tg2 you need to do remove-brick start
18:16 tg2 but either way
18:16 tg2 you're removing the brick that is the mirrored brick right?
18:16 tg2 the remaining brick should be there with a copy of its data
18:17 tg2 if not yeah you can always just rsync it out of the root directory on both replicas into a single directory and you should have your entire dataset
18:17 xoritor no i am removing the 2 distributed mirrors
18:18 xoritor err... the 2 servers that are a mirror that are part of the distributed group
18:18 xoritor ie.. s1+s2 <--->  s3+s4  i am removing s3+s4 and need that data back onto s1+s2
18:19 xoritor + == mirror and <---> == distribute
18:19 xoritor s/mirror/replicate/
18:19 glusterbot What xoritor meant to say was: err... the 2 servers that are a replicate that are part of the distributed group
18:20 xoritor my head is starting to hurt
18:21 tg2 say you have 4 bricks ok
18:21 tg2 2 on each side
18:21 tg2 those 2 are mirrored
18:21 tg2 so like you have two raid 1 arrays
18:22 tg2 then you are distributing files across both (sort of like raid 0)
18:22 xoritor right... ie... s1+s2
18:22 xoritor right 0+1
18:22 tg2 you can remove s1 and s3
18:22 tg2 and still have your data
18:22 xoritor or 1+0 or 10 rather
18:22 rotbeard joined #gluster
18:22 xoritor but not the s3+s4 replica
18:23 tg2 no thats not a replica, its half of the distribution
18:23 xoritor ie... the stripe can not be broken and retain all of the data
18:23 xoritor right... sorry... terminology
18:23 tg2 if you did this with conventional drives, and you removed more than 1 drive on the same side of the stripe
18:23 tg2 it would fail
18:23 tg2 so if you removed s1+s2
18:23 tg2 you'd lose your data since s3 and s4 have identical copies of half of it
18:24 xoritor no way to get to that data... i get it
18:24 tg2 you can remove any combo of s1 + (s3 or s4)
18:24 tg2 or s2 + (s3 or s4)
18:24 tg2 but not s1 + s2
18:24 tg2 or s3 + s4
18:24 xoritor but that data really is there... just not "there" to glusterfs... so rsync it back should work right?
18:24 xoritor maybe...
18:24 tg2 yeah
18:24 tg2 if you rsync it back from its underlying directory into the gluster array
18:25 tg2 it'll be there
18:25 tg2 so you can do
18:25 tg2 remove-brick [...] s1 start
18:25 tg2 it should just remove it from the array
18:25 tg2 then commit
18:25 tg2 your gluster volume should still have all its data
18:25 tg2 as you only removed a mirror
18:26 tg2 thats if you did replicate on top of distribute
18:34 dbruhn joined #gluster
18:38 xoritor tg2, thanks
18:38 xoritor it just makes my head hurt...
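Putting tg2's advice together, a hedged sketch of shrinking a 2x2 distributed-replicate volume down to a single replica pair; the volume name and brick paths are hypothetical. On 3.3 and later, the start/status/commit flow is what migrates data off the departing pair before it is dropped, which is the step that was skipped above.

    gluster volume remove-brick vol s3:/brick s4:/brick start
    gluster volume remove-brick vol s3:/brick s4:/brick status   # wait until it reports completed
    gluster volume remove-brick vol s3:/brick s4:/brick commit   # volume is now a plain 1x2 replica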
19:00 robo joined #gluster
19:14 ProT-0-TypE joined #gluster
19:16 vpshastry left #gluster
19:25 chirino joined #gluster
19:55 luca___ joined #gluster
19:55 nessuino hello, everybody!
19:57 mattf hello nessuino
19:59 robo joined #gluster
20:07 sprachgenerator joined #gluster
20:09 ProT-0-TypE joined #gluster
20:14 nessuino left #gluster
20:21 squizzi joined #gluster
20:28 johnbot11 joined #gluster
20:31 robo joined #gluster
21:03 robo joined #gluster
21:03 mooperd_ joined #gluster
21:07 edong23 joined #gluster
21:09 sac`away joined #gluster
21:10 nightwalk joined #gluster
21:24 rcheleguini joined #gluster
21:33 edong23 joined #gluster
21:34 johnbot11 joined #gluster
21:43 mmalesa joined #gluster
21:46 jporterfield joined #gluster
21:46 jcsp joined #gluster
21:47 tryggvil joined #gluster
22:11 _pol joined #gluster
22:19 andreask joined #gluster
22:24 robo joined #gluster
22:31 fidevo joined #gluster
22:47 spligak joined #gluster
22:49 jporterfield joined #gluster
22:49 shapemaker joined #gluster
22:53 spligak when mounting a volume, I'm getting this error: [glusterfsd-mgmt.c:1655:mgmt_getspec_cbk] 0-mgmt: failed to fetch volume file (key:/vol0)
22:53 spligak is there a limit to the number of times you can mount a volume?
22:53 spligak practical limit or otherwise, I guess.
22:59 spligak I think something else is up. When I try and run a "volume sync vol0" on a host I just rebooted, I get "volume sync: failed: vol0, is not a friend"
22:59 jporterfield joined #gluster
22:59 spligak my gluster admin foo is weak, clearly.
23:00 spligak I always considered vol0 a friend. A good, kind volume.
23:03 tjstansell volume sync takes a hostname as an argument, not a volume name.
23:04 tjstansell how are you trying to mount your volume?
23:05 spligak mount -t glusterfs gluster-01:/vol0 /mnt/vol0a  -- that sort of thing. found something else, though
23:05 tjstansell you should check 'gluster peer status' and 'gluster volume status' to see if those show issues.
23:05 spligak the bricks on the host that just came back up are not reporting as being online
23:06 spligak yeah, the status is reporting the bricks on that host are offline
23:07 spligak so there are 3 hosts. gluster-0{0,1,2}  -- 1 & 2 are fine, 0 is reporting all 6 of its bricks offline.
23:08 spligak this is after a normal reboot. I'm assuming I'm missing something trivial
23:08 tjstansell the bricks exist, right? :)
23:08 tjstansell you can try 'gluster volume start <vol> force' to see if it can restart that brick process
23:08 spligak I uh.
23:09 spligak if we could never speak of this again.
23:09 tjstansell or look in /var/log/glusterfs/bricks for clues
23:09 spligak that'd be great.
23:09 spligak they aren't mounted :(
23:09 tjstansell heh ;)
23:09 spligak so when I mount them, it should just pick them right back up? or is there something I should run? a sync?
23:12 tjstansell after they're mounted, you'll need to start each volume with the force command to get it to restart those brick processes.
23:12 tjstansell since the volumes are technically already started.
23:12 spligak okay. excellent. thanks for your help! :)
23:12 tjstansell yep.
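For the record, the recovery sequence tjstansell walks spligak through, as a sketch with hypothetical mount points:

    # mount the brick filesystems that were missing after the reboot
    mount /export/brick1                  # ...and the other bricks on that host
    # the volume is already "started", so force a restart of the local brick processes
    gluster volume start vol0 force
    gluster volume status vol0            # bricks on gluster-00 should now show as online
    gluster volume heal vol0              # optionally trigger self-heal for writes made while it was down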
23:13 spligak to my earlier question - is there any practical limit to the number of times you can mount a volume?
23:14 tjstansell i don't know that one....
23:14 tjstansell does anyone here know what's required to mount iso images hosted on glusterfs safely?
23:15 tjstansell i found an entry in a GlusterFS_Technical_FAQ page that says it should be mounted with direct I/O disabled...
23:15 tjstansell but that talks about using a glusterfs -f <your_spec_file> ... command which doesn't seem current...
23:15 tjstansell but i really don't know.
23:20 tjstansell i ask because we have a replica 2 cluster that mounts OS iso files and we've had an issue where rebooting one node will cause i/o errors on the other node while trying to access the iso contents.
23:20 tjstansell remounting it seems to clear it, but not sure why it's failing to begin with...
23:21 tjstansell it's almost like the node that got rebooted contained some sort of state/lock info that the other node then lost...
23:21 tjstansell any thoughts?
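For reference, the old FAQ advice tjstansell mentions maps onto current mount syntax roughly as below; the server, volume, and image names are made up, and whether this avoids the post-reboot I/O errors is untested here.

    # native client mount with direct I/O disabled, per the Technical FAQ
    mount -t glusterfs -o direct-io-mode=disable gluster-01:/isovol /mnt/isovol
    # then loop-mount an image from within the gluster mount
    mount -o loop,ro /mnt/isovol/some-os.iso /mnt/cdrom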
23:27 vpshastry joined #gluster
23:36 vpshastry1 joined #gluster
23:50 robo joined #gluster
