IRC log for #gluster, 2014-01-02

All times shown according to UTC.

Time Nick Message
00:03 jobewan joined #gluster
01:10 jporterfield joined #gluster
01:21 glusterbot New news from newglusterbugs: [Bug 1028672] BD xlator <https://bugzilla.redhat.com/show_bug.cgi?id=1028672>
01:26 vpshastry joined #gluster
01:40 r0b joined #gluster
01:41 bala joined #gluster
01:42 bharata-rao joined #gluster
01:42 Zylon joined #gluster
02:04 zwu joined #gluster
03:01 kshlm joined #gluster
03:01 overclk joined #gluster
03:10 zwu joined #gluster
03:39 RameshN joined #gluster
03:40 shubhendu joined #gluster
03:52 itisravi joined #gluster
03:53 kdhananjay joined #gluster
03:55 dhyan joined #gluster
04:01 MiteshShah joined #gluster
04:10 raghu joined #gluster
04:28 CROS_ left #gluster
04:34 rastar joined #gluster
04:40 badone joined #gluster
04:44 r0b joined #gluster
04:45 davinder joined #gluster
04:51 divbell joined #gluster
04:58 ndarshan joined #gluster
05:01 SFLimey_ joined #gluster
05:02 ppai joined #gluster
05:12 bala joined #gluster
05:23 jporterfield joined #gluster
05:24 CheRi joined #gluster
05:33 dusmant joined #gluster
05:35 jporterfield joined #gluster
05:36 saurabh joined #gluster
05:46 kanagaraj joined #gluster
05:46 psharma joined #gluster
05:48 MiteshShah joined #gluster
05:49 mohankumar__ joined #gluster
05:53 davinder joined #gluster
05:54 aravindavk joined #gluster
05:55 prasanth_ joined #gluster
05:57 satheesh joined #gluster
05:58 vimal joined #gluster
06:04 vpshastry joined #gluster
06:05 tor joined #gluster
06:17 lalatenduM joined #gluster
06:21 mkzero joined #gluster
06:36 jporterfield joined #gluster
06:43 dhyan joined #gluster
06:44 jporterfield joined #gluster
06:46 kevein joined #gluster
06:49 satheesh joined #gluster
06:53 davinder2 joined #gluster
06:55 satheesh joined #gluster
06:55 davinder joined #gluster
06:58 mkzero joined #gluster
06:59 davinder2 joined #gluster
07:01 ngoswami joined #gluster
07:15 DV joined #gluster
07:23 dusmant joined #gluster
07:41 ekuric joined #gluster
07:52 keytab joined #gluster
08:12 eseyman joined #gluster
08:21 ctria joined #gluster
08:24 kdhananjay joined #gluster
08:30 mkzero joined #gluster
08:38 eseyman joined #gluster
08:38 mgebbe_ joined #gluster
08:40 mgebbe_ joined #gluster
08:47 masterzen joined #gluster
08:53 shubhendu joined #gluster
08:53 kanagaraj joined #gluster
08:54 dusmant joined #gluster
08:54 ndarshan joined #gluster
09:01 dneary joined #gluster
09:07 tziOm joined #gluster
09:13 bala joined #gluster
09:20 satheesh joined #gluster
09:27 prasanth_ joined #gluster
09:32 msciciel_ joined #gluster
09:35 jporterfield joined #gluster
09:42 jporterfield joined #gluster
09:47 bolazzles joined #gluster
09:51 RedShift joined #gluster
09:52 jporterfield joined #gluster
09:56 nocturn Hi, I upgraded to Gluster 3.4 on Scientific Linux. The packages now seem to be in the official repos, but I can't find the server package or an init script to start the server.
09:56 nocturn Anyone know what is wrong?
10:00 samppah nocturn: (rh)el only ships the client; for the server you still have to use the community packages
10:01 samppah @latest
10:01 glusterbot samppah: The latest version is available at http://download.gluster.org/pub/gluster/glusterfs/LATEST/ . There is a .repo file for yum or see @ppa for ubuntu.
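
(As a hedged aside: a minimal sketch of what the community-package route looks like on an EL6-style system. The exact .repo filename under the LATEST directory above is not spelled out here, and the package names are the usual community ones.)

    # drop the .repo file from the LATEST directory linked above into /etc/yum.repos.d/
    # (exact filename in that directory may differ)
    yum install glusterfs-server glusterfs-fuse
    service glusterd start
    chkconfig glusterd on
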
10:05 kanagaraj joined #gluster
10:22 psharma joined #gluster
10:24 davinder joined #gluster
10:31 mkzero joined #gluster
10:37 RameshN joined #gluster
10:39 TvL2386 joined #gluster
10:42 clag_ joined #gluster
10:50 clag_ left #gluster
10:51 ndarshan joined #gluster
10:52 kanagaraj joined #gluster
10:52 vpshastry1 joined #gluster
10:53 vimal joined #gluster
10:54 dusmant joined #gluster
10:55 bala joined #gluster
10:57 prasanth_ joined #gluster
10:57 ppai joined #gluster
10:58 shubhendu joined #gluster
10:58 mkzero joined #gluster
10:59 qdk joined #gluster
11:06 satheesh joined #gluster
11:26 psyl0n joined #gluster
11:36 ppai joined #gluster
11:36 jporterfield joined #gluster
11:36 MiteshShah joined #gluster
11:36 vpshastry1 joined #gluster
11:38 cyberbootje Hi, i'm trying to recover a brick that is alive again and i'm getting:
11:38 cyberbootje Brick 001:/st/sas
11:38 cyberbootje Number of entries: 0
11:38 cyberbootje Status: Brick is Not connected
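
(A hedged sketch of the usual first checks for a brick reported as "Not connected"; the volume name below is a placeholder.)

    # check whether the brick process for 001:/st/sas is online and has a PID/port
    gluster volume status VOLNAME
    # if it is down, ask glusterd to respawn any missing brick processes
    gluster volume start VOLNAME force
    # then re-check the self-heal backlog
    gluster volume heal VOLNAME info
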
11:42 jporterfield joined #gluster
11:46 mkzero joined #gluster
11:48 saurabh joined #gluster
11:51 kdhananjay joined #gluster
11:52 kkeithley joined #gluster
11:53 kkeithley joined #gluster
11:58 CheRi joined #gluster
12:04 itisravi_ joined #gluster
12:12 rastar joined #gluster
12:12 jporterfield joined #gluster
12:40 andreask joined #gluster
12:44 ppai joined #gluster
12:44 klaxa|work joined #gluster
12:45 klaxa|work hi, i have the following use case: i want to increase the size of my underlying two bricks in a replica setup with glusterfs 3.3.2
12:46 klaxa|work my idea was to remove one brick at a time from the replica, since i'm trying to minimize downtime
12:46 klaxa|work removing one brick fails though
12:46 klaxa|work gluster> volume remove-brick storage replica 1 vsh06:/srv/glusterfs
12:46 klaxa|work Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
12:46 klaxa|work Remove Brick commit force unsuccessful
12:46 klaxa|work is what i get
12:47 klaxa|work which seems not too wrong, since you can't create a replica volume with 1 brick
12:47 yinyin joined #gluster
12:47 klaxa|work but how do i reduce the volume to a non-replica volume in that case?
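
(For reference, a hedged sketch: later GlusterFS releases accept a remove-brick form that lowers the replica count in the same step; whether 3.3.2 honours it is not confirmed here.)

    # drop to a plain, non-replicated volume by removing one brick and setting replica 1
    gluster volume remove-brick storage replica 1 vsh06:/srv/glusterfs force
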
12:49 prasanth_ joined #gluster
12:58 jporterfield joined #gluster
13:22 dhyan joined #gluster
13:25 lalatenduM klaxa|work, couple of suggestions
13:26 lalatenduM klaxa|work, first, the remove-brick command you are using seems wrong
13:27 lalatenduM klaxa|work, the correct syntax is "gluster volume remove-brick VOLNAME BRICK start"
13:28 lalatenduM klaxa|work, also I am not sure this is exactly what you want to do, as you said you want to increase the size of the underlying two bricks. Do you have new disks now? If yes, then we can replace the bricks
13:29 lalatenduM with the add-brick and remove-brick commands
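
(A hedged sketch of that add-brick/remove-brick swap, assuming the larger disks are already mounted. "storage" is the volume name from above; vsh05 and the /srv/new-brick paths are placeholders.)

    # add a new replica pair backed by the larger disks
    gluster volume add-brick storage vsh05:/srv/new-brick vsh06:/srv/new-brick
    # drain the old pair, watch progress, then finalize once migration completes
    gluster volume remove-brick storage vsh05:/srv/glusterfs vsh06:/srv/glusterfs start
    gluster volume remove-brick storage vsh05:/srv/glusterfs vsh06:/srv/glusterfs status
    gluster volume remove-brick storage vsh05:/srv/glusterfs vsh06:/srv/glusterfs commit
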
13:31 cyberbootje i have an issue: i have a replica 2 gluster storage and one of the two went read-only yesterday, no logging whatsoever. anyone have an idea?
13:34 lalatenduM cyberbootje, check the node's /var/log/glusterfs/etc-gluster* log, are you saying the volume is readonly?
13:35 cyberbootje lalatenduM: On the client or on the storage?
13:35 lalatenduM cyberbootje, is the mounted volume readonly on the client?
13:37 cyberbootje no
13:37 cyberbootje the second storage took over
13:37 cyberbootje first storage went complete R/O
13:38 lalatenduM cyberbootje, ok, got it. that means one brick out of the replica pair became read-only. what is the on-disk filesystem on that brick?
13:39 cyberbootje EXT4
13:44 klaxa|work lalatenduM: i tried it with the start parameter too, no change
13:45 B21956 joined #gluster
13:46 yinyin joined #gluster
13:51 cyberbootje lalatenduM: i already checked the the raid controller, disks were not degraded
13:53 jag3773 joined #gluster
13:53 glusterbot New news from newglusterbugs: [Bug 977497] gluster spamming with E [socket.c:2788:socket_connect] 0-management: connection attempt failed (Connection refused) when nfs daemon is off <https://bugzilla.redhat.com/show_bug.cgi?id=977497>
13:54 lalatenduM cyberbootje, ext4 had some issues with older gluster versions. What is your gluster version?
13:55 cyberbootje 3.3.1
13:56 Retenodus joined #gluster
13:56 Retenodus Hello
13:56 glusterbot Retenodus: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
13:57 hagarth joined #gluster
13:59 lalatenduM cyberbootje, I have to dig into the mailing list to find out which build fixed the ext4 issue. if the on-disk filesystem is ro because of some issue, it is most likely not because of any gluster issue
14:00 cyberbootje lalatenduM: what kind of bug was that?
14:00 Retenodus Now, I have 2 hosts with one brick each. I configured the volume to be in Distribute/Replicate mode with groups of 2 bricks. What happens if I add a third host/brick in that mode (still with groups of 2 bricks)? I guess it won't be replicated, right?
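
(A hedged illustration of what gluster enforces for Retenodus's case: on a replica 2 volume, add-brick only accepts bricks in multiples of the replica count unless the replica count itself is changed, so a lone third brick is refused rather than left un-replicated. Names below are placeholders.)

    # rejected: one brick is not a multiple of the replica count (2)
    gluster volume add-brick VOLNAME server3:/brick
    # accepted: a full pair extends the distribute layer, and each pair still replicates
    gluster volume add-brick VOLNAME server3:/brick server4:/brick
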
14:01 cyberbootje lalatenduM: because we also had numerous other issues where the client rebooted for no reason at all (the client is a VM host), so we just replaced all hardware and started over
14:01 lalatenduM cyberbootje, an xfs filesystem with inode size 512 is recommended for gluster bricks. I think that bug is not related to your issue, though; it caused problems at the gluster level. In your case the issue seems to be with the on-disk file system
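
(A minimal sketch of the brick layout lalatenduM is recommending; the device and mount point are placeholders.)

    # format the brick device as xfs with 512-byte inodes, the usual recommendation for gluster bricks
    mkfs.xfs -i size=512 /dev/sdb1
    mkdir -p /bricks/sas
    mount /dev/sdb1 /bricks/sas
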
14:02 lalatenduM cyberbootje, let me check if I can get you the bug number
14:02 cyberbootje ok
14:02 cyberbootje lalatenduM: any clue what can cause an R/O, other than hardware issues?
14:03 cyberbootje lalatenduM: maybe something in combination with gluster?
14:03 lalatenduM cyberbootje, it should be some filesystem issue. you should run fsck on the brick partition
14:05 cyberbootje i did, that's not the problem
14:05 cyberbootje i just want to prevent it from happening again
14:06 lalatenduM cyberbootje, check this email http://gluster.org/pipermail/gluster-users/2012-August/011164.html
14:06 glusterbot Title: [Gluster-users] ext4 issue explained (at gluster.org)
14:09 lalatenduM cyberbootje, I would suggest sending an email to gluster-users describing your issue, http://www.gluster.org/interact/mailinglists/
14:09 plarsen joined #gluster
14:11 lalatenduM klaxa|work, sorry I missed your reply. can you check whether all processes are running in the "gluster volume status" output?
14:12 theron joined #gluster
14:14 psyl0n joined #gluster
14:15 lalatenduM klaxa|work, what is the version of gluster you are running? I would also suggest reporting the issue on the gluster-users mailing list http://www.gluster.org/interact/mailinglists/
14:15 calum_ joined #gluster
14:15 techminer left #gluster
14:16 klaxa|work glusterfs 3.3.2, we have rescheduled things now though
14:18 lalatenduM klaxa|work, ok
14:27 vpshastry joined #gluster
14:30 yk joined #gluster
14:30 bennyturns joined #gluster
14:32 yk hi, i have a novice Q. regarding replication
14:32 yk if i have 2 servers with 4 bricks in each one
14:33 yk all bricks connected to the same volume. if i build the volume using replica 2, does glusterfs know how to split the bricks between the two servers
14:33 yk so if one fails i will not lose my data?
14:36 tor joined #gluster
14:37 vpshastry left #gluster
14:42 vipulnayyar joined #gluster
14:44 jag3773 joined #gluster
14:48 yk joined #gluster
14:48 vimal joined #gluster
14:49 ira joined #gluster
14:53 bala joined #gluster
14:53 glusterbot New news from newglusterbugs: [Bug 1047902] Possible small memory leak in rpcsvc_drc_init() <https://bugzilla.redhat.com/show_bug.cgi?id=1047902>
14:56 mkzero joined #gluster
15:05 ndevos yk: the order in which you create the 'replica 2' volume is important, you pass 'replica pairs of bricks' on the command line
15:05 ndevos yk: this would do it: gluster volume create MYVOL replica 2 server1:/bricks/1st-brick/data server2:/bricks/1st-brick/data server1:/bricks/2nd-brick/data server2:/bricks/2nd-brick/data
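
(A hedged illustration of how that ordering shows up afterwards: in "gluster volume info" output, consecutive bricks form the replica sets, so each pair spans both servers. Output abridged.)

    gluster volume info MYVOL
    # Type: Distributed-Replicate
    # Number of Bricks: 2 x 2 = 4
    # Brick1: server1:/bricks/1st-brick/data   <- replica set 1
    # Brick2: server2:/bricks/1st-brick/data   <- replica set 1
    # Brick3: server1:/bricks/2nd-brick/data   <- replica set 2
    # Brick4: server2:/bricks/2nd-brick/data   <- replica set 2
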
15:06 ekuric joined #gluster
15:08 jporterfield joined #gluster
15:08 johnmark howdidly holy
15:08 JMWbot johnmark: @3 purpleidea reminded you to: thank purpleidea for an awesome JMWbot (please report any bugs) [1542431 sec(s) ago]
15:08 JMWbot johnmark: @5 purpleidea reminded you to: remind purpleidea to implement a @harass action for JMWbot  [1471195 sec(s) ago]
15:08 JMWbot johnmark: @6 purpleidea reminded you to: get semiosis article updated from irc.gnu.org to freenode [1375725 sec(s) ago]
15:08 JMWbot johnmark: Use: JMWbot: @done <id> to set task as done.
15:08 ekuric joined #gluster
15:09 yk thanks ndevos
15:09 johnmark purpleidea: hrm... I have no way to remind you of updating the bot to 1. allow a harass function for you and 2. make it more general purpose :)
15:13 jobewan joined #gluster
15:15 diegows joined #gluster
15:17 wushudoin joined #gluster
15:25 primechuck joined #gluster
15:39 chirino joined #gluster
15:39 mkzero joined #gluster
15:39 aravindavk joined #gluster
15:44 andreask joined #gluster
15:48 _BryanHM_ joined #gluster
15:51 dbruhn joined #gluster
15:57 zaitcev joined #gluster
16:03 zerick joined #gluster
16:07 purpleidea johnmark: i still have to post this code sorry! i've been busy! did you see my latest blog post? the puppet-gluster+vagrant stuff is almost done... working on two bugs :P
16:08 bennyturns joined #gluster
16:12 pk1 joined #gluster
16:12 daMaestro joined #gluster
16:29 pk1 left #gluster
16:32 ekuric left #gluster
16:35 social joined #gluster
16:41 johnbot11 joined #gluster
16:43 bolazzles joined #gluster
16:50 dewey joined #gluster
16:57 thogue joined #gluster
17:06 LoudNoises joined #gluster
17:08 pk2 joined #gluster
17:08 ^rcaskey joined #gluster
17:10 mkzero joined #gluster
17:20 pk2 left #gluster
17:31 thogue joined #gluster
17:34 social joined #gluster
17:37 flrichar joined #gluster
17:42 [o__o] joined #gluster
17:42 plarsen joined #gluster
17:43 social_ joined #gluster
17:43 InnerFIRE joined #gluster
17:44 InnerFIRE hello, I just upgraded from 3.1 to 3.4 and now I can't get glusterfs to mount.
17:50 mwillbanks left #gluster
17:52 dbruhn InnerFIRE, did you upgrade both the client and server packages?
17:55 [o__o] joined #gluster
17:58 diegows joined #gluster
18:00 tryggvil joined #gluster
18:00 vpshastry joined #gluster
18:02 social_ joined #gluster
18:02 MacWinner joined #gluster
18:06 psyl0n joined #gluster
18:07 InnerFIRE yes I did.  I think I have it down to this: 0-rpc-service: Auth too weak
18:08 Mo___ joined #gluster
18:16 divbell joined #gluster
18:24 vpshastry left #gluster
18:24 rotbeard joined #gluster
18:39 Retenodus joined #gluster
18:40 Retenodus_ joined #gluster
18:44 divbell joined #gluster
19:00 psyl0n joined #gluster
19:05 zaitcev joined #gluster
19:05 masterzen joined #gluster
19:10 tryggvil joined #gluster
19:10 InnerFIRE This is really weird, every google search I do shows "Auth too weak" as a version mismatch but I just checked and everything is running the same version
19:11 dbruhn Have you tried rebooting everything?
19:21 InnerFIRE not yet
19:26 vpshastry joined #gluster
19:31 calum_ joined #gluster
19:31 InnerFIRE ahah
19:31 InnerFIRE dbruhn: I rebooted the affected server and it works now
19:32 InnerFIRE I'll reboot the other two once my boss calms down
19:32 InnerFIRE thanks
19:32 vpshastry left #gluster
19:42 dbruhn InnerFIRE, np, sorry it was a hassle to deal with
19:43 dbruhn Been there, nothing like having a production system decide it doesn't like production.
19:43 InnerFIRE no worries, the cluster did survive with all of our data intact when one of the drives died and somehow corrupted its twin
19:43 jag3773 joined #gluster
19:47 dbruhn You did an upgrade while fighting corruption?
19:51 InnerFIRE no
19:51 InnerFIRE the two drives were in the same node
19:52 InnerFIRE the other two systems were fine but I did an upgrade because the blasted thing wouldn't sync once I restored it
19:52 dbruhn ahh
19:55 thogue joined #gluster
19:56 JoeJulian InnerFIRE, dbruhn: That shouldn't happen in the future. There's been an ongoing argument about whether package upgrades /should/ automatically restart bricks (restarting glusterd doesn't restart the bricks). The conclusion of that argument is that it will automatically do that unless a configuration setting is set to prevent it.
19:57 JoeJulian What (probably) happened is that your brick was still running the old version because that process hadn't been restarted.
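
(A hedged sketch of how to confirm and clear that state on one node, assuming a brief interruption of that node's bricks is acceptable; a replicated volume keeps serving from the other node while they restart.)

    # the PIDs listed here belong to the long-running glusterfsd brick processes
    gluster volume status
    # compare the installed version with what those processes were started from
    glusterfs --version
    # stop the old brick processes on this node and restart glusterd,
    # which respawns the bricks from the upgraded binaries
    pkill glusterfsd
    service glusterd restart
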
20:01 dbruhn JoeJulian, good to know.
20:05 InnerFIRE well
20:06 InnerFIRE that's the odd thing, I had upgraded the crashed node and it wouldn't sync thanks to a version mismatch, so I did the others
20:06 InnerFIRE and everything was synching again but the first machine couldn't mount anything
20:08 _feller joined #gluster
20:18 thinmint joined #gluster
20:26 zerick joined #gluster
20:30 InnerFIRE scratch that
20:30 InnerFIRE they only said they were synching
20:32 tryggvil joined #gluster
20:37 r0b joined #gluster
20:37 purpleidea johnmark: at your request, it's now published: https://github.com/purpleidea/jmwbot sorry for the delay and enjoy!
20:37 glusterbot Title: purpleidea/jmwbot · GitHub (at github.com)
20:38 purpleidea JMWbot: @about
20:38 JMWbot purpleidea: The JMWbot was written by @purpleidea. https://ttboj.wordpress.com/
20:42 psyl0n joined #gluster
20:47 psyl0n joined #gluster
20:48 synaptic joined #gluster
21:08 nueces joined #gluster
21:14 johnmark purpleidea: woohoo!
21:29 wushudoin joined #gluster
21:45 purpleidea johnmark: ;)
21:46 qdk joined #gluster
21:53 sroy__ joined #gluster
21:55 glusterbot New news from newglusterbugs: [Bug 996047] volume-replace-brick issues <https://bugzilla.redhat.com/show_bug.cgi?id=996047>
22:16 RicardoSSP joined #gluster
22:16 RicardoSSP joined #gluster
22:17 daMaestro joined #gluster
22:18 bala joined #gluster
22:24 theron joined #gluster
22:24 sticky_afk joined #gluster
22:24 divbell joined #gluster
22:24 stickyboy joined #gluster
22:38 badone joined #gluster
22:41 johnmwilliams joined #gluster
22:53 badone joined #gluster
23:06 badone joined #gluster
23:29 diegows joined #gluster
23:31 thogue joined #gluster
23:39 mkzero joined #gluster
23:45 NeatBasis joined #gluster
23:47 zerick joined #gluster
23:54 MacWinner joined #gluster
