
IRC log for #gluster, 2014-01-23


All times shown according to UTC.

Time Nick Message
00:05 sprachgenerator joined #gluster
00:10 msciciel joined #gluster
00:12 sprachgenerator joined #gluster
00:30 TrDS left #gluster
00:38 DV__ joined #gluster
01:29 diegows joined #gluster
02:00 kevein joined #gluster
02:04 psyl0n joined #gluster
02:10 dbruhn joined #gluster
02:12 khushildep joined #gluster
02:17 johnbot1_ joined #gluster
02:30 hagarth joined #gluster
02:43 TheDingy joined #gluster
02:46 nocturn joined #gluster
02:51 kshlm joined #gluster
02:58 Reikoshea joined #gluster
03:08 _Bryan_ joined #gluster
03:24 bharata-rao joined #gluster
03:28 johnbot11 joined #gluster
03:33 shyam joined #gluster
03:45 itisravi joined #gluster
03:45 Reikoshea1 joined #gluster
03:49 RameshN joined #gluster
04:02 davinder joined #gluster
04:05 saurabh joined #gluster
04:11 kanagaraj joined #gluster
04:22 SpeeR joined #gluster
04:32 ppai joined #gluster
04:39 ndarshan joined #gluster
04:46 dusmant joined #gluster
04:47 eshy i'm am testing glusterfs 3.4.2 and creating a replicated volume across two machines on a local network, and i'm experiencing pausing when intentionally removing network connectivity to a node, while downloading a large file on the other node
04:48 eshy more specifically, i am wget'ing an ISO file, and it appears to pause when i remove a block. i'm sure this is at least somewhat expected, i am however curious as to what sort of delays i should be seeing
04:48 eshy and by removing a block, i mean a node/brick from the cluster ;-)
04:48 eshy i am new to this terminology.
04:48 samppah eshy: you are probably hitting network.ping-timeout which is 42 seconds by default
04:49 glusterbot New news from newglusterbugs: [Bug 969461] RFE: Quota fixes <https://bugzilla.redhat.com/show_bug.cgi?id=969461>
04:49 samppah it waits for 42 seconds before it considers node dead
04:49 eshy samppah: ah, that does make sense.
04:49 samppah and blocks io during that time
04:49 eshy thanks. i'll adjust that parameter across my nodes and test again :)
04:49 samppah great :)
04:53 eshy samppah: so this configuration directive is set on a per-volume basis, presumably using the command 'gluster volume set vol_name network.ping-timeout 42'. is there a keyword to use in place of 'set' to check what these options are set to?
04:54 samppah eshy: gluster volume set help if i remember correctly
04:54 samppah @undocumented
04:54 glusterbot samppah: I do not know about 'undocumented', but I do know about these similar topics: 'undocumented options'
04:54 samppah @undocumented options
04:54 glusterbot samppah: Undocumented options for 3.4: http://www.gluster.org/community/documentation/index.php/Documenting_the_undocumented
04:55 eshy that's done it
04:55 jporterfield joined #gluster
04:55 samppah also that page is worth looking at
04:57 eshy results: i've successfully changed the ping timeout, which has been reflected in my testing
04:58 eshy thanks for that, and thanks for the link too, will be bookmarking that one ;-)
05:00 samppah np :)
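[Note: a minimal sketch of the tuning discussed above, using the gluster CLI; the volume name "vol_name" and the 10-second value are placeholders, not recommendations.]
    # lower the timeout after which an unreachable brick stops blocking I/O (default: 42s)
    gluster volume set vol_name network.ping-timeout 10
    # list settable options with descriptions and defaults
    gluster volume set help
    # show options explicitly reconfigured on this volume
    gluster volume info vol_name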
05:01 AndroUser2 joined #gluster
05:04 kdhananjay joined #gluster
05:04 johnbot11 joined #gluster
05:18 prasanth joined #gluster
05:19 satheesh joined #gluster
05:23 nshaikh joined #gluster
05:24 jporterfield joined #gluster
05:25 lalatenduM joined #gluster
05:27 bala joined #gluster
05:30 mohankumar__ joined #gluster
05:30 psharma joined #gluster
05:31 shylesh joined #gluster
05:38 satheesh joined #gluster
05:43 johnbot11 joined #gluster
05:53 kshlm joined #gluster
05:59 spandit joined #gluster
05:59 vpshastry joined #gluster
06:01 raghu joined #gluster
06:11 hagarth joined #gluster
06:13 CheRi joined #gluster
06:14 vimal joined #gluster
06:14 Shri joined #gluster
06:26 satheesh joined #gluster
06:26 jporterfield joined #gluster
06:26 overclk joined #gluster
06:31 davinder joined #gluster
06:32 ababu joined #gluster
06:33 jporterfield joined #gluster
06:35 badone joined #gluster
06:38 bala1 joined #gluster
06:43 ngoswami joined #gluster
06:51 andy___ joined #gluster
06:51 fidevo joined #gluster
06:51 92AAAP31Y joined #gluster
06:52 ababu joined #gluster
06:53 kanagaraj joined #gluster
06:54 shyam joined #gluster
06:57 ndarshan joined #gluster
07:03 mohankumar joined #gluster
07:03 dusmant joined #gluster
07:05 ricky-ti1 joined #gluster
07:10 pk1 joined #gluster
07:13 rastar joined #gluster
07:14 RameshN joined #gluster
07:19 jtux joined #gluster
07:30 jtux joined #gluster
07:39 aravindavk joined #gluster
07:41 satheesh1 joined #gluster
07:44 ekuric joined #gluster
07:49 bala joined #gluster
07:50 kanagaraj joined #gluster
07:51 dusmant joined #gluster
07:51 ababu joined #gluster
07:51 ndarshan joined #gluster
08:08 Reikoshea joined #gluster
08:11 eseyman joined #gluster
08:12 s2r2_ joined #gluster
08:16 klaxa|work joined #gluster
08:18 meghanam joined #gluster
08:18 1JTABQ044 joined #gluster
08:24 wica joined #gluster
08:24 wica Hi, yes bot I know, but I have no question
08:27 samppah feel free to ask :)
08:27 wica lol
08:28 hagarth does glusterbot scare folks so much? :)
08:28 wica Yes,
08:28 wica ;p
08:28 hagarth lol
08:34 RameshN joined #gluster
08:43 andreask joined #gluster
08:44 keytab joined #gluster
08:46 sahina joined #gluster
08:49 wica btw, qemu with glusterfs support works also on gluster-3.2 bricks :)
08:51 franc joined #gluster
08:56 dusmant joined #gluster
08:57 samppah yeah, beware the dragons though :)
08:58 samppah afaik there might be some functions missing from 3.2
08:59 wica samppah: I just made an img on the wrong volume, that's why :)
09:01 mgebbe_ joined #gluster
09:08 Reikoshea1 joined #gluster
09:08 eseyman joined #gluster
09:09 blook joined #gluster
09:11 1JTABQ044 joined #gluster
09:11 ababu joined #gluster
09:11 shylesh joined #gluster
09:13 zapotah joined #gluster
09:13 zapotah joined #gluster
09:14 shylesh joined #gluster
09:14 ababu joined #gluster
09:14 meghanam_ joined #gluster
09:22 sinatributos joined #gluster
09:25 jporterfield joined #gluster
09:29 sahina joined #gluster
09:32 rastar joined #gluster
09:36 F^nor joined #gluster
09:36 dusmant joined #gluster
09:38 sinatributos Hi. Any literature about performance issues on gluster-rdma?
09:39 sinatributos I have a d-r gluster filesystem; if I mount it with rdma, perf is about half of what I get if I mount it via tcp.
09:41 purpleidea sinatributos: ben england gave one talk about performance, i can point you to it, but i don't know if it said anything specific about rdma in particular.
09:42 purpleidea my guess is that this isn't normal.
09:42 purpleidea what gluster v are you using?
09:43 shylesh joined #gluster
09:43 kanagaraj joined #gluster
09:44 sinatributos hi purpleidea
09:45 sinatributos I'm on 3.3.2
09:45 sinatributos if you can point me to this talk it will be fine, even if it does not talk about rdma
09:46 hagarth joined #gluster
09:50 satheesh joined #gluster
09:57 ells joined #gluster
10:00 sahina joined #gluster
10:03 rastar joined #gluster
10:07 ells joined #gluster
10:21 satheesh1 joined #gluster
10:25 ira joined #gluster
10:31 hagarth joined #gluster
10:34 badone joined #gluster
10:55 ascii0x3F joined #gluster
11:02 purpleidea sinatributos: i think it's http://rhsummit.files.wordpress.com/2012/03/england-rhs-performance.pdf
11:03 ppai joined #gluster
11:05 navid__ joined #gluster
11:19 andreask joined #gluster
11:19 sinatributos purpleidea: thanks
11:20 badone joined #gluster
11:26 purpleidea yw
11:35 pk1 joined #gluster
11:36 spandit joined #gluster
11:40 vpshastry joined #gluster
11:42 TvL2386 joined #gluster
11:44 shylesh joined #gluster
11:46 badone joined #gluster
11:51 ppai joined #gluster
11:52 kkeithley1 joined #gluster
12:08 itisravi joined #gluster
12:13 rastar joined #gluster
12:14 edward2 joined #gluster
12:16 CheRi joined #gluster
12:20 khushildep joined #gluster
12:20 ppai joined #gluster
12:21 hagarth joined #gluster
12:22 raghug joined #gluster
12:33 Staples84 joined #gluster
12:38 nsoffer joined #gluster
12:39 nsoffer Hi all - there is a problem with the epel-6Server - when I try to update from it, I get this error:
12:40 nsoffer http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-6Server/x86_64/repodata/50c54d92fedeba9dcb9c970f1c070e4c7a0d05e363a77ff013a7ea4ffb53e0ac-other.sqlite.bz2: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 404 Not Found"
12:40 kkeithley_ let me check
12:40 nsoffer Looks like repomd.xml is broken
12:41 nsoffer epel-6 is ok
12:41 kkeithley_ epel-6Server is just a symlink to epel-6, so....
12:41 kkeithley_ hmm
12:47 kkeithley_ well, repomd.xml has  <location href="repodata/9a9ca601791b9ae583b2262b712c6907dce3aabbd59db0a2e559cd65be7d0a0a-other.sqlite.bz2"/> (i.e. not what you're seeing), and that file is in the repodata.
12:49 nsoffer kkeithley, how can you explain that using the url ...epel-6... works, but ...epel-6Server... does not?
12:50 kkeithley_ something somewhere is caching something old? I'm on the web server looking at the files
12:50 nsoffer maybe I get an old repomd.xml?
12:51 kkeithley_ maybe. Try `yum clean all` first, then try again?
12:51 nsoffer trying
12:53 gmtech_ joined #gluster
12:53 gmtech_ HI all
12:53 gmtech_ Does anyone know if anybody offers enterprise support for GlusterFS ?
12:53 kkeithley_ Red Hat does
12:54 kkeithley_ Red Hat Storage (RHS)
12:54 gmtech_ Does that require you running red hat systems though? Am running Ubuntu
12:54 gmtech_ (thanks BTW)
12:55 khushildep_ joined #gluster
12:55 kkeithley_ The servers have to be RHS (which is RHEL). You can use Ubuntu clients, and last I knew you can use NFS and that's supported
12:58 diegows joined #gluster
13:00 vpshastry joined #gluster
13:01 gmtech_ Would red hat support the ubuntu clients as well? I guess I'd have to ask them lol
13:01 pk1 left #gluster
13:03 gmtech_ Ah no - they only support RedHat using the FUSE client
13:08 nsoffer kkeithley, after yum clean all, the epel-6Server does work - thanks!
13:08 kkeithley_ yw
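[Note: a short sketch of the cache-refresh fix applied above; the package glob on the last line is an assumption, adjust it to whatever you install from download.gluster.org.]
    # discard cached repodata so yum re-fetches the current repomd.xml
    yum clean all
    # then retry the update
    yum update 'glusterfs*'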
13:10 plarsen joined #gluster
13:10 polfilm joined #gluster
13:13 badone joined #gluster
13:19 ppai joined #gluster
13:19 CheRi joined #gluster
13:24 dusmant joined #gluster
13:30 abyss^ jclift: you pasted a link to the odp document "GlusterFS for SysAdmins", is there a video somewhere or only that document?:)
13:35 hagarth abyss^: http://www.youtube.com/watch?v=HkBndZOcEA0
13:35 glusterbot Title: Demystifying Gluster - GlusterFS For SysAdmins - YouTube (at www.youtube.com)
13:37 jclift abyss^ hagarth: Cool.  Dustin Black's one is probably better. :)
13:38 jclift I don't think mine was recorded. :/
13:38 hagarth jclift: your session with the CentOS folks should be available
13:39 jclift hagarth: Yeah, it is but it's miserable.  Laptop ran out of power 1/2 way through (grr Fedora!) so I was basically screwed from there. :(
13:39 hagarth jclift: ah ok.
13:40 sroy joined #gluster
13:42 klaxa|work left #gluster
13:42 rwheeler joined #gluster
13:43 abyss^ thx guys
13:49 dblack joined #gluster
13:51 bennyturns joined #gluster
13:51 s2r2_ joined #gluster
13:54 vpshastry left #gluster
14:05 jskinner_ joined #gluster
14:07 blook joined #gluster
14:17 B21956 joined #gluster
14:27 dbruhn joined #gluster
14:31 flrichar joined #gluster
14:40 japuzzo joined #gluster
14:42 s2r2_ joined #gluster
14:44 theron joined #gluster
14:47 theron joined #gluster
14:48 s2r2_ joined #gluster
14:57 nshaikh joined #gluster
15:06 blook joined #gluster
15:09 bugs_ joined #gluster
15:10 kmai007 joined #gluster
15:21 getup- joined #gluster
15:22 jobewan joined #gluster
15:31 jporterfield joined #gluster
15:32 lpabon joined #gluster
15:39 nsoffer left #gluster
15:48 abhi_ joined #gluster
15:52 hagarth @fileabug
15:52 glusterbot hagarth: Please file a bug at http://goo.gl/UUuCq
15:52 hagarth glusterbot: ty
16:03 abhi_ Hello All, I am trying to add a brick after a failed replace brick operation but gluster says "volume add-brick: failed: Replace brick is in progress on volume storage. Please retry after replace-brick operation is committed or aborted"
16:04 abhi_ gluster volume status storage says "Replace brick    965756d3-953b-4eb8-bf51-da8a8740d7a8      completed" in the end under tasks
16:05 abhi_ how do I proceed further
16:05 JoeJulian gluster volume $vol replace-brick $source_brick $destination_brick commit
16:05 JoeJulian er...
16:05 JoeJulian gluster volume replace-brick $vol $source_brick $destination_brick commit
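[Note: the same command with placeholder values spelled out; the volume name "storage" matches the conversation, but the brick endpoints below are hypothetical.]
    # check the state of the pending operation first
    gluster volume replace-brick storage server1:/bricks/b1 server2:/bricks/b1 status
    # then commit (or abort) it
    gluster volume replace-brick storage server1:/bricks/b1 server2:/bricks/b1 commit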
16:05 benjamin__ joined #gluster
16:06 abhi_ so you want me to redo the replace brick operation?
16:06 JoeJulian no, commit it.
16:06 abhi_ ohh, I had actually aborted it at one point
16:06 JoeJulian ah
16:06 abhi_ I know I screwed it
16:07 JoeJulian but it says completed... hrm..
16:07 abhi_ and these are ec2 instances, so the peer isn't there
16:07 JoeJulian So which one /is/ there?
16:07 abhi_ I created two new instances so I can just add bricks
16:07 abhi_ the faulty source is still there
16:08 JoeJulian and if you try to abort again?
16:09 abhi_ JoeJulian: I am a little lost there. I have created lots of instances in between. I do not have the host of the <new> peer with me. Is there any other way around
16:10 davinder JoeJulian
16:10 abhi_ JoeJulian http://pastebin.com/EjZMSycX
16:10 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
16:11 davinder JoeJulian: I want to create 10 servers on KVM with GFS file system on local disk not on SAN .... Will it provide me redundancy
16:11 davinder ?
16:11 JoeJulian Well that doesn't tell us much, does it. :/
16:11 s2r2_ joined #gluster
16:12 JoeJulian davinder: I don't know much about GFS. I last tried it about 5 years ago.
16:12 davinder sorry GlusterFS
16:13 abhi_ JoeJulian: I know. How do I get out of this mess. Where can I see the list of tasks that gluster thinks is pending
16:15 davinder JoeJulian: Sorry glusterFS
16:18 davinder Will glusterfs provide the redundancy over KVM ... with physical disks, not SAN?
16:26 rotbeard joined #gluster
16:36 wushudoin joined #gluster
16:37 zerick joined #gluster
16:48 gork4life I'm wondering if gluster can configure vipa
16:49 samppah davinder: you can use local disks (directories) to create gluster and use distributed-replicated setup for redundancy
16:49 JoeJulian ... sorry all, had a work call come in and will be on it for a while...
16:50 samppah gork4life: what do you mean with vipa?
16:50 gork4life samppah: virtual ip addresses
16:50 samppah gork4life: ahh, you need another software for that
16:51 polfilm joined #gluster
16:51 gork4life samppah: ok thanks
16:51 davinder samppah: Thanks .... if any server goes down then the VM will move to another system? how will I know on which system it is running
16:52 abhi_ JoeJulian: Where can I see the list of tasks that gluster thinks are pending
16:53 samppah davinder: you need some software to manage/monitor those vm's.. rhev/ovirt is using KVM and has support for glusterfs
16:54 davinder Thanks samppah:
16:54 samppah davinder: i need to go away for a moment, i'll be back later if you need more information :)
16:55 davinder okay
16:57 johnbot11 joined #gluster
16:59 vpshastry joined #gluster
17:07 abhi_ I am getting this when I heal the volume "Commit failed on 10.71.4.54. Please check the log file for more details."
17:07 abhi_ which  log file should I check
17:10 psyl0n joined #gluster
17:11 vpshastry left #gluster
17:12 blook joined #gluster
17:26 gork4life abhi: I believe it's /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
17:30 Mo__ joined #gluster
17:35 andreask joined #gluster
17:36 ells joined #gluster
17:37 thefiguras joined #gluster
17:40 kaptk2 joined #gluster
17:41 japuzzo_ joined #gluster
17:41 Peanut I'm just reading up on the mailinglist - it seems that the Ubuntu live-migration bug is also affecting someone else :(
17:42 JoeJulian I saw that one.... on a /31 network. That's just asking for breakage. One address is the network address, the other's the broadcast address.
17:42 wushudoin left #gluster
17:43 Peanut JoeJulian: no, this is a well supported setup, from RFC3021. And it worked perfectly for 3.4.0 for months
17:43 Peanut And that was my mail - I meant that there is someone else who has this problem, too.
17:43 JoeJulian Ah
17:43 JoeJulian Couldn't match the name with the nick... :)
17:46 Peanut The clusters ONLY talk to each other over the interconnect, as far as I know - they have an entry in /etc/hosts for each other so they always end up with the addresses on the 10G.
17:47 ells joined #gluster
17:48 abhi_ how do I recover from a botched replace brick operation
17:49 abhi_ my new server is no longer accessible and refuses to perform any task unless the replace operation completes
17:51 diegows joined #gluster
17:51 abhi_ I found out the task details from gluster volume status all --xml but when I do a status on the replace-brick, it complains of a missing brick
17:52 dbruhn joined #gluster
17:53 ells left #gluster
17:53 ells joined #gluster
17:53 ells left #gluster
17:54 ells joined #gluster
17:58 JoeJulian abhi_: You're not the same guy I was talking to about this yesterday, are you?
17:58 abhi_ yes
17:58 JoeJulian Oh...
17:58 abhi_ I mean an hour ago
17:59 abhi_ not yesterday
17:59 JoeJulian You're not also abyss^ then.
17:59 abhi_ no
17:59 JoeJulian Same exact problem.
17:59 abhi_ first time here
17:59 abhi_ glad to hear that
17:59 abhi_ at least I am not alone
18:00 abhi_ I did a sync btw and the server bricks went offline
18:00 vpshastry joined #gluster
18:00 abhi_ not sure if that would help
18:01 JoeJulian If I can solve this networking issue I'm in the middle of, I'll try to duplicate the problem.
18:01 abhi_ JoeJulian: http://paste.ubuntu.com/6804153/
18:01 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
18:02 JoeJulian Could I see /var/lib/glusterd/vols/storage/info too?
18:02 s2r2_ joined #gluster
18:03 abhi_ hold on
18:04 abhi_ JoeJulian: http://paste.ubuntu.com/6804166/
18:04 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
18:04 abhi_ JoeJulian: 241 is the buggy server I want to replace. But I ran the replace brick on 240 instead.
18:05 abhi_ JoeJulian I have added two new servers to replace with
18:05 rotbeard joined #gluster
18:05 JoeJulian Ok, let's see if it's any different on 240 then.
18:05 abhi_ JoeJulian: 3.0/16 and 4.0/16 are in two different datacenters
18:06 abhi_ JoeJulian: this is from 240
18:06 abhi_ do you want from 241
18:06 JoeJulian yes please
18:07 abhi_ JoeJulian: http://paste.ubuntu.com/6804181/
18:07 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
18:08 abhi_ they look the same
18:09 JoeJulian wait... 241 isn't in there
18:09 JoeJulian on the status page I mean
18:10 abhi_ JoeJulian: http://paste.ubuntu.com/6804185/ this is the xml output of the task
18:10 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
18:11 abhi_ JoeJulian: I am not sure . brick 0 and 2 are from 241
18:11 JoeJulian Can you have any down time?
18:12 abhi_ well this is not production per se but others are working on it. It's right in the middle of it
18:12 abhi_ but please tell me wha to do
18:13 abhi_ I am just worried if I stop and start the volume, it will never start again
18:13 abhi_ JoeJulian: by status you mean the xml dump? yes, I mistakenly replaced 240 instead of 241 and then tried taking 241 down.
18:14 JoeJulian I'm testing a couple theories... just a sec...
18:18 JoeJulian Oh, that's weird, "All replace-brick commands except commit force are deprecated. Do you want to continue?" in 3.4.2. I thought that was only for replicated volumes.
18:20 abhi_ yes I get that too
18:20 JoeJulian /var/lib/glusterd/vols/$vol/rbstate has that replace-brick task.
18:20 abhi_ I am running 342
18:20 abhi_ one sec
18:21 abhi_ yes it does
18:21 abhi_ rb_status=1
18:21 abhi_ rb_src=10.71.3.240:/storage/xvdb1/storage
18:21 abhi_ rb_dst=10.71.3.176:/storage/xvdb/storage
18:21 abhi_ rb_port=0
18:21 abhi_ replace-brick-id=965756d3-953b-4eb8-bf51-da8a8740d7a8
18:21 abhi_ I am running a distributed replicated volume
18:22 abhi_ I tried doing a gluster volume heal storage full; it said "Launching Heal operation on volume storage has been unsuccessful"
18:25 JoeJulian then the brick vol file also has changes wrt replace-brick, namely the replace-brick and pump sections.
18:26 khushildep joined #gluster
18:26 JoeJulian With glusterd stopped (bricks can remain up), if the rb_state file was replaced with only the one line, "rb_status=0" and the vol file was edited to remove those sections and point the subvolume for the ${vol}-index section to ${vol}-io-threads (ensuring those changes are copied to all servers), I think that should remove the replace-brick state and get that one brick back operating again.
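[Note: a sketch of the manual recovery described above, assuming a volume named "storage" and the Ubuntu service name used later in this log; editing files under /var/lib/glusterd is unsupported, so treat this as at-your-own-risk and keep backups. The volfile edit itself is manual and only summarised in the comments.]
    # on each server: stop the management daemon (brick processes may stay up)
    service glusterfs-server stop
    # reset the recorded replace-brick state to "none in progress"
    echo 'rb_status=0' > /var/lib/glusterd/vols/storage/rbstate
    # then hand-edit the brick volfiles, e.g.
    #   /var/lib/glusterd/vols/storage/storage.<server>.<brick-path>.vol
    # removing the replace-brick and pump sections and pointing the
    # storage-index subvolume back at storage-io-threads; copy the edited
    # files to every server, then restart:
    service glusterfs-server start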
18:27 abhi_ I am in the brick now
18:27 JoeJulian It worked in my little test scenario.
18:27 abhi_ I stopped the service sudo service glusterfs-server stop
18:28 abhi_ so should I edit the restate file to remove all but the first line to rb_status =0?
18:29 abhi_ *rbstate
18:29 JoeJulian correct (no space)
18:30 abhi_ JoeJulian: did that. where is this vol file ?
18:32 JoeJulian $vol.$server.$brick-path( / are replaced with - ).vol
18:35 Reikoshea joined #gluster
18:35 zaitcev joined #gluster
18:39 abhi_ JoeJulian: checked the file . they all have the storage-index with storage-io-threads
18:40 abhi_ but now I am unable to start the service
18:40 JoeJulian unable to start glusterd?
18:40 JoeJulian /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
18:40 abhi_ JoeJulian: http://paste.ubuntu.com/6804328/
18:40 abhi_ yes
18:40 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
18:41 JoeJulian bah...
18:41 JoeJulian glusterd --debug
18:41 JoeJulian paste that
18:43 abhi_ JoeJulian: http://paste.ubuntu.com/6804337/
18:43 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
18:43 tqrst joined #gluster
18:43 tqrst joined #gluster
18:43 tqrst joined #gluster
18:45 rotbeard joined #gluster
18:46 tqrst everything was going so well
18:46 tqrst anyone else seeing segfaults in anything related to glustershd in 3.4.2?
18:46 JoeJulian On .241 in /var/lib/glusterd/glusterd.info there's a UUID.
18:46 JoeJulian tqrst: not that I've heard about.
18:47 JoeJulian avati usually likes to see a backtrace from the core dump when you file a bug report on that.
18:47 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
18:47 abhi_ JoeJulian: UID=b2d8982c-e812-4e06-b3c7-0c364aa59aef
18:48 tqrst yeah, still sorting through the dozens of core dump files to see if there's a pattern
18:48 JoeJulian abhi_: create a file on the rest of the servers: /var/lib/glusterd/peers/b2d8982c-e812-4e06-b3c7-0c364aa59aef
18:49 tqrst it's all in vfprintf as far as I can see
18:50 JoeJulian Its contents should be http://paste.ubuntu.com/6804368/
18:50 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
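[Note: the paste above is not reproduced here; purely as an illustration, a glusterd peer definition under /var/lib/glusterd/peers/<uuid> normally looks like the sketch below. The state value and address shown are assumptions, copy the exact contents from a healthy peer file on another server rather than typing them by hand.]
    uuid=b2d8982c-e812-4e06-b3c7-0c364aa59aef
    state=3
    hostname1=10.71.3.241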
18:52 tqrst size = vsnprintf (NULL, 0, format, arg); at mem-pool.c:193
18:52 tqrst interesting
18:54 glusterbot New news from newglusterbugs: [Bug 1054199] gfid-acces not functional during creates under sub-directories. <https://bugzilla.redhat.com/show_bug.cgi?id=1054199>
18:54 swaT30 @JoeJulian just getting around to looking at performing that rebalance on my replicated cluster. is that something that can be performed while nodes are all up and in production safely?
18:55 JoeJulian Absolutely! (std disclaimer applies)
18:56 abhi_ JoeJulian: brilliant, it started!!
18:56 abhi_ JoeJulian: what's next
18:57 JoeJulian check peer status on that one and one other.
18:57 abhi_ I mean I finally wanted to get rid of that node
18:57 abhi_ they are all up
18:57 JoeJulian whew
18:57 abhi_ :)
18:57 abhi_ I will try the add brick now
18:57 abhi_ JoeJulian: volume add-brick: failed:
18:59 JoeJulian since this is a replicated volume, the replace-brick command can be just "replace-brick $vol $source $dest commit force" followed by a "heal $vol full". Replication will manage populating the new brick.
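[Note: the same advice written out as full commands; the brick endpoints are placeholders taken loosely from earlier in the conversation, not verified values.]
    VOL=storage
    SRC=10.71.3.241:/storage/xvdb1/storage    # brick being replaced (hypothetical)
    DST=10.71.3.176:/storage/xvdb/storage     # new brick (hypothetical)
    gluster volume replace-brick $VOL $SRC $DST commit force
    # let replication repopulate the new brick
    gluster volume heal $VOL full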
19:00 abhi_ JoeJulian: why is this "Brick 10.71.3.240:/storage/xvdb1/storage    N/A    Y    4930"
19:00 abhi_ the port is N/A
19:00 swaT30 @JoeJulian ok thanks! just to cover all details, we're running KVM with qcow images off this Gluster volume
19:00 abhi_ shouldn't that be 49153 or something
19:01 JoeJulian abhi_: yes...
19:01 JoeJulian Looks like you can kill that brick safely (you have another replica) so identify which glusterfsd process that is, kill it, then gluster volume start $vol force
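[Note: a sketch of the brick restart suggested above; identify the glusterfsd serving the stuck brick (e.g. via its command-line arguments or the PID column of 'gluster volume status') before killing anything, since the grep and placeholder PID here are illustrative only.]
    # list brick processes and pick the one serving the stuck brick
    ps aux | grep '[g]lusterfsd'
    kill <pid-of-that-brick>
    # respawn any brick processes that are not running, without touching the rest
    gluster volume start storage force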
19:01 JoeJulian brb...
19:05 [o__o] left #gluster
19:06 B21956 joined #gluster
19:07 [o__o] joined #gluster
19:10 [o__o] left #gluster
19:12 [o__o] joined #gluster
19:24 glusterbot New news from newglusterbugs: [Bug 1049981] 3.5.0 Tracker <https://bugzilla.redhat.com/show_bug.cgi?id=1049981>
19:36 [o__o] left #gluster
19:39 [o__o] joined #gluster
19:53 rprice joined #gluster
19:53 ells joined #gluster
19:54 kouprianov joined #gluster
20:02 TrDS joined #gluster
20:05 johnmark and we're live!  http://www.gluster.org/2014/01/gluster-hangout-james-shubin/
20:05 Jayunit100 joined #gluster
20:06 Jayunit100 ha he almost said "puppet monster" !
20:08 rwheeler joined #gluster
20:08 Jayunit100 yay vagrant !
20:11 kmai007 joined #gluster
20:11 Jayunit100 https://forge.gluster.org/vagrant/fedora19-gluster/trees/master/gluster-hbase-example <-- for those interested in gluster/vagrant + bigdata …. hbase on gluster in a vagrant image.  johnmark
20:11 glusterbot Title: Tree for fedora19-gluster in Vagrant - Gluster Community Forge (at forge.gluster.org)
20:21 chirino joined #gluster
20:21 s2r2__ joined #gluster
20:21 johnmark Jayunit100: if you have questions, ask in #gluster-meeting
20:22 davinder joined #gluster
20:26 khushildep joined #gluster
20:28 [o__o] left #gluster
20:30 [o__o] joined #gluster
20:32 diegows joined #gluster
20:38 LoudNoises joined #gluster
20:45 purpleidea puppet monster? hehe
20:45 purpleidea Jayunit100: no! those vagrant links aren't what i was referring to. this is: https://github.com/purpleidea/puppet-gluster/blob/master/vagrant/gluster/Vagrantfile
20:45 glusterbot Title: puppet-gluster/vagrant/gluster/Vagrantfile at master · purpleidea/puppet-gluster · GitHub (at github.com)
20:45 purpleidea @vagrant
20:45 glusterbot purpleidea: (#1) Part 1 @ https://ttboj.wordpress.com/2013/12/09/vagrant-on-fedora-with-libvirt/, or (#2) Part 2 @ https://ttboj.wordpress.com/2013/12/21/vagrant-vsftp-and-other-tricks/, or (#3) Part 3 @ https://ttboj.wordpress.com/2014/01/02/vagrant-clustered-ssh-and-screen/, or (#4) Part 4 @ https://ttboj.wordpress.com/2014/01/08/automatically-deploying-glusterfs-with-puppet-gluster-vagrant/,
20:45 glusterbot purpleidea: or (#5) https://ttboj.wordpress.com/2014/01/16/testing-glusterfs-during-glusterfest/
20:49 bennyturns joined #gluster
20:50 jag3773 joined #gluster
20:53 badone joined #gluster
21:16 davinder joined #gluster
21:18 KyleG joined #gluster
21:29 RedShift joined #gluster
21:37 khushildep joined #gluster
21:38 badone joined #gluster
21:40 thefiguras any recommendation on how many bricks can run per GB of RAM?
21:42 thefiguras it does not look like glusterfsd needs much RAM.
21:43 purpleidea thefiguras: i think it depends on a lot of other factors too.. size of bricks, etc...
21:43 purpleidea number of clients, load, etc...
21:43 purpleidea thefiguras: at the end of the day, you should probably test realistic loads of your application to know for sure.
21:45 thefiguras right. so there is no rule of thumb. makes sense
21:47 khushildep_ joined #gluster
21:50 madphoenix joined #gluster
21:51 madphoenix what is the current recommended practice for handling heterogeneous brick sizes?  last I read up on it, people hesitantly recommended setting min-free-disk, is that still the case?
21:52 semiosis the recommended practice is to have bricks all the same size :/
21:55 madphoenix i was afraid of that
21:56 madphoenix how strongly discouraged is min-free-disk?  are there other ways to address the issue?
21:59 semiosis you really dont want your bricks to fill up
21:59 semiosis just stay ahead of that and you'll be fine
22:00 semiosis remember, all gluster can do is avoid placing a *new* file on a brick that's nearly full
22:00 semiosis if an existing file continues to grow it can fill a brick and gluster can't do anything about htat
22:01 semiosis imo the only way to solve that is with good disk usage monitoring & a plan to expand bricks using LVM or similar
22:01 semiosis or adding bricks & rebalancing, but thats even more difficult
22:01 semiosis s/difficult/time consuming/
22:01 glusterbot What semiosis meant to say was: or adding bricks & rebalancing, but thats even more time consuming
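[Note: for reference, the two approaches discussed here as CLI commands; the volume name, threshold, and brick endpoints are hypothetical. As semiosis notes above, cluster.min-free-disk only steers where *new* files land.]
    # stop placing new files on bricks with less than 10% free space
    gluster volume set myvol cluster.min-free-disk 10%
    # or grow capacity: add bricks, then spread existing data onto them
    gluster volume add-brick myvol server5:/bricks/b1 server6:/bricks/b1
    gluster volume rebalance myvol start
    gluster volume rebalance myvol status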
22:01 madphoenix adding bricks and rebalancing is what i'm looking at.  my bricks are filling, but with new files, not existing ones
22:03 thefiguras I am using method to expand bricks using LVM, so far without any issues. Agree with semiosis
22:03 madphoenix can you expand on what you mean by that?
22:03 madphoenix expanding bricks using lvm, that is
22:03 madphoenix are you adding a JBOD chassis to your existing servers or something?
22:06 thefiguras I have not used JBOD so can't say. I have a 20TB LVM VG and keep extending LVs as needed.
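[Note: a minimal sketch of the grow-the-brick-in-place approach described above, assuming the brick LV is formatted with XFS; the volume group, LV, and mount paths are made up.]
    # extend the logical volume backing the brick
    lvextend -L +1T /dev/vg_bricks/brick1
    # grow the filesystem online (XFS: run against the mount point)
    xfs_growfs /bricks/brick1
    # ext4 equivalent: resize2fs /dev/vg_bricks/brick1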
22:07 madphoenix so you didn't just provision the whole 20TB as a single brick?
22:08 thefiguras no, I am still in the development stage for gluster. Nonetheless, I won't need more than a few TB anyway, so LVM and my scenario should be good for me.
22:09 madphoenix ah, ok completely different situation.  i have a volume comprised of five servers with one 20TB brick per server, and they're nearly full.  i want to expand it, but it's much more cost effective these days to use 12x4TB disks instead of 12x2TB (which the existing servers are)
22:09 madphoenix i suppose since i'm not using replication, i could just create two 20TB bricks on the new servers
22:09 madphoenix and start a rebalance
22:10 madphoenix but still ... i'm curious why there aren't other methods for using different size bricks, seems like it is quite easy to get into an intractable situation if you were using replication, since you wouldn't want multiple bricks on the same server in that situation for availability reasons
22:10 madphoenix unless your replication level was higher than the number of bricks attached to any one server
22:14 thefiguras adding another server and rebalancing seems to be the logical approach if possible. I am testing only with replication so I can afford to take down a node and move or rebuild. However, doing it with as many TBs as you have, it will be a network-expensive operation.
22:19 madphoenix we have 2x10Gbe between all of our bricks so hopefully it won't be too bad
22:19 madphoenix i guess i'm just a little surprised that there isn't a supported volume expansion path other than adding more of the same size brick
22:20 thefiguras your setup is beyond my imagination :)
22:20 madphoenix it's good to be on a university campus ;
22:20 thefiguras haha, i am in private sector - of course
22:46 badone_ joined #gluster
22:52 anande joined #gluster
22:56 gdubreui joined #gluster
22:58 gdubreui joined #gluster
23:06 purpleidea semiosis: which option was it that you wanted patched?
23:38 klaas joined #gluster
23:45 dneary joined #gluster
23:52 MacWinner joined #gluster
23:56 jansel joined #gluster
23:59 ells joined #gluster
