
IRC log for #gluster, 2014-06-13


All times shown according to UTC.

Time Nick Message
00:07 bene2 joined #gluster
00:26 sputnik13 joined #gluster
00:37 sputnik13 joined #gluster
00:38 haomaiwa_ joined #gluster
00:46 sputnik13 joined #gluster
00:47 marmalodak joined #gluster
00:54 chirino_m joined #gluster
00:56 chirino joined #gluster
00:57 chirino_m joined #gluster
01:05 hchiramm__ joined #gluster
01:18 chirino joined #gluster
01:23 jmarley joined #gluster
01:23 jmarley joined #gluster
01:24 gildub joined #gluster
01:26 andreask joined #gluster
01:27 chirino joined #gluster
01:37 Ark joined #gluster
01:41 prasanthp joined #gluster
01:42 bala joined #gluster
01:45 haomaiw__ joined #gluster
02:01 Thilam|work joined #gluster
02:03 sh_t joined #gluster
02:07 glusterbot New news from newglusterbugs: [Bug 1108958] run-tests.sh should warn on missing yajl <https://bugzilla.redhat.com/show_bug.cgi?id=1108958>
02:17 harish_ joined #gluster
02:19 davinder10 joined #gluster
02:38 jag3773 joined #gluster
02:54 rjoseph joined #gluster
02:57 dusmant joined #gluster
02:58 gildub joined #gluster
03:38 shubhendu joined #gluster
03:41 gmcwhistler joined #gluster
03:47 kanagaraj joined #gluster
03:50 ndarshan joined #gluster
03:53 kshlm joined #gluster
03:56 itisravi joined #gluster
04:11 rejy joined #gluster
04:12 kumar joined #gluster
04:22 bharata-rao joined #gluster
04:30 haomaiwang joined #gluster
04:33 haomai___ joined #gluster
04:41 psharma joined #gluster
04:51 nishanth joined #gluster
04:54 nbalachandran joined #gluster
04:55 saurabh joined #gluster
05:07 kdhananjay joined #gluster
05:08 ppai joined #gluster
05:09 Ark joined #gluster
05:10 shubhendu joined #gluster
05:13 dusmant joined #gluster
05:20 aravindavk joined #gluster
05:21 dusmant joined #gluster
05:24 kanagaraj joined #gluster
05:26 ramteid joined #gluster
05:27 kanagaraj joined #gluster
05:29 deepakcs joined #gluster
05:42 meghanam joined #gluster
05:43 bala joined #gluster
05:44 RameshN joined #gluster
05:46 rastar joined #gluster
05:52 dusmant joined #gluster
05:53 hagarth joined #gluster
05:53 aravindavk joined #gluster
05:56 rjoseph joined #gluster
06:00 glusterbot New news from resolvedglusterbugs: [Bug 958691] nfs-root-squash: rename creates a file on a file residing inside a sticky bit set directory <https://bugzilla.redhat.com/show_bug.cgi?id=958691>
06:17 karnan joined #gluster
06:17 rossi_ joined #gluster
06:19 shubhendu joined #gluster
06:27 kanagaraj_ joined #gluster
06:34 deepakcs joined #gluster
06:38 kanagaraj joined #gluster
06:40 aravindavk joined #gluster
06:42 hagarth joined #gluster
06:51 dusmant joined #gluster
06:54 ctria joined #gluster
06:55 rjoseph joined #gluster
07:01 raghu joined #gluster
07:01 vimal joined #gluster
07:01 ndarshan joined #gluster
07:02 eseyman joined #gluster
07:03 ekuric joined #gluster
07:13 andreask joined #gluster
07:16 jcsp joined #gluster
07:18 ProT-0-TypE joined #gluster
07:22 deepakcs joined #gluster
07:22 ktosiek joined #gluster
07:23 haomaiwa_ joined #gluster
07:23 asku joined #gluster
07:26 RameshN_ joined #gluster
07:26 mbukatov joined #gluster
07:30 glusterbot New news from resolvedglusterbugs: [Bug 889157] gf_log() not flushing logs properly. <https://bugzilla.redhat.com/show_bug.cgi?id=889157>
07:31 fsimonce joined #gluster
07:47 asku joined #gluster
07:54 Andreas-IPO joined #gluster
07:59 calum_ joined #gluster
08:07 swebb joined #gluster
08:26 ghenry joined #gluster
08:28 Pupeno_ I'm using glusterfs 3.5 on Ubuntu 12.04. I remember mounting on boot working on my experiments, but now, on a staging server, it's failing. I'm getting the error "The disk drive for /.... is not ready yet or not present."
08:28 Pupeno_ And it got stuck.
08:31 overclk Chewi: good to hear that. but I would still like to avoid manual intervention.
08:34 haomaiwang joined #gluster
08:37 haomai___ joined #gluster
08:38 monotek joined #gluster
08:41 liquidat joined #gluster
08:42 haomaiwang joined #gluster
08:44 spandit joined #gluster
08:47 liquidat joined #gluster
08:50 lalatenduM joined #gluster
08:51 capri is it possible to create a gluster brick on top of an lvm volume and resize the volume when needed with lvresize and resize2fs in future?
08:55 jcsp joined #gluster
08:58 Pupeno_ Any ideas why I can't mount at boot time? http://serverfault.com/questions/604860/glusterfs-is-failing-to-mount-on-boot
08:58 glusterbot Title: ubuntu - GlusterFS is failing to mount on boot - Server Fault (at serverfault.com)
08:59 Alex "[2014-06-13 08:52:28.148183] E [socket.c:2161:socket_connect_finish] 0-glusterfs: connection to 176.58.113.205:24007 failed (Connection refused)"
09:00 Alex suggests that the peer it's trying to connect to isn't up by the time it tries to mount?
09:00 Pupeno_ Alex: the client and the server are the same machine.
09:01 Alex I assume then that it is trying to mount it before the brick processes have started?
09:02 Pupeno_ Alex: that is a possibility, but I didn't do anything to cause that. Wouldn't that mean that glusterfs 3.5 packages for Ubuntu 12.04 are just broken then?
09:05 Alex I am afraid I have reached the limit of my usefulness, Pupeno_. I'm just guessing based on the error message, sorry :)
09:05 Pupeno_ Ok, thanks Alex :)
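A workaround often suggested for this kind of race on Ubuntu 12.04 (mountall running before glusterd has started serving the volume) is to mark the gluster mount as a network filesystem and tell mountall not to block the boot on it. A minimal fstab sketch, assuming a volume named myvolume served from the same host and a placeholder mount point:

    # /etc/fstab -- _netdev delays the mount until networking is up,
    # nobootwait stops Ubuntu's mountall from halting the boot if the mount still fails
    localhost:/myvolume  /mnt/myvolume  glusterfs  defaults,_netdev,nobootwait  0  0

If the mount still races glusterd, a fallback is to set noauto here and mount the volume from a late boot script instead.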
09:06 dusmant joined #gluster
09:07 Norky four gluster servers are no longer serving NFS (gluster native still works). rpcinfo -p shows no NFS service registered - I think this is because rpcbind (portmapper) was restarted
09:07 Pupeno_ I remember this working when I tested it with local virtual machines. So annoying.
09:07 Norky how do I get glusterd to 're-register' NFS services with the portmapper?
09:08 shubhendu joined #gluster
09:08 calum_ joined #gluster
09:12 Norky http://gluster.org/community/documentation/index.php/Gluster_3.1:_NFS_Frequently_Asked_Questions suggests I have to restart gluster, but that's for 3.1 - is this still the case?
09:13 glusterbot Title: Gluster 3.1: NFS Frequently Asked Questions - GlusterDocumentation (at gluster.org)
09:14 capri is it possible to create a gluster brick on top of an lvm volume and resize the volume when needed with lvresize and resize2fs in future?
09:14 Norky yes
09:15 Norky as an aside, fsadm combines the functions of lvextend and resize2fs (or grow_xfs in the case of XFS)
09:16 kanagaraj joined #gluster
09:18 capri Norky, thanks
09:18 andreask joined #gluster
09:18 Norky on Red Hat Storage, you're encouraged to use LVM for bricks
09:19 Norky LVM with XFS, that is
09:19 capri Norky, that's the way i wanna do it :)
09:20 capri Norky, but i wasn't really sure if resizing is possible and i didn't want to use my whole storage for the gluster brick
09:20 dusmant joined #gluster
09:20 capri Norky, but now i know its really cool :-)
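For reference, growing a brick that lives on LVM is the usual two-step grow of the logical volume and then the filesystem on it; the volume group, logical volume and mount point names below are placeholders:

    # extend the LV backing the brick by 100G, then grow the filesystem into the new space
    lvextend -L +100G /dev/vg_bricks/brick1
    xfs_growfs /bricks/brick1                 # XFS grows via the mount point
    # resize2fs /dev/vg_bricks/brick1         # ext4 equivalent
    # fsadm resize /dev/vg_bricks/brick1      # or let fsadm pick the right tool

Gluster itself does not need to be told anything; the brick simply reports the larger filesystem.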
09:28 Slashman joined #gluster
09:32 pureflex_ joined #gluster
09:37 Norky to answer my own question: the least disruptive way to get it working again appears to be simply stopping and starting any volume (ideally an 'unimportant' test volume), which causes gluster's NFSd to (re-)register with portmap
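Spelled out, that bounce looks something like the following; testvol is a placeholder for whichever volume you can briefly take offline:

    gluster volume stop testvol
    gluster volume start testvol
    rpcinfo -p                  # gluster's NFS/mountd registrations should be back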
09:46 Pupeno_ How do you remove a setting from a volume?
09:48 aravindavk joined #gluster
09:52 rjoseph joined #gluster
09:54 haomaiwang joined #gluster
09:59 kanagaraj_ joined #gluster
10:05 haomaiw__ joined #gluster
10:06 Norky gluster volume reset
10:06 Norky gluster volume reset <optionname> for a specific option
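For completeness, the reset command also takes the volume name; myvolume and the option shown are just examples:

    gluster volume reset myvolume nfs.disable   # put one option back to its default
    gluster volume reset myvolume               # clear all reconfigured options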
10:09 kanagaraj joined #gluster
10:11 AaronGr joined #gluster
10:15 rjoseph joined #gluster
10:16 harish_ joined #gluster
10:19 aravindavk joined #gluster
10:21 DV joined #gluster
10:21 rastar joined #gluster
10:21 haomaiwa_ joined #gluster
10:23 haomaiwang joined #gluster
10:24 haomai___ joined #gluster
10:26 haomaiwang joined #gluster
10:31 kanagaraj joined #gluster
10:32 haomaiw__ joined #gluster
10:37 bene2 joined #gluster
10:38 hchiramm__ joined #gluster
10:40 haomaiwa_ joined #gluster
10:41 haomaiwang joined #gluster
10:44 DV joined #gluster
10:46 haomaiw__ joined #gluster
10:47 haomaiwa_ joined #gluster
10:52 Pupeno_ I set up two volumes in exactly the same way, and the second one is failing. Checking the logs I found this:  [2014-06-13 10:52:35.423443] E [rpc-clnt.c:369:saved_frames_unwind] (-->/usr/lib/x86_64-linux-gnu/libgfrpc.so.0(rpc_clnt_notify+0x160) [0x7f666494cc80] (-->/usr/lib/x86_64-linux-gnu/libgfrpc.so.0(rpc_clnt_connection_cleanup+0xc3) [0x7f666494b163] (-->/usr/lib/x86_64-linux-gnu/libgfrpc.so.0(saved_frames_destroy+0xe) [0x7f
10:52 Pupeno_ 666494b07e]))) 0-private_uploads-client-0: forced unwinding frame type(GF-DUMP) op(DUMP(1)) called at 2014-06-13 10:52:35.422950 (xid=0x3)
10:52 Pupeno_ Any ideas what's wrong there?
10:59 shubhendu_ joined #gluster
11:01 kanagaraj_ joined #gluster
11:07 edward1 joined #gluster
11:08 julim joined #gluster
11:31 kanagaraj joined #gluster
11:34 kanagaraj joined #gluster
11:41 kkeithley joined #gluster
11:44 gildub joined #gluster
11:45 lalatenduM kkeithley, welcome back, how was the flight?
11:45 kkeithley uneventful, the best kind. ;-)
11:46 lalatenduM :)
11:56 lalatenduM kkeithley_, patch http://review.gluster.org/7693 related cppcheck errors got merged in master
11:56 glusterbot Title: Gerrit Code Review (at review.gluster.org)
11:58 lalatenduM kkeithley_, I have assigned https://bugzilla.redhat.com/show_bug.cgi?id=1091677 to you now, as you need to send patch to fix issues which you already sent to 3.4 and 3.5
11:58 glusterbot Bug 1091677: high, high, 3.6.0, kkeithle, ASSIGNED , Issues reported by Cppcheck static analysis tool
12:01 jag3773 joined #gluster
12:01 kkeithley_ yes, I saw that
12:04 kkeithley_ I'm going to clone 1091677 for the remaining work and let 1091677 get closed out by your patch
12:07 lalatenduM kkeithley_,  ok
12:08 lalatenduM kkeithley_, actually it should be in modified state, we can move it to on_qa once build is available
12:09 glusterbot New news from newglusterbugs: [Bug 1109175] [SNAPSHOT] : Snapshot list should display origin volume (RFE) <https://bugzilla.redhat.com/show_bug.cgi?id=1109175> || [Bug 1109180] Issues reported by Cppcheck static analysis tool <https://bugzilla.redhat.com/show_bug.cgi?id=1109180>
12:16 B21956 joined #gluster
12:19 kanagaraj_ joined #gluster
12:23 Alpinist joined #gluster
12:27 Alpinist joined #gluster
12:30 haomaiwa_ joined #gluster
12:31 glusterbot New news from resolvedglusterbugs: [Bug 1095324] Release 3.4.4 tracker <https://bugzilla.redhat.com/show_bug.cgi?id=1095324>
12:34 wgao joined #gluster
12:35 haomai___ joined #gluster
12:39 qdk_ joined #gluster
12:39 glusterbot New news from newglusterbugs: [Bug 1101111] [RFE] Add regression tests for the component geo-replication <https://bugzilla.redhat.com/show_bug.cgi?id=1101111>
12:40 jmarley joined #gluster
12:40 jmarley joined #gluster
12:42 bala joined #gluster
12:47 P0w3r3d joined #gluster
12:49 andreask joined #gluster
12:51 kanagaraj joined #gluster
12:52 kanagaraj_ joined #gluster
12:52 bennyturns joined #gluster
12:53 kanagaraj joined #gluster
12:55 sjm joined #gluster
12:59 Ark joined #gluster
13:01 edwardm61 joined #gluster
13:08 jag3773 joined #gluster
13:10 Alpinist joined #gluster
13:12 kanagaraj_ joined #gluster
13:12 bennyturns joined #gluster
13:14 bennyturns joined #gluster
13:15 Alpinist_ joined #gluster
13:17 zyxe joined #gluster
13:18 Alpinist joined #gluster
13:22 bala joined #gluster
13:28 japuzzo joined #gluster
13:28 coredump joined #gluster
13:31 gmcwhistler joined #gluster
13:34 brad_mssw joined #gluster
13:35 shyam joined #gluster
13:37 bala joined #gluster
13:42 daMaestro joined #gluster
13:51 diegows joined #gluster
14:07 jbautista|brb joined #gluster
14:09 lpabon joined #gluster
14:11 LebedevRI joined #gluster
14:26 tdasilva joined #gluster
14:30 uebera|| joined #gluster
14:30 uebera|| joined #gluster
14:39 diegows joined #gluster
14:41 Slashman hello, I have a file that suddenly disappeared from a glusterfs volume, suddenly I can't see it, when I try to recreate it, I have the following error "touch: cannot touch `settings.php': No such file or directory"
14:42 Slashman I can create and read all other file with no issue, any idea of what I must do to either see the original file again or to be able to replace it?
14:43 jobewan joined #gluster
14:49 plarsen joined #gluster
14:56 andreask joined #gluster
15:05 mortuar joined #gluster
15:18 JustinClift lpabon: ping
15:18 glusterbot JustinClift: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
15:22 jag3773 joined #gluster
15:44 Mo_ joined #gluster
15:48 shyam joined #gluster
15:58 sjoeboo joined #gluster
15:59 hagarth joined #gluster
16:04 Alpinist joined #gluster
16:07 zerick joined #gluster
16:29 nbalachandran joined #gluster
16:34 jbd1 joined #gluster
16:45 MacWinner joined #gluster
16:47 ndk joined #gluster
16:48 MacWinner can I test the latest 3.5.1beta2 by installing via yum?  I have a 4-node-replica2 cluster that I could do some tests on
16:48 MacWinner with 4x1TB bricks
16:48 MacWinner i just don't want to install via src.. and I've never installed experimental builds via yum. but happy to do so with a little guidance
16:49 ctria joined #gluster
16:55 ramteid MacWinner: http://download.gluster.org/pub/gluster/glusterfs/ -- no 3.5.1beta2 there, don't know if there are any unofficial builds
16:55 glusterbot Title: Index of /pub/gluster/glusterfs (at download.gluster.org)
16:56 ramteid hmm, rawhide has 3.5.1beta1
16:56 MacWinner i'll keep an eye on it, thanks
16:56 ramteid yw
16:57 MacWinner ramteid, have you tried it out?
16:57 ramteid https://download.gluster.org/pub/gluster/glusterfs/qa-releases/3.5.1beta2/CentOS/epel-6.5/x86_64/
16:57 glusterbot Title: Index of /pub/gluster/glusterfs/qa-releases/3.5.1beta2/CentOS/epel-6.5/x86_64 (at download.gluster.org)
16:57 ramteid MacWinner: "oh no!" running 3.4.3 currently.
16:57 ramteid MacWinner: I'm a bit shy on trying QA builds :)
16:58 MacWinner i should be more shy..  I'm at 3.5.0 right now..
16:58 ramteid MacWinner: yes, you should :)
16:59 MacWinner in production..  should have stayed at 3.4.x.. i'll be more cautious.. i have no idea if the issue I faced recently was due to 3.5.0, but would like to get the latest patches soon for sanity purposes
16:59 ramteid MacWinner: I see, yes, update makes sense then. I assume you checked bugzilla for your issue?
17:00 MacWinner if JoeJulian is running at 3.4.x, I'm comfortable being there too :)
17:00 MacWinner ramteid, yeah, I checked.. didn't see anything, so filed a ticket
17:00 ramteid MacWinner: I see.
17:00 JoeJulian :)
17:12 ndarshan joined #gluster
17:13 Matthaeus joined #gluster
17:14 bennyturns joined #gluster
17:19 sputnik13 joined #gluster
17:27 cvdyoung left #gluster
17:35 cfeller joined #gluster
17:40 diegows joined #gluster
17:42 kmai007 joined #gluster
17:54 Matthaeus1 joined #gluster
17:57 MacWinner ramteid, found some here: https://download.gluster.org/pub/gluster/glusterfs/qa-releases/3.5.1beta2/CentOS/epel-6.5/x86_64/
17:58 glusterbot Title: Index of /pub/gluster/glusterfs/qa-releases/3.5.1beta2/CentOS/epel-6.5/x86_64 (at download.gluster.org)
17:59 _dist joined #gluster
18:02 mortuar joined #gluster
18:05 mortuar_ joined #gluster
18:06 ramteid MacWinner: yes, that's the link I posted above :)
18:06 MacWinner oh, woops.. I didn't see it before cause didn't look under qa-releases..
18:06 ramteid MacWinner: no problem :)
18:07 MacWinner is it possible to configure the gluster yum repo to get the qa-build?
18:08 lalatenduM joined #gluster
18:08 nneul joined #gluster
18:08 rotbeard joined #gluster
18:09 nneul got a quick question on mounting with a volume file instead of directly to a server with fuse client - direct to each brick is working fine, but when I use a volume file - it looks like it can't figure out what remote port to use. gfs ver 3.5.0
18:10 nneul https://gist.github.com/nneul/3f9b4cb70f3059353223
18:10 glusterbot Title: gist:3f9b4cb70f3059353223 (at gist.github.com)
18:11 nneul any suggestions?  https://gist.github.com/nneul/03905fa1685880eacde2  is the volume file
18:11 glusterbot Title: gist:03905fa1685880eacde2 (at gist.github.com)
18:15 ramteid MacWinner: I think so, take the usual glusterfs.repo file and change URLs inside
18:16 ramteid MacWinner:did that the last time for 3.4.x to 3.4.3
18:16 ramteid MacWinner: https://download.gluster.org/pub/gluster/glusterfs/3.5/3.5.0/RHEL/glusterfs-epel.repo
18:16 MacWinner thanks.. will try it out.. looks like geo-replication has been massively updated in 3.5 too.. could be very useful for syncing my 2 data-centers
18:17 ramteid MacWinner: just modify baseurl properly, that should work
18:17 ramteid MacWinner: if you have currently a glusterfs-epel.repo in use u may need to clear cache and do makecache ... hold on a sec
18:17 MacWinner those are the URLS in my current repo..
18:18 ramteid MacWinner:  yum makecache
18:18 MacWinner ramteid, my current glusterfs.repo has this:  baseurl=http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-$releasever/$basearch/
18:18 ramteid MacWinner: (it seems I didn't do a clear according to history)
18:18 ramteid MacWinner: yes, that's the "default" latest url
18:18 MacWinner when I downloaded the repo you linked to, it seems to have the same URLs inside of it
18:18 ramteid MacWinner: yes, you need to modify it, said so above :)
18:19 ThatGraemeGuy joined #gluster
18:19 MacWinner oh, sorry.. didn't get much sleep and don't have my glasses on.. i'm a bit off today
18:19 ramteid MacWinner: don't worry :) but thx for clarifying :)
18:20 ramteid MacWinner: replacing LATEST with qa-releases/3.5.1beta2 should do
18:20 ramteid MacWinner: I will cross fingers...
18:21 MacWinner thanks, I think i'm going to try it out tomorrow.. will let you know how it goes!
18:21 MacWinner i've taken notes on your steps
18:21 ramteid MacWinner: perfect, thx
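Put together, the repo change discussed above would be roughly the following sketch, pointing baseurl at the QA directory linked earlier in the channel; the release/arch path may need adjusting for your system, and gpgcheck is disabled here only because QA packages may not be signed:

    # /etc/yum.repos.d/glusterfs-qa.repo
    [glusterfs-qa]
    name=GlusterFS 3.5.1beta2 QA build
    baseurl=https://download.gluster.org/pub/gluster/glusterfs/qa-releases/3.5.1beta2/CentOS/epel-6.5/x86_64/
    enabled=1
    gpgcheck=0

    yum clean metadata && yum makecache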
18:23 nneul anyone have any idea why that mount doesn't work?
18:23 zaitcev joined #gluster
18:27 Slashman joined #gluster
18:29 nneul ah, found it... the volume references in the file have to specify the full path on the target brick, not just the volume name.....
18:31 _dist nneul: why do you prefer to mount it that way?
18:31 nneul my understanding from several articles/tutorials is that specifying multiple sources that way would be more tolerant of failure of a given brick during an attempted mount
18:32 JoeJulian nope
18:32 nneul whereas if using a single brick in the mount would hang if that individual brick were down
18:32 JoeJulian Actually, it also will prevent the ability to make managed changes to the volume options.
18:32 semiosis ,,(mount server)
18:32 glusterbot The server specified is only used to retrieve the client volume definition. Once connected, the client connects to all the servers in the volume. See also @rrdns
18:32 nneul scenario then: srv1 is down, srv2, and srv3 are up
18:33 JoeJulian See above
18:33 nneul is there a mount line that will actually work? if rrdns, does it retry until it hits one that actually responds?
18:33 JoeJulian Always has.
18:34 JoeJulian But also, it only matters at mount time.
18:34 nneul does it do them in parallel, or does it have to wait until full tcp timeout?
18:34 JoeJulian Good question. Pretty sure it's SYN timeout.
18:34 nneul yeah, I know that part... concerned about the behavior during rolling maintenance where the clients may be getting booted at same time as one brick was down
18:35 JoeJulian But clients should not be getting booted due to maintenance.
18:35 nneul maintenance of the clients/OS, not of gluster
18:35 JoeJulian As long as you have a replicated volume.
18:35 JoeJulian ok
18:35 JoeJulian Actually, no.
18:35 nneul using this as a backend for an HA cluster.
18:36 nneul or rather, starting to... haven't actually deployed it yet, using a single non-redundant nfs server right now
18:36 JoeJulian Well, SYN timeout if it fails the ARP check, if the port is closed though it'll be instant.
18:37 nneul yeah, good point. in that case, probably a lot simpler to just use the single mount without any config file.
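For the record, the fuse mount can also be pointed at a fallback volfile server without a hand-written volume file; the exact option name varies between releases (see man mount.glusterfs), and srv1, srv2 and myvolume here are placeholders:

    # /etc/fstab
    srv1:/myvolume  /mnt/myvolume  glusterfs  defaults,_netdev,backupvolfile-server=srv2  0  0

srv2 is only consulted if srv1 cannot hand out the volume definition at mount time; once mounted, the client talks to every brick directly.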
18:45 ProT-0-TypE joined #gluster
18:46 ProT-0-TypE joined #gluster
18:54 _dist nneul: HA cluster for files or VMs? I only ask because if it's VMs on qemu-kvm, that product has a special connection for that purpose called libgfapi, which in my opinion is better than the fuse mount
18:57 JoeJulian +1
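A sketch of what that looks like with qemu built against libgfapi; the host, volume and image names are made up, and the qemu binary needs gluster support compiled in:

    # open the disk image straight off the volume via gluster://, no fuse mount involved
    qemu-system-x86_64 -enable-kvm -m 2048 \
        -drive file=gluster://srv1/myvolume/vm1.qcow2,if=virtio,format=qcow2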
18:58 lpabon joined #gluster
19:01 tdasilva joined #gluster
19:01 zyxe1 joined #gluster
19:10 sjm joined #gluster
19:19 shyam joined #gluster
19:25 ry joined #gluster
19:25 Matthaeus joined #gluster
19:27 diegows joined #gluster
19:30 tdasilva joined #gluster
19:31 sputnik13 joined #gluster
19:36 sputnik13 joined #gluster
19:42 MacWinner joined #gluster
19:43 sputnik13 joined #gluster
19:55 sputnik13 joined #gluster
20:00 jbautista|brb joined #gluster
20:01 ccha3 joined #gluster
20:03 d3vz3r0 joined #gluster
20:03 andreask joined #gluster
20:03 johnmark joined #gluster
20:06 T0aD joined #gluster
20:08 ollybee joined #gluster
20:15 sputnik13 joined #gluster
20:24 sputnik13 joined #gluster
20:33 jbrooks left #gluster
20:43 DV joined #gluster
20:45 sputnik13 joined #gluster
20:53 jbrooks joined #gluster
21:02 Matthaeus joined #gluster
21:06 tdasilva left #gluster
21:24 bennyturns joined #gluster
21:26 Ark joined #gluster
21:41 jbrooks joined #gluster
21:45 JoeJulian Yay. Was able to crash a client in staging. Now to install the patch and try a few more times.
21:45 jbrooks joined #gluster
21:51 MacWinner joined #gluster
21:56 semiosis JoeJulian++
21:57 semiosis @karma JoeJulian
21:57 glusterbot semiosis: JoeJulian has neutral karma.
21:57 semiosis wtf have i been incrementing then!?!?
21:58 JoeJulian @ semiosis++
21:58 glusterbot JoeJulian: semiosis's karma is now 1
21:58 JoeJulian You have to get his attention
22:03 verdurin joined #gluster
22:08 nullck joined #gluster
22:24 eightyeight joined #gluster
22:36 semiosis weak
22:36 semiosis no one is going to know to do that.  usually karma is given with just username++
22:48 partner :)
22:53 Slashman joined #gluster
22:58 partner oh boy, next week upgrade, i wonder how that goes, its always perfect in testing
22:59 JoeJulian semiosis: maybe, but our karma system is incognito.
22:59 JoeJulian ... it's a karma chameleon...
22:59 semiosis ha
23:02 partner i managed to remove the replication from one of our systems but that didn't go exactly as in the movies, was left with plenty of duplicate files
23:13 wgao joined #gluster
23:16 andreask joined #gluster
23:19 juhaj_ Can I add bricks to an existing gluster,  to the same filesystem not the whole namespace?
23:20 partner you add bricks to the volume
23:22 partner you can have many volumes of course, maybe rephrase the question?
23:27 juhaj_ bricks to the volume was the question
23:28 juhaj_ Do you know if lustre can do that?
23:28 partner yeah, sure you can add bricks to the volume
23:28 juhaj_ Or its equivalent, of course
23:28 juhaj_ Yes, I should have known that, in fact, as I've done that ;)
23:30 partner :)
23:31 partner i'm not familiar with lustre so cannot comment on how it works
23:36 partner it's quiet in here and past 2AM so i'll head to bed ->
23:36 dtrainor joined #gluster
23:37 dtrainor Hi.  Replaced a physical drive today and based on my notes I should have been able to 'gluster volume replace-brick slow_gv00 localhost:/gluster/bricks/slow/gv00/brick00 localhost:/gluster/bricks/slow/gv00/brick00 commit force', but that doesn't seem to work.  It says the brick is not available.
23:39 dtrainor I believe it was FooBar saying I can replace brick on itself
23:39 dtrainor (sorry to call you out :)  )
23:41 glusterbot New news from newglusterbugs: [Bug 1105439] [USS]: RPC to retrieve snapshot list from glusterd and enable snapd (snapview-server) to refresh snap list dynamically <https://bugzilla.redhat.com/show_bug.cgi?id=1105439>
23:42 Matthaeus joined #gluster
23:44 dtrainor I see a lot of info on restoring from a different peer but I only have the one peer
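One commonly suggested way around this (replace-brick refusing to reuse the exact same brick path) is to give the new brick a fresh directory on the rebuilt drive; the sketch below reuses dtrainor's paths with a made-up new directory name and assumes a surviving replica exists to heal from:

    mkdir -p /gluster/bricks/slow/gv00/brick00_new
    gluster volume replace-brick slow_gv00 \
        localhost:/gluster/bricks/slow/gv00/brick00 \
        localhost:/gluster/bricks/slow/gv00/brick00_new commit force
    gluster volume heal slow_gv00 full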
23:52 Ark joined #gluster
