
IRC log for #gluster, 2015-03-03


All times shown in UTC.

Time Nick Message
00:02 gildub joined #gluster
00:14 elico joined #gluster
00:15 Pupeno joined #gluster
00:18 vipulnayyar joined #gluster
00:23 RicardoSSP joined #gluster
00:23 RicardoSSP joined #gluster
00:25 glusterbot News from newglusterbugs: [Bug 1163543] Fix regression test spurious failures <https://bugzilla.redhat.com/show_bug.cgi?id=1163543>
00:30 _nixpanic joined #gluster
00:30 _nixpanic joined #gluster
01:23 bala joined #gluster
01:42 side_control joined #gluster
02:04 Pupeno joined #gluster
02:08 harish joined #gluster
02:08 hagarth joined #gluster
02:22 nangthang joined #gluster
02:25 glusterbot News from newglusterbugs: [Bug 1127457] Setting security.* xattrs fails <https://bugzilla.redhat.com/show_bug.cgi?id=1127457>
02:31 plarsen joined #gluster
02:33 bala joined #gluster
02:34 ghenry joined #gluster
02:35 sprachgenerator joined #gluster
02:36 sprachgenerator left #gluster
03:03 gildub joined #gluster
03:23 kumar joined #gluster
03:25 glusterbot News from newglusterbugs: [Bug 1192971] Disperse volume: 1x(4+2) config doesn't sustain 2 brick failures <https://bugzilla.redhat.com/show_bug.cgi?id=1192971>
03:33 hagarth joined #gluster
03:33 bharata-rao joined #gluster
03:38 theron joined #gluster
03:41 topshare joined #gluster
03:46 dgandhi joined #gluster
03:53 Pupeno joined #gluster
03:56 hagarth joined #gluster
04:01 nbalacha joined #gluster
04:16 Pupeno joined #gluster
04:24 sage joined #gluster
04:25 shubhendu joined #gluster
04:27 RameshN joined #gluster
04:33 hagarth joined #gluster
04:37 anoopcs joined #gluster
04:41 atinmu joined #gluster
04:41 jiffin joined #gluster
04:46 nilu_1984 joined #gluster
04:55 prasanth_ joined #gluster
05:02 meghanam joined #gluster
05:02 kanagaraj joined #gluster
05:02 soumya_ joined #gluster
05:04 spandit joined #gluster
05:09 ppai joined #gluster
05:09 ndarshan joined #gluster
05:13 RameshN joined #gluster
05:15 Apeksha joined #gluster
05:24 wkf joined #gluster
05:34 lalatenduM joined #gluster
05:36 vimal joined #gluster
05:42 anil joined #gluster
05:45 overclk joined #gluster
05:48 ramteid joined #gluster
05:52 dusmant joined #gluster
05:52 atalur joined #gluster
05:55 soumya_ joined #gluster
05:57 gem joined #gluster
05:57 schandra joined #gluster
05:58 deepakcs joined #gluster
06:03 topshare joined #gluster
06:06 topshare joined #gluster
06:06 kshlm joined #gluster
06:12 hagarth joined #gluster
06:13 karnan joined #gluster
06:15 rafi joined #gluster
06:21 overclk joined #gluster
06:24 raghu joined #gluster
06:27 Bhaskarakiran joined #gluster
06:33 ppai joined #gluster
06:42 Pupeno joined #gluster
06:42 bala joined #gluster
06:44 topshare joined #gluster
06:48 gildub joined #gluster
06:52 dusmant joined #gluster
06:56 glusterbot News from newglusterbugs: [Bug 1192378] Disperse volume: client crashed while running renames with epoll enabled <https://bugzilla.redhat.com/show_bug.cgi?id=1192378>
06:58 topshare joined #gluster
07:16 xaeth_afk joined #gluster
07:16 nangthang joined #gluster
07:17 jtux joined #gluster
07:21 rjoseph joined #gluster
07:24 dusmant joined #gluster
07:30 Pupeno joined #gluster
07:30 badone_ joined #gluster
07:31 jtux joined #gluster
07:38 ackjewt joined #gluster
07:48 ppai joined #gluster
07:59 mbukatov joined #gluster
08:00 SOLDIERz joined #gluster
08:06 nshaikh joined #gluster
08:11 aravindavk joined #gluster
08:23 deniszh joined #gluster
08:26 topshare_ joined #gluster
08:29 SOLDIERz joined #gluster
08:32 kodokuu joined #gluster
08:32 kodokuu Hi, is Gluster in beta mode on RHEL 7?
08:34 ndevos kodokuu: no, it should work, you probably should look into the CentOS-7 packages - http://wiki.centos.org/SpecialInterestGroup/Storage
08:34 ndevos kodokuu: if there is an issue with them, file a bug and we'll look into it
08:34 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
08:35 kodokuu ndevos http://fpaste.org/192267/
08:36 kodokuu I got the packages from http://download.gluster.org/pub/gluster/glusterfs/LATEST/RHEL/epel-7Server/x86_64/
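
[Aside: a minimal, illustrative sketch of installing from that repo on a fresh RHEL/CentOS 7 box; the repo id, name and gpgcheck=0 are assumptions for brevity, only the baseurl comes from the URL above.]

# define a yum repo pointing at the upstream LATEST build
cat > /etc/yum.repos.d/glusterfs.repo <<'EOF'
[glusterfs-upstream]
name=GlusterFS upstream LATEST
baseurl=http://download.gluster.org/pub/gluster/glusterfs/LATEST/RHEL/epel-7Server/x86_64/
enabled=1
# gpgcheck disabled here only to keep the example short
gpgcheck=0
EOF
yum install -y glusterfs-server
systemctl enable glusterd
systemctl start glusterd
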
08:37 ndevos those packages are fine too, they are basically the same
08:37 ndevos whats the problem that you are facing?
08:37 kodokuu ok so I have this issue on fresh install rhel 7.1
08:37 kodokuu gluster doesn't start
08:37 ndevos hmm, well, the log shows it did?
08:38 ndevos oh, maybe you have SElinux in enforcing mode? there could be issues with that :-/
08:38 kodokuu disabled
08:39 kodoku joined #gluster
08:39 kodoku sorry bug proxy
08:39 kodoku /var/lib/glusterd/glusterd.info missing on my node, i don't know why
08:40 kodoku This is fresh install
08:44 kodoku never seen this issue?
08:45 kodoku ndevos maybe a command to generate this file?
08:46 anil joined #gluster
08:47 ndevos kodoku: it should get generated automatically...
08:48 kodoku arf
08:48 kodoku i tried removing gluster, rebooting, rm'ing all files and repos, then a fresh install, and still no file
08:55 shubhendu joined #gluster
09:01 kodoku ndevos I tried on another node, same error!! And another user opened the same issue ==> https://github.com/paulczar/docker-glusterfs/issues/1
09:02 lalatenduM joined #gluster
09:03 kodoku Can I try older version ?
09:03 liquidat joined #gluster
09:03 andreask joined #gluster
09:07 fsimonce joined #gluster
09:07 sputnik13 joined #gluster
09:12 kodoku ndevos I found: https://bugzilla.redhat.com/show_bug.cgi?id=1192452
09:12 glusterbot Bug 1192452: medium, unspecified, ---, rhs-bugs, NEW , After fresh install of gluster rpm's the log messages shows error for glusterd.info file as no such file or directory
09:12 kodoku so gluster is in beta
09:12 kodoku not for production (100 bugs open in NEW state for 3 months)
09:14 demmer joined #gluster
09:14 demmer rille
09:14 pebrille sorry
09:14 Norky joined #gluster
09:16 kovshenin joined #gluster
09:18 kodoku Before 3.6, what is the best version ?
09:20 pebrille I'm running a replicated gluster setup with 2 bricks provided by two servers. The corresponding gluster volume is the basis of an ftp cluster sitting on top.
09:21 pebrille when I copy some files onto the volume via ftp and run "gluster volume myvolume info heal-failed", I get some entries shown
09:21 T0aD joined #gluster
09:22 pebrille what might be the cause and do I need to worry?
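
[Aside: for reference, a sketch of the heal-status commands involved, assuming the volume really is named "myvolume"; note that the command pasted above seems to be missing the "heal" keyword.]

gluster volume heal myvolume info              # entries still pending self-heal
gluster volume heal myvolume info heal-failed  # entries the self-heal daemon could not heal
gluster volume heal myvolume info split-brain  # split-brain entries, which need manual intervention
gluster volume heal myvolume full              # trigger a full self-heal if entries keep accumulating
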
09:27 glusterbot News from resolvedglusterbugs: [Bug 765478] mismatch in calculation for quota <https://bugzilla.redhat.com/show_bug.cgi?id=765478>
09:27 glusterbot News from resolvedglusterbugs: [Bug 821650] Quota says "Disk quota exceeded" even before actually quota exceeds <https://bugzilla.redhat.com/show_bug.cgi?id=821650>
09:29 R0ok_ joined #gluster
09:46 anil joined #gluster
09:49 DV joined #gluster
10:03 ppai joined #gluster
10:09 Apeksha joined #gluster
10:09 sprachgenerator joined #gluster
10:14 [Enrico] joined #gluster
10:22 deniszh1 joined #gluster
10:28 ira joined #gluster
10:30 kodoku left #gluster
10:31 deniszh joined #gluster
10:32 atinmu joined #gluster
10:34 sprachgenerator joined #gluster
10:44 deniszh joined #gluster
10:46 sprachgenerator joined #gluster
10:46 o5k ndevos, can you point me to some good tutorials to get more in depth with glusterfs?
10:46 o5k i'm familiar with the basics
10:46 o5k i have just discovered the gluster CLI while browsing the Red Hat Storage site
10:46 o5k i want to understand and play with the configuration files, etc...
10:46 o5k i'm a little confused about how to provide a simple way to access the data in the storage volume via the web, while ensuring load balancing so that any node can fail...
10:51 Apeksha joined #gluster
10:57 sprachgenerator joined #gluster
11:03 firemanxbr joined #gluster
11:08 o5k JoeJulian,
11:10 ndevos o5k: maybe some of the presentations or videos on http://people.redhat.com/dblack/ help?
11:10 ndevos o5k: I also have some presentations on http://people.redhat.com/ndevos/talks/ - but I do not think they contain references to config files
11:11 ndevos o5k: also, http://www.gluster.org/presos.php has a list of presentations
11:14 firemanxbr hi guys, the glusterfs logs on my servers are very large
11:14 firemanxbr example:
11:15 firemanxbr du -sh /var/log/glusterfs
11:15 firemanxbr 14G     cli.log
11:15 ndevos firemanxbr: there should be a /etc/logrotate.d/ file for the gluster logs?
11:15 firemanxbr any idea how to solve this?
11:16 firemanxbr ndevos, hummm I'm checking....
11:16 ndevos firemanxbr: there have been some changes to the file recently, maybe you need an updated version...
11:17 ndevos firemanxbr: https://github.com/gluster/glusterfs/blob/master/extras/glusterfs-logrotate is the current version
11:18 firemanxbr ndevos, http://ur1.ca/ju97f
11:18 ndevos hmm, "weekly" could be more often, I guess, and "rotate 52" seems rather large?
11:19 firemanxbr ndevos, I believe good :D
11:19 ndevos firemanxbr: yeah, looks older and cli.log is not listed there :)
11:19 firemanxbr ndevos, my version is:
11:19 firemanxbr glusterfs-3.6.2-1.el7.x86_64
11:20 firemanxbr ndevos, exactly :D
11:20 o5k ndevos, thanks, i have already checked some of them, i'll take a look at the others
11:21 Monster joined #gluster
11:21 o5k ndevos, is it recommended to use Red Hat Storage, or can i use any os (currently using centos7)?
11:22 ndevos firemanxbr: ah, right, the RPMs carried their own logrotate config, that should have been dropped in newer packages <- lalatenduM can maybe confirm that
11:22 firemanxbr ndevos, humm
11:23 firemanxbr ndevos, we have new versions for centos 7 ?
11:24 firemanxbr ndevos, I'm running 'yum upgrade -y' nothing new
11:24 ndevos o5k: Red Hat Storage is the product that is commercially supported by Red Hat, it contains RHEL-6 and Gluster
11:24 lalatenduM firemanxbr, ndevos yeah as of now it is 52 weeks I think
11:24 ndevos firemanxbr: where do you pull the packages from?
11:24 shubhendu joined #gluster
11:24 deepakcs o5k, check this: http://www.gluster.org/documentation/architecture/internals/Dougw:A_Newbie%27s_Guide_to_Gluster_Internals/
11:25 ndevos firemanxbr: ah, no, 3.6.3 is not released yet
11:25 firemanxbr ndevos, ok no problem
11:26 ndevos lalatenduM: will the logrotate changes be included in 3.6.3 too? dropping the ones from dist-git?
11:26 firemanxbr ndevos, can I delete cli.log? What would that do?
11:26 lalatenduM ndevos, as of now we are using the log rotate thing from source code, not from dist git
11:27 firemanxbr lalatenduM, this is very important, on my servers the disk is full:
11:27 firemanxbr /dev/mapper/centos-root                20G   20G   20K 100% /
11:27 firemanxbr cli.log alone is 14 GB
11:27 lalatenduM firemanxbr, I understand
11:28 lalatenduM firemanxbr, I think you should change it to 4 weeks or so in your setup,
11:28 lalatenduM Also I think 52 is very high too
11:28 ndevos firemanxbr: I suggest you take the logrotate config from https://raw.githubusercontent.com/gluster/glusterfs/release-3.6/extras/glusterfs-logrotate and run logrotate with that :)
11:28 firemanxbr can I clean cli.log, for example: # systemctl stop glusterd ; cp /dev/null > /var/log/glusterfs/cli.log ?
11:29 ndevos you do not need to stop glusterd for that
11:29 ndevos and, I would just execute: > /var/log/glusterfs/cli.log
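
[Aside: a minimal sketch of the two steps being described, truncating the runaway log and adding a tighter rotation policy; the drop-in file name and directives are assumptions, the canonical config is the release-3.6 file linked above.]

# truncate the oversized log in place; glusterd does not need to be stopped
> /var/log/glusterfs/cli.log

# simple rotation policy for the CLI log
cat > /etc/logrotate.d/glusterfs-cli <<'EOF'
/var/log/glusterfs/cli.log {
    weekly
    rotate 12
    compress
    missingok
    notifempty
}
EOF

# force one rotation to confirm the config parses
logrotate -f /etc/logrotate.d/glusterfs-cli
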
11:29 firemanxbr lalatenduM, thnkz good idea
11:31 firemanxbr ndevos, humm great
11:31 firemanxbr ndevos, running....
11:33 firemanxbr that's all okay, thnkz ndevos and lalatenduM :)
11:34 lalatenduM ndevos++ :)
11:34 glusterbot lalatenduM: ndevos's karma is now 9
11:34 firemanxbr ndevos++
11:34 glusterbot firemanxbr: ndevos's karma is now 10
11:34 firemanxbr lalatenduM++
11:34 glusterbot firemanxbr: lalatenduM's karma is now 6
11:34 lalatenduM firemanxbr, welcome :)
11:37 ndevos lalatenduM, firemanxbr: and send a patch to update the 52 weeks?
11:38 firemanxbr ndevos, I wish :D how do I create this patch? can you help me?
11:38 lalatenduM ndevos, firemanxbr do you think 12 weeks is a good number?
11:39 firemanxbr lalatenduM, I agree
11:39 ndevos firemanxbr: sure, do you have an account on review.gluster.org yet?
11:39 firemanxbr lalatenduM, good number
11:40 firemanxbr ndevos, I already have one :D
11:40 firemanxbr ndevos, mr.marcelo.barbosa@gmail.com or firemanxbr@fedoraproject.org
11:41 lalatenduM firemanxbr, check http://www.gluster.org/community/documentation/index.php/Simplified_dev_workflow
11:41 ndevos firemanxbr: also upload a public ssh-key in gerrit :)
11:42 snewpy after submitting a bug and a patch on gerrit, is there something further that needs to be done to prod someone to review it?
11:42 snewpy i submitted one bug and patch that got reviewed right away and another one that just passed build testing but has no reviewers attached to it other than the automated one
11:42 snewpy http://review.gluster.org/#/c/9774/
11:43 ndevos snewpy: not really, you could add the maintainers for reviewing it, but they should check regularly too (their names are in the MAINTAINERS file)
11:43 harish joined #gluster
11:44 firemanxbr ndevos, done :D
11:44 ndevos firemanxbr: git clone ssh://ndevos@git.gluster.org/glusterfs (with the username changed)
11:45 ndevos firemanxbr: after the clone, you can: git checkout -t -b logrotate-12-weeks origin/master
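
[Aside: a hedged end-to-end sketch of the workflow being described, following the Simplified_dev_workflow page linked above; the branch name comes from ndevos, the commit message is only an example, and rfc.sh is the submission script shipped in the glusterfs source tree.]

git clone ssh://<username>@git.gluster.org/glusterfs
cd glusterfs
git checkout -t -b logrotate-12-weeks origin/master
# change "rotate 52" to "rotate 12"
$EDITOR extras/glusterfs-logrotate
# -s adds the Signed-off-by line that Gerrit review expects
git commit -as -m 'extras: rotate glusterfs logs for 12 weeks instead of 52'
# rfc.sh asks for a bug id and pushes the change to review.gluster.org
./rfc.sh
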
11:45 snewpy ndevos: ok, thanks
11:45 firemanxbr ndevos, argh :( at my work port 22 is closed :(
11:45 ndevos snewpy: also, 'git log $filename' can help with finding the right people for a review
11:46 firemanxbr ndevos, i'm trying from http clone
11:46 snewpy ndevos: thanks... I just wasn't sure if the assignment was automatic or something that needed to be prodded
11:46 ndevos firemanxbr: setup a Host definition in your ~/.ssh/config for git.gluster.org and set Port to 29418
11:47 firemanxbr ndevos, all ports are closed :(
11:47 ndevos snewpy: its not automatic, although that would be nice - if you have an idea how it can be automated, email the infra list :)
11:47 firemanxbr ndevos, only 80 and 443 are open :( my company are real bastards :]
11:47 ndevos firemanxbr: do you know corkscrew?
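
[Aside: a sketch of what is being hinted at here, pointing ssh at Gerrit's port 29418 and, when only an HTTP proxy is allowed out, tunnelling ssh through it with corkscrew; the proxy host and port are placeholders.]

cat >> ~/.ssh/config <<'EOF'
# the gluster Gerrit accepts ssh on 29418, not 22
Host git.gluster.org
    Port 29418
# if outbound ssh is blocked, corkscrew can tunnel it through the HTTP proxy:
#    ProxyCommand corkscrew proxy.example.com 3128 %h %p
EOF
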
11:47 ndevos firemanxbr: oh, maybe thats not sufficient in that case...
11:48 ndevos firemanxbr: posting patches need ssh, I think, not sure if it works over HTTP
11:48 ndevos firemanxbr: maybe you need to set a http password in the gerrit webui for your user?
11:50 firemanxbr ndevos, I'll see...
11:53 ndevos firemanxbr: I think it could work, the wireshark devs seem to be able to push changes over http
11:54 ndevos (to their gerrit instance, not ours ;)
11:54 SOLDIERz joined #gluster
11:54 firemanxbr ndevos, humm good idea, I'm trying to get through the proxy :]
11:55 ndevos REMINDER: Gluster Community Bug triage meeting starts in 5 minutes in #gluster-meeting
11:55 ndevos firemanxbr: good luck!
11:55 firemanxbr ndevos, I'm going to my first gluster meeting :D
11:56 ndevos firemanxbr: you are very welcome to join!
11:56 ndevos AND SO IS ANYONE ELSE
11:57 * firemanxbr :]
11:57 LebedevRI joined #gluster
12:15 shubhendu joined #gluster
12:15 malevolent joined #gluster
12:16 bene2 joined #gluster
12:21 nobody18288181 the getting started in depth links are all 404ing on the website
12:22 xavih joined #gluster
12:25 social joined #gluster
12:34 paraenggu joined #gluster
12:35 malevolent joined #gluster
12:35 xavih joined #gluster
12:40 atalur joined #gluster
12:56 Slashman joined #gluster
12:57 glusterbot News from newglusterbugs: [Bug 1198119] PNFS : CLI option to run ganesha servers on every node in trusted pool. <https://bugzilla.redhat.com/show_bug.cgi?id=1198119>
12:59 paraenggu left #gluster
12:59 ndevos nobody18288181: could you pass the details to tigert about that? or file a bug, or send an email to gluster-infra@gluster.org
12:59 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
13:00 anoopcs joined #gluster
13:01 7F1AAC6G1 joined #gluster
13:03 rjoseph joined #gluster
13:05 21WABH6ZL joined #gluster
13:07 B21956 joined #gluster
13:08 B21956 left #gluster
13:09 B21956 joined #gluster
13:10 topshare joined #gluster
13:11 Folken_ hello gluster people, if a brick in a disperse volume should die, in 2+1, how do you replace it?
13:13 pelox joined #gluster
13:16 elico joined #gluster
13:17 lpabon joined #gluster
13:21 bala joined #gluster
13:22 firemanxbr Folken_, great post: http://www.gluster.org/community/documentation/index.php/Gluster_3.1:_Migrating_Volumes
13:23 firemanxbr Folken_, I believe this solves your question :]
13:27 glusterbot News from newglusterbugs: [Bug 1057292] option rpc-auth-allow-insecure should default to "on" <https://bugzilla.redhat.com/show_bug.cgi?id=1057292>
13:31 rjoseph joined #gluster
13:33 T3 joined #gluster
13:46 bala joined #gluster
13:46 Folken_ firemanxbr: will that work if a brick is dead/missing?
13:46 theron joined #gluster
13:47 firemanxbr Folken_, what does "gluster peer status" return?
13:47 firemanxbr Folken_, or "gluster volume info"  ?
13:48 vimal joined #gluster
13:48 firemanxbr Folken_, please copy and paste it to fpaste.org so I can see it and help much better.
13:56 wkf joined #gluster
13:57 Bhaskarakiran|af joined #gluster
13:58 tigert nobody18288181: which links?
13:58 tigert nobody18288181: please mail gluster-infra the url you click, and the url of the page it is on?
13:58 tigert I looked at documentation/ and there those links do work for me
13:59 tigert anyway, I got to run, thus I cannot investigate it right now any further
13:59 prasanth_ joined #gluster
13:59 kovshenin joined #gluster
13:59 RameshN joined #gluster
14:03 julim joined #gluster
14:04 shaunm joined #gluster
14:05 o5k tigert, http://www.gluster.org/documentation/Getting_started_overview/
14:06 o5k tigert, many of the links in the top : Install, Configure, etc... => 404
14:08 Folken_ firemanxbr: my question is theoretical, it hasn't happened yet
14:15 soumya_ joined #gluster
14:15 sprachgenerator joined #gluster
14:17 prasanth_ joined #gluster
14:22 firemanxbr Folken_, okay, no problem :)
14:30 georgeh-LT2 joined #gluster
14:30 dgandhi joined #gluster
14:32 SOLDIERz joined #gluster
14:35 diegows joined #gluster
14:36 _Bryan_ joined #gluster
14:39 ninkotech joined #gluster
14:39 ninkotech_ joined #gluster
14:44 SOLDIERz joined #gluster
14:48 Folken_ firemanxbr: I just want to put together a procedure/plan so my staff know wtf to do if a brick totally dies
14:49 Folken_ firemanxbr: recreate brick, add back to pool, run a heal on all files
14:49 firemanxbr Folken_, hummm
14:50 firemanxbr Folken_, I think this process needs manual intervention
14:51 firemanxbr Folken_, and this process depends on your brick architecture
14:52 firemanxbr Folken_, for example: in a 1 x 1 replicate setup, I think a simple solution (a bash, python or golang script) could recreate the other brick based on the first, replicated brick
14:53 firemanxbr Folken_, another option is to push your backup files; I think this is very much personal to your plan, but all of these ideas just simulate a human operator
14:54 firemanxbr Folken_, gluster is made for mission-critical solutions
14:55 firemanxbr Folken_, there is a good guide for thinking about the best plan here:
14:56 soumya joined #gluster
14:57 firemanxbr Folken_, https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3/html/Administration_Guide/part-Overview.html
14:57 firemanxbr Folken_, good luck :]
15:00 Apeksha joined #gluster
15:05 TealS joined #gluster
15:08 plarsen joined #gluster
15:14 lalatenduM joined #gluster
15:15 bennyturns joined #gluster
15:18 SOLDIERz joined #gluster
15:19 gem joined #gluster
15:21 meghanam joined #gluster
15:23 nangthang joined #gluster
15:26 TealS joined #gluster
15:26 coredump joined #gluster
15:31 shubhendu joined #gluster
15:35 [Enrico] joined #gluster
15:37 mkzero joined #gluster
15:54 RameshN joined #gluster
15:56 kovshenin joined #gluster
15:56 shubhendu_ joined #gluster
15:57 plarsen joined #gluster
16:01 lifeofgu_ joined #gluster
16:02 gem joined #gluster
16:03 lpabon joined #gluster
16:04 shaunm joined #gluster
16:04 ron-slc_ joined #gluster
16:05 TealS joined #gluster
16:09 shubhendu__ joined #gluster
16:16 RameshN joined #gluster
16:18 nbalacha joined #gluster
16:26 rjoseph joined #gluster
16:27 andreask left #gluster
16:29 kshlm joined #gluster
16:39 bala joined #gluster
16:43 RameshN joined #gluster
16:43 vipulnayyar joined #gluster
16:52 diegows joined #gluster
16:52 jmarley joined #gluster
16:55 xavih joined #gluster
16:58 T3 joined #gluster
17:04 shubhendu__ joined #gluster
17:15 bala joined #gluster
17:16 rafi joined #gluster
17:17 Maya_ joined #gluster
17:19 mayae joined #gluster
17:26 sputnik13 joined #gluster
17:33 shubhendu_ joined #gluster
17:38 capri joined #gluster
17:57 jobewan joined #gluster
17:58 mayae joined #gluster
18:07 rwheeler joined #gluster
18:13 Rapture joined #gluster
18:14 SOLDIERz joined #gluster
18:16 vipulnayyar joined #gluster
18:27 PeterA joined #gluster
18:29 Prilly joined #gluster
18:36 rotbeard joined #gluster
18:39 Philambdo1 joined #gluster
18:48 mayae joined #gluster
18:51 deniszh joined #gluster
18:52 chirino joined #gluster
19:01 pdrakeweb joined #gluster
19:19 ira joined #gluster
19:23 bala joined #gluster
19:25 mayae joined #gluster
19:28 btspce joined #gluster
19:33 btspce Hi, I'm running a distributed-replicated cluster (EL7, Gluster 3.6.2): 4 nodes with 4 drives each (JBOD, one brick per hard drive), replica 2. I need to take one of the drives offline as I suspect it is going to die soon. What is the correct way to do it? #gluster volume remove-brick gv0 replica 1 host1:/srv/sdc1 force ?
19:42 lkoranda joined #gluster
19:43 Lee- joined #gluster
19:50 B21956 joined #gluster
20:02 JoeJulian btspce: Just kill glusterfsd for that brick, replace the drive, set the ,,(volume-id) and gluster volume start $volname force
20:02 glusterbot btspce: The volume-id is an extended attribute on the brick root which identifies that brick for use with a specific volume. If that attribute is missing, gluster assumes that the brick did not mount and will not start the brick service for that brick. To set the id on a replaced brick, read it from another brick "getfattr -n trusted.glusterfs.volume-id -d -e hex $brick_root" and set
20:02 glusterbot it on the new brick with "setfattr -n trusted.glusterfs.volume-id".
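
[Aside: JoeJulian's procedure as a hedged sketch, reusing btspce's names from the question above (volume gv0, failing brick host1:/srv/sdc1) and assuming /srv/sdb1 is a healthy brick of the same volume on that host.]

# 1. note the PID of the brick process and stop it
gluster volume status gv0 | grep /srv/sdc1
kill <brick-pid>

# 2. replace the drive, recreate the filesystem, mount it back at /srv/sdc1

# 3. copy the volume-id xattr from a healthy brick onto the new one
getfattr -n trusted.glusterfs.volume-id -d -e hex /srv/sdb1
setfattr -n trusted.glusterfs.volume-id -v 0x<hex-value-from-above> /srv/sdc1

# 4. restart the brick and let self-heal repopulate it
gluster volume start gv0 force
gluster volume heal gv0 full
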
20:04 JoeJulian @change "volume-id" 1 "s/that brick/it/"
20:04 glusterbot JoeJulian: Error: The command "change" is available in the Factoids, Herald, and Topic plugins.  Please specify the plugin whose command you wish to call by using its name as a command before "change".
20:04 JoeJulian @factoids change "volume-id" 1 "s/that brick/it/"
20:04 glusterbot JoeJulian: The operation succeeded.
20:05 JoeJulian @factoids change "volume-id" 1 "s/the brick service/glusterfsd/"
20:05 glusterbot JoeJulian: The operation succeeded.
20:06 JoeJulian @factoids change "volume-id" 1 "s/To set the id.*//"
20:06 glusterbot JoeJulian: The operation succeeded.
20:06 semiosis s/foo/bar/
20:06 glusterbot What semiosis meant to say was: An error has occurred and has been logged. Please contact this bot's administrator for more information.
20:07 JoeJulian @learn volume-id as 'To set the id on a replaced brick, read it from another brick "getfattr -n trusted.glusterfs.volume-id -d -e hex $brick_root" and set it on the new brick with "setfattr -n trusted.glusterfs.volume-id -v $volume_id".'
20:07 glusterbot JoeJulian: The operation succeeded.
20:07 JoeJulian yeah, fixing that is on my to-do list.
20:28 hagarth joined #gluster
20:28 DV joined #gluster
20:39 xiu joined #gluster
20:59 mayae joined #gluster
21:03 sputnik13 joined #gluster
21:21 mayae Hi there, I have two nodes in a volume that are set up in replicate mode that geo-replicate to another two nodes (also in replicate mode). If I wanted to upgrade the Gluster version on all the nodes, would I need to pause/stop geo-replication?
21:30 rwheeler joined #gluster
21:31 Prilly joined #gluster
21:35 chirino joined #gluster
21:38 JoeJulian mayae: I would.
21:39 mayae Cool, thanks
21:47 shaunm joined #gluster
21:57 mayae joined #gluster
22:05 btspce joined #gluster
22:11 rotbeard joined #gluster
22:12 mayae joined #gluster
22:18 plarsen joined #gluster
22:23 B21956 left #gluster
22:34 btspce JoeJulian: Thanks, will try tomorrow. Is this not doable using the remove-brick command?
22:34 JoeJulian You would need to remove both replicas or change your entire volume to replica 1. Seems like way too much overkill.
22:37 jobewan joined #gluster
22:45 btspce Is this something that will be fixed or will remove-brick never work for removing one brick in a replica?
22:51 elico joined #gluster
22:54 ira joined #gluster
22:59 JoeJulian It's not really intended to do that, no.
22:59 JoeJulian You can replace brick.
22:59 ira joined #gluster
23:00 JoeJulian Why would you want to permanently remove one half of a replica pair?
23:04 btspce To remove a failing drive in a jbod replica setup for example where there is one brick per drive.
23:05 JoeJulian So you just want to permanently remove it, not replace it?
23:05 btspce no it would be replaced..
23:06 btspce so remove-brick is more for shrinking the whole volume?
23:07 JoeJulian I could see a need for a replace-brick..--inplace command that does the volume-id thing for you, but correct, remove-brick is for shrinking the volume.
23:07 glusterbot JoeJulian: replace-brick..'s karma is now -1
23:07 JoeJulian bite me glusterbot.
23:09 btspce wasn't replace-brick deprecated?
23:11 JoeJulian No
23:11 JoeJulian replace-brick...start was.
23:11 JoeJulian which is stupid.
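
[Aside: for completeness, a hedged sketch of the one-shot replace-brick form that remains supported in 3.6, assuming volume gv0 and a fresh, empty brick path on the same host; self-heal then copies the data across from the surviving replica.]

gluster volume replace-brick gv0 host1:/srv/sdc1 host1:/srv/sdc1_new commit force
gluster volume heal gv0 full
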
23:13 JoeJulian Just cross your fingers that the increased load on the good replica doesn't kill the drive and/or performance...
23:13 JoeJulian @meh
23:13 glusterbot JoeJulian: I'm not happy about it either
23:13 btspce ok thanks for clarifying that..
23:33 wushudoin joined #gluster
23:59 wkf joined #gluster
