
IRC log for #gluster, 2014-11-06


All times shown according to UTC.

Time Nick Message
00:06 hollaus joined #gluster
00:07 theron joined #gluster
00:07 theron joined #gluster
00:13 DV joined #gluster
00:17 bala joined #gluster
00:25 MugginsM joined #gluster
00:29 n-st joined #gluster
00:48 rjoseph joined #gluster
00:51 bala1 joined #gluster
00:55 Pupeno joined #gluster
00:55 Pupeno joined #gluster
00:56 kdhananjay joined #gluster
01:08 theron_ joined #gluster
01:11 lflores left #gluster
01:14 topshare joined #gluster
01:19 n-st joined #gluster
01:21 n-st_ joined #gluster
01:32 kdhananjay joined #gluster
01:33 n-st_ joined #gluster
02:01 kdhananjay joined #gluster
02:10 harish joined #gluster
02:23 kdhananjay joined #gluster
02:35 badone joined #gluster
02:38 meghanam_ joined #gluster
02:39 meghanam joined #gluster
02:45 doekia joined #gluster
02:47 kdhananjay joined #gluster
02:53 msmith_ joined #gluster
02:53 msmith_ joined #gluster
03:14 hagarth joined #gluster
03:20 _Bryan_ joined #gluster
03:25 bharata-rao joined #gluster
03:32 shubhendu joined #gluster
03:39 atalur joined #gluster
03:54 RameshN joined #gluster
03:55 kanagaraj joined #gluster
03:57 MugginsM joined #gluster
04:04 itisravi joined #gluster
04:08 topshare joined #gluster
04:13 kumar joined #gluster
04:16 RameshN joined #gluster
04:25 krullie joined #gluster
04:25 nbalachandran joined #gluster
04:26 ppai joined #gluster
04:30 georgeh joined #gluster
04:34 anoopcs joined #gluster
04:35 rafi1 joined #gluster
04:35 Rafi_kc joined #gluster
04:38 dusmant joined #gluster
04:43 spandit joined #gluster
04:52 georgeh joined #gluster
04:52 hagarth joined #gluster
04:54 kanagaraj joined #gluster
04:57 jiffin joined #gluster
04:57 meghanam joined #gluster
04:57 meghanam_ joined #gluster
04:59 rjoseph joined #gluster
05:00 prasanth_ joined #gluster
05:00 smohan joined #gluster
05:11 ^rcaskey joined #gluster
05:13 samsaffron___ joined #gluster
05:21 lalatenduM joined #gluster
05:25 nshaikh joined #gluster
05:28 kshlm joined #gluster
05:31 dusmant joined #gluster
05:31 smohan_ joined #gluster
05:33 karnan joined #gluster
05:35 ramteid joined #gluster
05:39 smohan joined #gluster
05:43 saurabh joined #gluster
05:47 nbalachandran joined #gluster
05:47 soumya joined #gluster
05:49 johnnytran joined #gluster
05:51 sadbox joined #gluster
05:53 ndarshan joined #gluster
05:59 sahina joined #gluster
06:04 gehaxelt joined #gluster
06:06 rjoseph joined #gluster
06:17 overclk joined #gluster
06:18 ricky-ticky joined #gluster
06:24 SOLDIERz joined #gluster
06:29 rjoseph joined #gluster
06:31 kdhananjay joined #gluster
06:35 raghu` joined #gluster
06:45 nbalachandran joined #gluster
06:47 badone joined #gluster
06:48 soumya joined #gluster
06:56 ppai joined #gluster
06:58 overclk joined #gluster
07:00 ctria joined #gluster
07:03 topshare joined #gluster
07:03 rjoseph joined #gluster
07:04 soumya joined #gluster
07:10 rgustafs joined #gluster
07:11 smohan joined #gluster
07:28 elico joined #gluster
07:36 Fen2 joined #gluster
07:56 harish joined #gluster
07:59 ppai joined #gluster
08:07 rolfb joined #gluster
08:08 R0ok_ joined #gluster
08:12 glusterbot New news from resolvedglusterbugs: [Bug 765522] [glusterfs-3.2.5qa6]: replace brick operation crashed the source brick <https://bugzilla.redhat.com/show_bug.cgi?id=765522>
08:12 RameshN joined #gluster
08:20 glusterbot New news from newglusterbugs: [Bug 1161025] Brick process crashed <https://bugzilla.redhat.com/show_bug.cgi?id=1161025>
08:23 shubhendu_ joined #gluster
08:28 topshare joined #gluster
08:31 deniszh joined #gluster
08:34 haomaiwa_ joined #gluster
08:43 shubhendu_ joined #gluster
08:45 atinmu joined #gluster
08:46 vikumar joined #gluster
08:50 glusterbot New news from newglusterbugs: [Bug 1161034] rename operation doesn't work <https://bugzilla.redhat.com/show_bug.cgi?id=1161034>
08:57 nishanth joined #gluster
08:59 Guest53987 joined #gluster
09:07 Slashman joined #gluster
09:08 vikumar joined #gluster
09:11 Slydder joined #gluster
09:11 Slydder hey all
09:12 Slydder just a quick question about setting options for volumes and clusters. are there any situations where gluster needs to be restarted after changes made to certain options?
09:13 dusmant joined #gluster
09:15 gildub joined #gluster
09:16 atinmu joined #gluster
09:19 shubhendu_ joined #gluster
09:19 haomaiwang joined #gluster
09:19 ndevos Slydder: yes, for some options you need to trigger a regeneration of the .vol files, that happens on "gluster volume stop .. ; gluster volume start .."
09:19 Slydder kk
09:19 Slydder thanks.
09:19 ndevos Slydder: also, changes in /etc/glusterfs/glusterd.vol would require a restart of glusterd
09:20 Slydder ndevos: now that was a bit obvious. lol
09:20 glusterbot New news from newglusterbugs: [Bug 1161037] cli command does not show anything when command times out <https://bugzilla.redhat.com/show_bug.cgi?id=1161037>
09:20 ndevos :P you never know!
09:21 ndevos Slashman: server.allow-insecure is the one option that I always encounter, it's even been added to the release notes: http://blog.gluster.org/2014/11/glusterfs-3-5-3beta2-is-now-available-for-testing/
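A minimal sketch of the workflow described above, assuming a hypothetical volume named myvol; many options take effect as soon as they are set, but ones like server.allow-insecure only land in the regenerated .vol files after a stop/start, and edits to /etc/glusterfs/glusterd.vol need a glusterd restart:

    # set a volume option; many take effect immediately
    gluster volume set myvol server.allow-insecure on

    # for options that require regenerated .vol files, cycle the volume
    gluster volume stop myvol
    gluster volume start myvol

    # changes to /etc/glusterfs/glusterd.vol require restarting glusterd
    service glusterd restart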
09:24 T0aD joined #gluster
09:25 Guest53987 joined #gluster
09:25 harish joined #gluster
09:26 SteveCooling hey, is there any updated documentation on geo-replication?
09:27 Slashman_ joined #gluster
09:27 deniszh joined #gluster
09:28 SteveCooling or, to put it in another way. Is this the current one (using GlusterFS 3.5.2)
09:28 SteveCooling https://github.com/gluster/glusterfs/blob/master/doc/admin-guide/en-US/markdown/admin_geo-replication.md
09:28 glusterbot Title: glusterfs/admin_geo-replication.md at master · gluster/glusterfs · GitHub (at github.com)
09:29 harish joined #gluster
09:29 ProT-0-TypE joined #gluster
09:30 hagarth SteveCooling: yes, that would be the latest.
09:31 SteveCooling thanks a lit
09:31 SteveCooling *lot
09:31 SteveCooling :-)
09:32 hagarth :)
09:35 nshaikh joined #gluster
09:38 ProT-0-TypE joined #gluster
09:42 getup joined #gluster
09:44 hagarth SteveCooling: https://github.com/gluster/glusterfs/blob/release-3.5/doc/admin-guide/en-US/markdown/admin_distributed_geo_rep.md is the more appropriate on
09:44 glusterbot Title: glusterfs/admin_distributed_geo_rep.md at release-3.5 · gluster/glusterfs · GitHub (at github.com)
09:44 hagarth s/on/one/
09:44 glusterbot What hagarth meant to say was: SteveCooling: https://github.com/gluster/glusterfs/blob/release-3.5/doc/admin-guide/en-US/markdown/admin_distributed_geo_rep.md is the more appropriate one
09:44 hagarth sorry, I did not notice the differently named geo-rep page in the URL you posted.
09:45 SteveCooling ah
09:46 SteveCooling that's what i was afraid of :)
09:46 dusmant joined #gluster
09:48 nbalachandran joined #gluster
09:50 ppai joined #gluster
09:51 SteveCooling this looks a lot more informative. thanks, hagarth
09:52 hagarth SteveCooling: glad to be providing the right assistance :)
09:57 mator joined #gluster
09:57 mator hello
09:57 glusterbot mator: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
09:58 mator what is the best practice with glusterfs volume to export via NFS, with native linux NFS server or in-gluster NFS server (volume option nfs.disable) ? thanks
10:02 partner dunno about best practice but given you have already nfs there with glusterfs why add another (and then keep disabling so that they don't overlap)
10:02 partner also, while its a funny feature, you can actually mount the nfs from any of the servers involved in the volume
10:04 partner ie. if your volume for nfs lives on server "4" you can still mount it from server "1" though that creates unnecessary traffic between 1 <-> 4. especially if you use a round-robin dns entry for all the gluster servers (for the volume file)
10:04 partner and use the same RR for mounting the nfs..
10:05 nbalachandran joined #gluster
10:05 harish joined #gluster
10:06 partner so while not best practice by any means, my few cents still :)
10:09 mator just read from http://gluster.org/community/documentation/index.php/Gluster_3.2:_Manually_Mounting_Volumes_Using_NFS
10:09 mator Note: Gluster NFS server does not support UDP
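As a rough illustration of the two approaches being weighed here (server and volume names are hypothetical): the built-in Gluster NFS server exports each volume by its volume name and speaks NFSv3 over TCP only, and it can be switched off per volume with nfs.disable when the kernel NFS server is preferred:

    # exports from the gluster NFS server show up by volume name
    showmount -e server1

    # mount it; NFSv3 over TCP, since gluster NFS does not support UDP
    mount -t nfs -o vers=3,proto=tcp server1:/myvol /mnt/myvol

    # disable the built-in NFS server for a volume if kernel NFS is used instead
    gluster volume set myvol nfs.disable on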
10:12 glusterbot New news from resolvedglusterbugs: [Bug 1116236] [DHT-REBALANCE]: Few files are missing after add-brick and rebalance <https://bugzilla.redhat.com/show_bug.cgi?id=1116236>
10:12 mator probably the only difference... so going to decide myself what to use with the customer glusterfs
10:12 mator thanks anyway
10:14 gildub joined #gluster
10:21 partner mator: maybe wait for more comments, this is pretty much US-timezone channel so at this point its quite quiet with the experts
10:25 ricky-ticky1 joined #gluster
10:26 ppai joined #gluster
10:27 topshare joined #gluster
10:29 mator partner, thanks for the suggestion, but i'm currently fixing a client's (customer) problem with NFS sharing of a glusterfs volume... It wasn't me who implemented glusterfs on this site and i'm lacking the implementation documents... so far the customer asked why some servers' exports are as defined in /etc/exports (linux nfs server) and some as the volume name (glusterfs NFS server), and the second problem is why it has switched to TCP (instead of the old UDP nfs mounts)
10:29 mator so it's currently obvious that it was used to be linux nfs server
10:29 mator sorry for not being exact
10:30 mator in first place
10:31 mator i wonder whether the redhat KB has info on this (linux nfs server vs gluster nfs implementation), wasn't able to find anything so far
10:35 partner hmm dunno, i'm using both approaches but then again i wouldn't want to use neither one, just being forced
10:36 jvandewege_ joined #gluster
10:38 partner IMO the biggest benefit for gluster nfs is the fact its aware of the volumes while standard nfs isn't
10:43 Guest53987 joined #gluster
10:43 aravindavk joined #gluster
10:45 maveric_amitc_ joined #gluster
10:47 Slashman joined #gluster
10:49 liquidat joined #gluster
10:50 necrogami joined #gluster
10:50 glusterbot New news from newglusterbugs: [Bug 1161066] A disperse 2 x (2 + 1) = 6 volume, kill two glusterfsd program, ls mountpoint abnormal. <https://bugzilla.redhat.com/show_bug.cgi?id=1161066>
10:57 kdhananjay joined #gluster
11:04 haomaiwang joined #gluster
11:05 ppai joined #gluster
11:06 soumya_ joined #gluster
11:13 sahina joined #gluster
11:16 necrogami joined #gluster
11:18 ndarshan joined #gluster
11:20 shubhendu_ joined #gluster
11:21 jvandewege joined #gluster
11:23 Guest53987 joined #gluster
11:26 rgustafs joined #gluster
11:31 Arrfab ndevos: nice blog post about the packaging issue for el6.6 : I wanted to write one myself that would appear on planet.centos.org but you were faster :-)
11:32 ndevos Arrfab: thanks! let me know if you have any inputs, or want me to set up something so that it can land on planet.centos.org
11:33 tg2 joined #gluster
11:33 ndevos ah, well, maybe the planet.centos.org is a little more restricted than the fedoraproject one
11:34 Arrfab ndevos: we have only core members' blogs aggregated on planet.centos.org. I spoke with lalatenduM about that last week, as our gluster nodes were the only one complaining about the update to c6.6
11:34 Arrfab wondering why that happens so often : it's not the first time that the gluster packages shipped in the el land conflict with the gluster.org ones
11:35 smohan_ joined #gluster
11:35 lalatenduM Arrfab, this is the first time conflict came , somehow the version numbers worked in our favor before
11:36 ndevos Arrfab: I'm not sure why it happens "so often", but I will support anything to prevent it in the future
11:36 Arrfab lalatenduM: I'd have to browse my archives but I'm sure we had an issue in the past
11:36 ndevos yeah, I think I remember something about it too
11:36 Arrfab lalatenduM: talking about that : do you plan to rebuild 3.6.1 packages on cbs soon ? (as soon as they're available I mean)
11:36 lalatenduM Arrfab, ohh, may be something I am not aware
11:37 lalatenduM Arrfab, yes thats the plan
11:37 * ndevos isnt on any CentOS list, but feel free to CC me on any annoying issues related to Gluster/RHEL packaging
11:37 Arrfab lalatenduM: cool, let me know and I'll update our 4 nodes gluster cluster directly
11:37 Arrfab ndevos: cool, thanks
11:37 lalatenduM Arrfab, sure
11:38 kkeithley1 joined #gluster
11:38 ndevos Arrfab: if there is a list that I should rather subscribe to, thats fine too, but I probably wont look at many discussions there
11:38 SOLDIERz joined #gluster
11:39 Arrfab ndevos: well, lalatenduM is the "bridge" , as the SIG storage member wrt gluster so he's following that
11:39 ndevos Arrfab: yes, and I'm sure lalatenduM can bring things to my attention :)
11:39 Arrfab lalatenduM: wondering if a dedicated centos.org mailing list would be needed for gluster/storage : we can host one if needed
11:40 ndevos I'd subscribe to a storage-sig@centos.org list, if there is one?
11:41 * ndevos is pretty ignorant to the whole CentOS business - a day just has too few hours
11:42 lalatenduM Arrfab, I think centos-devel is working fine for us, it will also help build a community around storage sig
11:43 lalatenduM Arrfab, ndevos till others complain about storage sig mails in the centos-devel ML, IMO we should keep using it :)
11:43 ndevos lalatenduM: maybe you can keep an eye on any gluster related mails there, and forward/include gluster-{devel,users} as you see fit?
11:43 lalatenduM ndevos, yes I am doing that
11:47 Arrfab lalatenduM: wfm :-)
11:49 nshaikh joined #gluster
11:51 glusterbot New news from newglusterbugs: [Bug 1161104] replace-brick start causes gluster to hang <https://bugzilla.redhat.com/show_bug.cgi?id=1161104>
11:53 SOLDIERz joined #gluster
11:58 meghanam_ joined #gluster
11:58 edward1 joined #gluster
11:59 ndevos lalatenduM: I've not seen many details on the gluster.org lists about the storage sig, maybe you can send an update about it in the next few days?
12:00 lalatenduM ndevos, yes, good idea
12:00 Humble lalatenduM, u can put a blog in your space may be ?
12:00 soumya_ joined #gluster
12:00 meghanam joined #gluster
12:01 lalatenduM Humble, yes
12:01 lalatenduM will do that
12:02 jdarcy joined #gluster
12:02 Humble lalatenduM++
12:02 glusterbot Humble: lalatenduM's karma is now 3
12:03 ndevos lalatenduM++ cool, thanks!
12:03 glusterbot ndevos: lalatenduM's karma is now 4
12:03 lalatenduM :)
12:08 topshare joined #gluster
12:09 ndarshan joined #gluster
12:09 shubhendu_ joined #gluster
12:10 getup joined #gluster
12:10 SOLDIERz joined #gluster
12:18 meghanam_ joined #gluster
12:23 SOLDIERz joined #gluster
12:24 necrogami joined #gluster
12:28 ppai joined #gluster
12:29 itisravi_ joined #gluster
12:32 coredump joined #gluster
12:36 monotek joined #gluster
12:46 deniszh left #gluster
12:47 BlackPanx joined #gluster
12:47 BlackPanx hello everyone
12:48 BlackPanx what is the best command to check overall gluster status? to be sure it's in a healthy state ? peers connected and entries synchronized?
12:48 BlackPanx i came up with this command:  gluster volume heal STORAGE info
12:48 BlackPanx is this enough to be sure it's up and running as it's supposed to
12:48 BlackPanx or do i have to connect to each node and verify its state ?
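A hedged sketch of the usual health checks, reusing the STORAGE volume name from the question; heal info only covers pending self-heals, while peer and volume status cover connectivity and the brick/NFS/self-heal processes:

    # peer connectivity, run from any one node
    gluster peer status

    # brick processes, NFS server and self-heal daemon state for the volume
    gluster volume status STORAGE

    # entries still waiting for self-heal, and any in split-brain
    gluster volume heal STORAGE info
    gluster volume heal STORAGE info split-brain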
12:49 chirino_m joined #gluster
12:51 SOLDIERz joined #gluster
12:54 gildub joined #gluster
12:56 LebedevRI joined #gluster
12:56 hagarth joined #gluster
12:57 jdarcy joined #gluster
13:00 Fen1 joined #gluster
13:01 meghanam joined #gluster
13:01 meghanam_ joined #gluster
13:03 getup joined #gluster
13:09 B21956 joined #gluster
13:11 topshare joined #gluster
13:13 T0aD joined #gluster
13:13 shubhendu_ joined #gluster
13:14 calum_ joined #gluster
13:18 topshare joined #gluster
13:27 Slashman joined #gluster
13:29 SOLDIERz hey everyone, after setting up my glusterfs cluster over 12 nodes i noticed something odd
13:29 mariusp joined #gluster
13:30 SOLDIERz if i run gluster peer status on the first node everything seems fine, all hosts connected
13:30 SOLDIERz but when i run it on another node in the cluster it does not return the hostname for the first node, only the ip
13:31 SOLDIERz so each node is listed by its hostname, like node02, but node01 is only listed with its ip, like Hostname: 192.168.0.1
13:31 calisto joined #gluster
13:32 SOLDIERz any idea why this happens? I started the peering and also the gluster volume from node01
13:33 bennyturns joined #gluster
13:34 Philambdo joined #gluster
13:35 necrogami joined #gluster
13:42 topshare joined #gluster
13:48 theron joined #gluster
13:54 virusuy joined #gluster
13:54 virusuy joined #gluster
13:57 SOLDIERz any ideas
14:00 ndevos @hostnames
14:00 glusterbot ndevos: Hostnames can be used instead of IPs for server (peer) addresses. To update an existing peer's address from IP to hostname, just probe it by name from any other peer. When creating a new pool, probe all other servers by name from the first, then probe the first by name from just one of the others.
14:00 ndevos SOLDIERz: that ^ should help :)
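A minimal sketch of the re-probe glusterbot describes, using the node names from the question above; probing the first node by hostname from any other peer replaces its stored IP address:

    # run on any node other than node01
    gluster peer probe node01

    # verify: node01 should now show up by hostname
    gluster peer status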
14:01 SOLDIERz ndevos thx for that
14:02 bene2 joined #gluster
14:02 bennyturns joined #gluster
14:07 topshare joined #gluster
14:17 bennyturns joined #gluster
14:21 aravindavk joined #gluster
14:22 SteveCooling why is my georeplicating setup not doing deletes on the slave? (3.5.2)
14:24 SteveCooling also seems to not like the changelog change detection very much. The log mentions "_GMaster: falling back to xsync mode" after a while, and it does not change back even after a long time of no changes.
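For 3.5-style distributed geo-replication, the session state and change-detection settings can at least be inspected per session; a rough sketch, with hypothetical master volume and slave names (xsync is the fallback crawl, changelog the normal change detector):

    # per-session state; a "faulty" status lines up with "worker died" messages in the logs
    gluster volume geo-replication mastervol slavehost::slavevol status

    # show the session configuration, including the change_detector in use
    gluster volume geo-replication mastervol slavehost::slavevol config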
14:27 msmith_ joined #gluster
14:29 theron joined #gluster
14:35 _nixpanic joined #gluster
14:35 _nixpanic joined #gluster
14:36 necrogami joined #gluster
14:37 necrogami joined #gluster
14:43 bala joined #gluster
14:48 jobewan joined #gluster
14:49 itisravi joined #gluster
14:49 jmarley joined #gluster
14:57 nbalachandran joined #gluster
14:59 mariusp joined #gluster
15:00 SteveCooling the logs mention "worker died in startup phase", and the replication status is faulty for a while
15:00 coredump joined #gluster
15:05 plarsen joined #gluster
15:08 plarsen joined #gluster
15:09 msmith__ joined #gluster
15:11 churnd joined #gluster
15:12 Pupeno_ joined #gluster
15:15 _Bryan_ joined #gluster
15:16 mariusp joined #gluster
15:24 failshell joined #gluster
15:26 soumya_ joined #gluster
15:29 davemc joined #gluster
15:36 diegows joined #gluster
15:44 failshell joined #gluster
15:45 bennyturns joined #gluster
15:45 DougBishop joined #gluster
15:46 nshaikh joined #gluster
15:48 _dist joined #gluster
15:49 parallax-lawre-1 joined #gluster
15:54 dusmant joined #gluster
15:54 ctria joined #gluster
16:03 necrogami joined #gluster
16:10 Pupeno joined #gluster
16:10 Pupeno joined #gluster
16:15 parallax-lawrenc joined #gluster
16:16 ctria joined #gluster
16:23 mariusp joined #gluster
16:25 kumar joined #gluster
16:28 kr0w left #gluster
16:36 elico left #gluster
16:36 coredumb joined #gluster
16:37 coredumb Hi folks
16:38 coredumb what would be the easiest way - libgfapi i guess - to access formatable/mountable block devices from a glusterfs volume ?
16:38 ndevos coredumb: iscsi on gluster?
16:39 ndevos coredumb: https://forge.gluster.org/gfapi-module-for-linux-target-driver-
16:39 glusterbot Title: gfapi module for Linux Target Driver / LIO - Gluster Community Forge (at forge.gluster.org)
16:43 coredumb ndevos: is that functionnal ?
16:43 ndevos coredumb: I think so, but I have not tried it yet
16:44 ndevos coredumb: well, there actually is something like that in Fedora already
16:44 ndevos coredumb: scsi-target-utils-gluster.x86_64 is in Fedora 20
16:45 ndevos coredumb: there should be a post about it on blog.gluster.org somewhere too
16:45 coredumb ok
16:46 coredumb was wondering how it would handle openvz containers ...
16:46 * ndevos cant even guess about that
16:47 fandi joined #gluster
16:49 coredumb ndevos: i know that ovz root stored on glusterfs totaly sucks
16:49 coredumb :)
16:49 coredumb so was wondering how it would behave on a block device
16:50 ndevos coredumb: I would expect that using iscsi on gluster would be more similar to the performance of qemu+libgfapi
16:50 ndevos which seems to be acceptable for many uses
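For comparison, the qemu+libgfapi path mentioned here accesses a disk image on the volume through gluster:// URLs over gfapi rather than a FUSE mount; a sketch with hypothetical server, volume and image names:

    # create a disk image on the gluster volume via libgfapi
    qemu-img create -f qcow2 gluster://server1/myvol/vm1.qcow2 20G

    # boot a guest from it the same way (remaining machine options omitted)
    qemu-system-x86_64 -m 2048 -drive file=gluster://server1/myvol/vm1.qcow2,if=virtio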
16:50 hagarth joined #gluster
16:50 coredumb ndevos: indeed that's what i need to verify
16:54 Pupeno joined #gluster
16:54 Pupeno joined #gluster
16:56 nishanth joined #gluster
16:59 mariusp joined #gluster
17:00 Pupeno_ joined #gluster
17:02 virusuy hi guys
17:02 virusuy i have a gluster with 2 node in replicated mode
17:03 virusuy and im trying to set-up quotas
17:03 virusuy i enabled them, set them, but when i run "gluster volume info VOLNAME" i do not see those limits like "feature.limit-usage xxxxxxxxxxx"
17:03 virusuy i only see features.quota: on
17:04 virusuy no warning or error messages while setting up limits, but limits don't work
17:04 virusuy i'm missing something ?
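One possible explanation, not confirmed in the channel: in newer (3.5-era) quota, limits are no longer shown as features.limit-usage in the volume info output but are reported by the quota subcommand itself. A sketch with a hypothetical volume and path:

    gluster volume quota myvol enable
    gluster volume quota myvol limit-usage /projects 10GB

    # limits and current usage are listed here rather than in "volume info"
    gluster volume quota myvol list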
17:11 meghanam__ joined #gluster
17:14 meghanam joined #gluster
17:21 kanagaraj joined #gluster
17:21 klaas joined #gluster
17:29 zerick joined #gluster
17:48 deniszh joined #gluster
17:51 deniszh joined #gluster
17:54 clutchk joined #gluster
17:58 deniszh joined #gluster
18:02 Slydder joined #gluster
18:06 theron joined #gluster
18:06 calisto joined #gluster
18:09 bene3 joined #gluster
18:13 lalatenduM joined #gluster
18:17 plarsen joined #gluster
18:19 andreask joined #gluster
18:24 theron_ joined #gluster
18:24 nshaikh joined #gluster
18:35 thogue joined #gluster
18:35 ProT-0-TypE joined #gluster
18:37 MrNaviPa_ joined #gluster
18:42 clutchk Hey, question about replication when using gluster as back-end storage for KVM. Is it true that the self-heal daemon needs to be turned off on the gluster servers? Also, is this being addressed in any newer versions of gluster?
18:49 lalatenduM joined #gluster
19:05 lalatenduM joined #gluster
19:18 jackdpeterson joined #gluster
19:20 jackdpeterson Hey all, I'm currently working on expanding my GlusterFS (replica 2 w/ a total of 2 disks -- one on each server) pool from 500G to 1T (adding 1x drive per box). Current setup -- 2x ubuntu 12.04 boxes w/ 3.5 PPA and all updates installed. Are there any gotchas that I should be aware of?
19:21 mariusp joined #gluster
19:22 bennyturns joined #gluster
19:26 davemc Gluster Use survey closes tomorrow, Friday, 7-November. Last chance: https://www.surveymonkey.com/s/DLN7MQX
19:26 glusterbot Title: GlusterFS use survey (at www.surveymonkey.com)
19:33 lalatenduM joined #gluster
19:35 plarsen joined #gluster
19:38 _dist joined #gluster
19:44 rotbeard joined #gluster
19:56 andreask joined #gluster
20:02 theron joined #gluster
20:23 bene3 jackdpeterson, before you run rebalancer, consider upgrading Gluster to 3.6 (right folks?)
20:24 jackdpeterson -- stability... good/okay/probably safe but might bite me ... badly?
20:29 jackdpeterson @bene3 -- I'm not seeing that PPA for the 3.6 line at the moment anyways. Perhaps pending updates/builds I assume
20:56 haomaiwa_ joined #gluster
21:05 kkeithley_ I don't think there's anything per se that's preventing packaging 3.6.0 in the PPA.  We're going to release 3.6.1 Real Soon Now® to mitigate the issue with the RHS-Gluster client-side RPMs that are in RHEL and CentOS, so to keep things simple it's been decided not to package 3.6.0 at all for any distribution.
21:07 kkeithley_ jackdpeterson: ^^^
21:08 jackdpeterson Hey all, not sure if this is related ... but once I performed a gluster volume add-brick permissions are messed up and I'm getting nfs stale handle errors
21:13 ProT-0-TypE joined #gluster
21:14 jackdpeterson ^^ help requested on this now ^^ chmodding things is resulting in cannot read directory ... Stale NFS file handle
21:16 coredump joined #gluster
21:18 jackdpeterson *update -- removing the 2 new bricks resolved issues. *weird*
21:49 jackdpeterson How does one predict rebalance time -- I'm guessing that was the issue (not yet rebalanced after freshly adding the new bricks)
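A hedged sketch of the expand-and-rebalance sequence being discussed, with hypothetical hostnames and brick paths; on a replica 2 volume the new bricks are added as a pair, and rebalance reports progress as it runs rather than an up-front time estimate:

    # add one new brick per server, keeping the replica pair together
    gluster volume add-brick myvol server1:/bricks/b2 server2:/bricks/b2

    # spread existing data onto the new bricks, then watch progress
    gluster volume rebalance myvol start
    gluster volume rebalance myvol status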
21:50 Pupeno joined #gluster
21:58 badone joined #gluster
22:03 Pupeno_ joined #gluster
22:47 thermo44 joined #gluster
22:52 qubit left #gluster
23:00 MacWinner joined #gluster
23:08 social joined #gluster
23:34 calisto joined #gluster
23:47 thermo44 Hello! Is it wise to run 3 VMs per server to act as nodes, so they can give higher performance? Each VM would be assigned a RAID-5 array, so that can count as a brick, and I can have 3 nodes on a single server... The reason I ask is because I need a little more performance in writes....
