
IRC log for #gluster, 2014-12-08


All times shown according to UTC.

Time Nick Message
00:09 RicardoSSP joined #gluster
00:22 an joined #gluster
00:26 tetreis joined #gluster
00:33 an joined #gluster
00:48 devilspgd A random question, any advantage/disadvantage to IPv4 vs v6 from gluster's point of view? As far as I can tell, it looks like gluster is perfectly happy either way.
00:51 TrDS left #gluster
01:12 nishanth joined #gluster
01:25 lyang0 joined #gluster
01:25 glusterbot News from resolvedglusterbugs: [Bug 1130308] FreeBSD port for GlusterFS <https://bugzilla.redhat.com/show_bug.cgi?id=1130308>
01:40 an joined #gluster
01:46 topshare joined #gluster
02:13 haomaiwa_ joined #gluster
02:45 calisto joined #gluster
02:47 gstock_ joined #gluster
02:52 gstock_ gothos: I'm sorry, I don't understand your answer. For example, I have a samba server that mounts a gluster file system and then exports it. I mount the gluster volume as root, but the files on the system are owned by the users
02:53 gstock_ One other oddity. The output of "dpkg --list |grep gluster" is :
02:53 gstock_ ii  glusterfs-client                   3.6.1-1                       amd64        clustered file-system (client package)
02:53 msciciel joined #gluster
02:54 gstock_ ii  glusterfs-common                   3.6.1-1                       amd64        GlusterFS common libraries and translator modules
02:54 gstock_ but "gluster --version" reports
02:54 gstock_ glusterfs 3.4.1 built on Sep 29 2013 17:33:23
03:09 bharata-rao joined #gluster
03:10 kshlm joined #gluster
03:38 jaank_ joined #gluster
03:43 kanagaraj joined #gluster
03:46 msciciel joined #gluster
03:47 hagarth joined #gluster
03:53 RameshN joined #gluster
03:55 rjoseph joined #gluster
04:02 itisravi joined #gluster
04:09 calisto joined #gluster
04:12 an joined #gluster
04:15 atinmu joined #gluster
04:18 nishanth joined #gluster
04:22 ndarshan joined #gluster
04:37 anoopcs joined #gluster
04:43 bala joined #gluster
04:47 rjoseph joined #gluster
04:48 overclk joined #gluster
04:48 atinmu joined #gluster
04:50 rafi1 joined #gluster
04:54 jiffin joined #gluster
04:57 saurabh joined #gluster
05:08 vimal joined #gluster
05:10 atalur joined #gluster
05:12 poornimag joined #gluster
05:21 bala joined #gluster
05:23 kumar joined #gluster
05:23 sahina joined #gluster
05:24 prasanth_ joined #gluster
05:26 ppai joined #gluster
05:32 nshaikh joined #gluster
05:38 kanagaraj joined #gluster
05:40 atalur joined #gluster
05:44 maveric_amitc_ joined #gluster
05:49 kdhananjay joined #gluster
05:49 kanagaraj joined #gluster
05:50 meghanam joined #gluster
05:50 meghanam_ joined #gluster
05:51 meghanam joined #gluster
05:51 meghanam_ joined #gluster
05:55 rjoseph joined #gluster
05:56 bala joined #gluster
05:58 Humble joined #gluster
05:58 hagarth joined #gluster
06:07 karnan joined #gluster
06:11 atalur joined #gluster
06:11 poornimag joined #gluster
06:13 DV joined #gluster
06:22 kumar joined #gluster
06:33 JoeJulian gstock_: If the package is 3.6.1 and --version reports something different, you probably have a source install in /usr/local (or someplace in the path) that's found first.
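
    A quick way to confirm that theory (a minimal sketch; the package name is the Debian one seen above, paths will vary):
        # list every gluster binary found on the PATH; a /usr/local hit ahead of
        # /usr/sbin would explain the 3.4.1 vs 3.6.1 mismatch
        type -a gluster
        which -a glusterfs glusterfsd 2>/dev/null
        # compare against what the 3.6.1 package actually installed
        dpkg -L glusterfs-client | grep bin/
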
06:34 sac joined #gluster
06:34 JoeJulian devilspgd: Last time I tried, gluster didn't work with ipv6. I know some work was done in that area, so it may be good now. The only real advantage is the advantages that are inherent in the ipv6 design (and the fact that we've run out of ipv4 addresses).
06:37 devilspgd JoeJulian: In my case, I'm in a datacenter where I have a few advantages.
06:37 sahina joined #gluster
06:38 JoeJulian cool
06:39 devilspgd IPv4 traffic is billed unless I use separate private IPv4 space, which adds a lot of complication. And IPv6 is more predictable, no modifying firewall rules whenever I bring up a new machine.
06:39 devilspgd Sadly I found a couple of other issues with going exclusively IPv6, so I still have to deal with IPv4 on the internal network for now, and gluster will stay on IPv4 too until there's another chance to phase out the IPv4 private network.
06:40 poornimag joined #gluster
06:40 devilspgd I appreciate the answer though :)
06:41 JoeJulian Cool. Be sure and write up a blog article when you bring it up. I'm sure there's lots of people that would like to try it and would have more courage if they could read about someone else's success.
06:44 devilspgd One other randomish question, which will show my ignorance of NFS. With the gluster client, I can point at a DNS record that resolves to any instance of that particular brick and gluster client figures out the rest.
06:44 devilspgd Is there a similar approach for NFS? I have the impression pointing NFS clients at that same DNS record is a bad idea.
06:55 ctria joined #gluster
06:59 atalur joined #gluster
07:05 rgustafs joined #gluster
07:18 jtux joined #gluster
07:18 pcaruana joined #gluster
07:20 karnan joined #gluster
07:35 samkottler joined #gluster
07:56 atinmu joined #gluster
07:59 hagarth joined #gluster
08:06 poornimag joined #gluster
08:07 gothos JoeJulian: dunno if you remember my problem, but the `du' ran through and I have a difference of around 1.3TB measured with --apparent-size
08:08 gothos JoeJulian: and the `find' didn't show any directory in .glusterfs/xx/yy/xxyy
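
    For context, the checks gothos describes would look roughly like this (the brick path is a placeholder):
        # apparent size of the brick, for comparison against the volume's reported usage
        du -sh --apparent-size /data/brick1
        # look for plain directories three levels under .glusterfs; directories are
        # normally represented there as symlinks, so real directories would stand out
        find /data/brick1/.glusterfs -mindepth 3 -maxdepth 3 -type d
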
08:21 [Enrico] joined #gluster
08:25 anil joined #gluster
08:26 bharata-rao joined #gluster
08:28 LebedevRI joined #gluster
08:28 poornimag joined #gluster
08:35 rolfb joined #gluster
08:38 deniszh joined #gluster
08:38 rgustafs joined #gluster
08:40 nbalacha joined #gluster
08:42 ricky-ticky joined #gluster
08:42 vimal joined #gluster
08:43 aravindavk joined #gluster
08:43 hagarth joined #gluster
08:48 kovshenin joined #gluster
08:52 Slashman joined #gluster
08:55 hchiramm joined #gluster
08:56 TrDS joined #gluster
08:58 atinmu joined #gluster
08:59 poornimag joined #gluster
09:08 hybrid512 joined #gluster
09:15 ceol joined #gluster
09:17 ceol hi, I'm trying to develop a server monitoring tool with glusterfs and php on centos/linux
09:18 ceol but when I write a shell script to run gluster commands, it doesn't work
09:19 ceol they all say, "Connection failed. Please check if gluster daemon is operational."
09:19 ceol how can I solve this problem?
09:24 ninkotech joined #gluster
09:24 ninkotech_ joined #gluster
09:25 soumya joined #gluster
09:27 poornimag joined #gluster
09:30 ndevos ceol: you need to run many of the gluster commands as root
09:31 elico joined #gluster
09:31 ndevos ceol: are you aware of Glubix? Extensions to Zabbix: https://github.com/htaira/glubix
09:32 * ndevos wants to try that out one day
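
    If the monitoring script has to stay unprivileged, one common workaround is a narrow sudo rule; this is a sketch only, and the user name and command list are hypothetical:
        # /etc/sudoers.d/gluster-monitor (hypothetical)
        monitor ALL=(root) NOPASSWD: /usr/sbin/gluster peer status, /usr/sbin/gluster volume status
    The script would then call 'sudo gluster peer status' and so on instead of running gluster directly.
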
09:39 TrDS left #gluster
09:40 atalur joined #gluster
09:44 aravindavk joined #gluster
09:46 johndescs_ joined #gluster
09:48 cultav1x joined #gluster
10:15 mbukatov joined #gluster
10:16 aravindavk joined #gluster
10:17 mbukatov joined #gluster
10:17 atinmu joined #gluster
10:18 nbalacha joined #gluster
10:19 marbu joined #gluster
10:24 hagarth joined #gluster
10:26 feeshon joined #gluster
10:26 glusterbot News from newglusterbugs: [Bug 1171650] Gerrit: configure clickable links to bugzilla <https://bugzilla.redhat.com/show_bug.cgi?id=1171650>
10:31 karnan joined #gluster
10:36 Norky joined #gluster
10:43 sahina joined #gluster
10:52 bala joined #gluster
10:59 edward1 joined #gluster
11:05 ctria joined #gluster
11:10 calum_ joined #gluster
11:15 tetreis joined #gluster
11:16 rafi joined #gluster
11:20 marbu joined #gluster
11:21 an joined #gluster
11:23 ppai joined #gluster
11:30 kkeithley1 joined #gluster
11:36 aravindavk joined #gluster
11:36 karnan joined #gluster
11:40 soumya_ joined #gluster
11:47 bala joined #gluster
11:50 HACKING-FACEBOOK joined #gluster
11:50 sahina joined #gluster
11:54 meghanam joined #gluster
11:57 glusterbot News from newglusterbugs: [Bug 1147236] gluster 3.6.0 compatibility issue with gluster 3.3 <https://bugzilla.redhat.com/show_bug.cgi?id=1147236>
11:57 glusterbot News from newglusterbugs: [Bug 1171681] rename operation failed on disperse volume with glusterfs 3.6.1 <https://bugzilla.redhat.com/show_bug.cgi?id=1171681>
12:00 ricky-ticky1 joined #gluster
12:18 ppai joined #gluster
12:26 HACKING-FACEBOOK joined #gluster
12:26 calisto joined #gluster
12:27 itisravi_ joined #gluster
12:30 HACKING-FACEBOOK joined #gluster
12:38 itisravi joined #gluster
12:38 aravindavk joined #gluster
12:38 Peanut joined #gluster
12:39 Peanut I just did a 'gluster volume rebalance start', and now I get 'transport endpoint not connected' when trying to do an ls in /gluster :-( Seems my whole gluster just crashed?
12:42 gothos Hm. Anyone here using the centos 5 repo for glusterfs? Especially latest/3.6.1, since I'm getting a checksum error, and it has persisted at least since last Thursday
12:43 RicardoSSP joined #gluster
12:46 ppai joined #gluster
12:51 liquidat joined #gluster
12:52 chirino joined #gluster
12:56 rotbeard joined #gluster
12:57 glusterbot News from newglusterbugs: [Bug 1152956] duplicate entries of files listed in the mount point after renames <https://bugzilla.redhat.com/show_bug.cgi?id=1152956>
13:00 plarsen joined #gluster
13:03 anoopcs joined #gluster
13:04 Fen1 joined #gluster
13:16 B21956 joined #gluster
13:18 ndevos gothos: can you 'yum clean metadata packages' and try again? I just re-created the repodata
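
    Spelled out, that amounts to something like the following (the repo id 'glusterfs-epel' is taken from the stock .repo file and may differ locally):
        yum clean metadata
        yum clean packages
        yum --disablerepo='*' --enablerepo=glusterfs-epel list glusterfs
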
13:20 aravindavk joined #gluster
13:21 feeshon joined #gluster
13:34 itisravi joined #gluster
13:43 tdasilva joined #gluster
13:43 bene joined #gluster
13:45 plarsen joined #gluster
13:57 glusterbot News from newglusterbugs: [Bug 1105283] Failure to start geo-replication. <https://bugzilla.redhat.com/show_bug.cgi?id=1105283>
14:03 ghenry joined #gluster
14:06 bennyturns joined #gluster
14:12 ctria joined #gluster
14:15 gothos ndevos: still not working for me
14:15 gothos to clarify I am using: http://download.gluster.org/pub/gluster/glusterfs/3.6/3.6.1/CentOS/epel-5/x86_64 atm
14:15 itisravi joined #gluster
14:16 virusuy joined #gluster
14:16 virusuy joined #gluster
14:16 B21956 joined #gluster
14:16 Pablo joined #gluster
14:16 ndevos gothos: hmm, what version of glusterfs does yum find there?
14:17 gothos ndevos: none at all, it gives me an error and aborts: [Errno -3] Error performing checksum
14:18 ndevos gothos: maybe like 'yum list glusterfs' ?
14:19 gothos ndevos: nope, same error
14:20 ndevos gothos: and you are confident that the glusterfs repo is the issue?
14:24 * kkeithley1 wonders if the yum repo was mistakenly created with SHA256 instead of MD5
14:26 gothos ndevos: tbh, I wouldn't know what else it could be. I just redownloaded the repo and it's still a problem
14:27 gothos ndevos: but I just remembered... if you build the repo on a centos 6/7 or whatever machine the generated repodata might not be compatible
14:27 gothos since centos 5 needs SHA1 AFAIR
14:29 gothos ndevos: "-s sha"
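
    In other words, regenerating the repodata on a newer box with EL5-compatible checksums, roughly (the local path is a placeholder):
        # createrepo defaults to sha256 on EL6+, which EL5's yum cannot verify;
        # -s sha forces sha1 (md5 also works)
        createrepo -s sha /path/to/glusterfs/3.6/3.6.1/CentOS/epel-5/x86_64
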
14:29 Pablo joined #gluster
14:31 hagarth joined #gluster
14:33 kkeithley_ ndevos: you're on d.g.o. I don't want to step on your toes. epel5 repo has two probs, there are 3.6.1-1 and 3.6.1-4 RPMs there, and the wrong hash/signature was used. IIRC I used to use md5. Looks like we got SHA256.
14:35 ndevos kkeithley_: is having the two version in there a problem? I think we should only add versions and not delete older ones...
14:35 ndevos gothos: yes, got it, repodata has been recreated again - can you check?
14:36 kkeithley_ when I bumped the release I used to move the old tree to a save directory
14:37 gothos ndevos: it works now, thx!
14:37 kkeithley_ e.g. /var/www/html/pub/gluster/glusterfs/3.5/3.5.0/EPEL.repo/old/...
14:37 ndevos well, I thought to keep the old version so that people wanting to install the exact same version again could do that
14:38 itisravi joined #gluster
14:38 ndevos as in, they can still 'yum install ...' the version
14:38 ndevos but, I do not think that would be a common case, we can move them, if that is the general preference?
14:39 kkeithley_ okay, it seemed cleaner to only have the current NVR. I still don't know all the "ins and outs" of yum
14:40 samkottler joined #gluster
14:41 gothos hm, okay, got another error: glusterfs-libs-3.6.1-4.el5: Header V4 RSA/SHA1 signature: BAD, key ID 4ab22bb3
14:41 gothos same for glusterfs{,-fuse,-api}
14:42 gothos guess only V3 works?
14:43 gothos not sure tho, we don't use GPG internally
14:43 kkeithley_ Maybe the RPMs got signed as well? EL5 only knows DSA keys, not RSA. And we don't have a DSA signing key.
14:44 ndevos uh, yes, they are signed...
14:45 kkeithley_ Well, we have a DSA signing key. I never figured out how to get rpmsign to use it
14:45 ndevos gothos: you have gpgcheck enabled for that repo? it is off in the template: http://download.gluster.org/pub/gluster/glusterfs/3.6/3.6.1/EPEL.repo/glusterfs-epel.repo.el5
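
    For reference, a repo entry with signature checking disabled would look roughly like this; a sketch modelled on the linked template, not a verbatim copy of it:
        [glusterfs-epel]
        name=GlusterFS 3.6 for EL5
        baseurl=http://download.gluster.org/pub/gluster/glusterfs/3.6/3.6.1/CentOS/epel-5/$basearch/
        enabled=1
        gpgcheck=0
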
14:47 * ndevos gets a quick late lunch and will be back later
14:48 kovsheni_ joined #gluster
14:51 gothos ndevos: nope, I don't. I'm using that file as is
14:53 ndevos kkeithley_: any idea on that?
14:53 * ndevos really steps out now
14:54 nbalacha joined #gluster
15:01 wushudoin joined #gluster
15:02 julim joined #gluster
15:02 RicardoSSP joined #gluster
15:02 RicardoSSP joined #gluster
15:04 kshlm joined #gluster
15:07 lpabon joined #gluster
15:24 davemc joined #gluster
15:35 julim joined #gluster
15:35 kkeithley_ ndevos, gothos: yum --nogpgcheck ...
15:38 _shaps_ joined #gluster
15:40 ndevos kkeithley_: I think that would be the same as setting gpgcheck=0 in the .repo file?
15:40 jobewan joined #gluster
15:41 kkeithley_ I don't know what else to suggest
15:43 kkeithley_ short of repopulating the repo with unsigned RPMs
15:44 _dist joined #gluster
15:48 ndevos yeah, same here... I'll install a CentOS-5 and see if I get the same error
15:51 RameshN joined #gluster
15:51 coredump joined #gluster
15:51 kovshenin joined #gluster
15:53 rotbeard joined #gluster
15:55 coredump|br joined #gluster
15:57 kkeithley_ ndevos: `rpmsign --delsign` the EL5 rpms
15:59 ndevos kkeithley_: ah, okay, that's easy - repodata has been regenerated too
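
    Roughly what that means on the repo server (the path is a placeholder):
        cd /srv/download/glusterfs/3.6/3.6.1/CentOS/epel-5/x86_64
        # strip the RSA signatures that EL5's rpm cannot verify
        rpmsign --delsign *.rpm
        # and rebuild the repodata with EL5-friendly checksums
        createrepo -s sha .
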
16:00 ndevos gothos: care to try again?
16:01 feeshon joined #gluster
16:02 meghanam joined #gluster
16:05 meghanam joined #gluster
16:05 meghanam_ joined #gluster
16:13 bennyturns joined #gluster
16:23 soumya_ joined #gluster
16:25 gothos ndevos: just got home from work, gimme a sec
16:26 kkeithley_ I've figured out how to sign EL5 rpms with the DSA key
16:26 gothos ndevos: I'm again getting the checksum error
16:26 gothos awesome :D
16:27 lmickh joined #gluster
16:29 kkeithley_ did you do a `yum clean all` first?  ndevos regenerated the repo metadata, but yum only refreshes that occasionally unless you clean first
16:32 gothos kkeithley_: yeah I did, just recheck to be sure
16:32 gothos *rechecked
16:32 kkeithley_ okay, let me check
16:33 feeshon joined #gluster
16:34 kkeithley_ oops, wrong sums again. ;-(
16:37 kkeithley_ new repodata again, this time with md5 sums. Should work for you now
16:37 T3 joined #gluster
16:38 rafi1 joined #gluster
16:42 vimal joined #gluster
16:45 gothos kkeithley_: yeah, it's working now, thanks a lot!
16:48 _br_ joined #gluster
16:50 kkeithley_ yw
16:53 _pol joined #gluster
16:58 glusterbot News from resolvedglusterbugs: [Bug 1018178] Glusterfs ports conflict with qemu live migration <https://bugzilla.redhat.com/show_bug.cgi?id=1018178>
17:06 PeterA joined #gluster
17:09 calisto joined #gluster
17:14 chirino joined #gluster
17:23 TrDS joined #gluster
17:43 kmai007 joined #gluster
17:44 kmai007 all
17:45 kmai007 I forcefully detached a storage node without shrinking the volume first, and now the entire gluster cluster is down
17:45 kmai007 here is the output from a storage server and its debug output: http://fpaste.org/157712/41806070/
17:46 kmai007 where shall I start to try to salvage this... I looked at /var/lib/glusterd/peers and found it was all f'd up, so I copied what I wanted to be the peers among the servers
17:46 kmai007 within the group
17:47 JoeJulian Yep... "Unable to find friend: omhq1436" of which a volume still uses that as a brick.
17:48 JoeJulian You'll need to add that machine back into the peer group. The only way I can think of to do that is to create a peer file for it following the template of the remaining peer files. Add it to all the servers (but itself, of course) and start it back up.
17:49 JoeJulian No server should have itself in /var/lib/glusterd/peers.
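
    For illustration, a peer file (named after the missing peer's UUID) generally looks something like the following; the UUID here is a placeholder, and state=3 normally corresponds to "Peer in Cluster":
        # /var/lib/glusterd/peers/5f6e7a2c-1234-4d6e-9abc-0123456789ab
        uuid=5f6e7a2c-1234-4d6e-9abc-0123456789ab
        state=3
        hostname1=omhq1436
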
17:49 kmai007 thanks JoeJulian
17:49 kmai007 where can i find the peer ID name, if its removed ?
17:50 kmai007 some where in the .vol files?
17:52 JoeJulian /var/lib/glusterd/glusterd.info
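
    That is, the detached node's own UUID can be read back on that node (placeholder value shown):
        cat /var/lib/glusterd/glusterd.info
        UUID=5f6e7a2c-1234-4d6e-9abc-0123456789ab
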
17:52 kmai007 bam! thats right
17:52 JoeJulian what do I win?
17:53 kmai007 can i send you a beer? via amazon
17:53 JoeJulian Hehe
17:54 kmai007 that worked on 1 peer
17:54 kmai007 amazing
18:10 tuxxie joined #gluster
18:17 tuxxie I am planning on moving my current nfs server to a glusterfs replica.  I am looking for documentation related to scoping out hardware.
18:17 kmai007 tuxxie: maybe this can help you https://access.redhat.com/articles/66206
18:29 kmai007 does anyone have a proven process to reduce a volume / remove a storage server? I apparently cannot get it done correctly
18:43 JoeJulian ~pasteinfo | kmai007
18:43 glusterbot kmai007: Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
19:02 lpabon joined #gluster
19:10 chirino joined #gluster
19:19 cfeller joined #gluster
19:21 chirino joined #gluster
19:28 ricky-ticky joined #gluster
19:32 dgandhi joined #gluster
19:35 dgandhi greetings all, why does gluster not like a mount point as a brick root? is this just to avoid mistaking an empty mount for the brick? Is using btrfs subvolumes (detected as mountpoints) considered problematic for some other reason?
19:37 Peanut I think it has to do with setting attributes on the directory.
19:37 JoeJulian dgandhi: If your brick fails to mount, the directory for that mount will still exist. If that mount was for a replicated volume, the data from the good brick would be happily replicated to the root partition. If it's a subdirectory and the brick didn't mount, then the directory won't exist and the brick process will fail to start.
19:39 dgandhi JoeJulian: so in my use case with btrfs, if the drive does not mount then the subvolume is not a valid route, and would solve the same problem as having a subdir ?
19:39 Peanut JoeJulian: Doesn't have to do with attributes at all? (Just curious, because that's what I always thought).
19:40 JoeJulian dgandhi: Seems valid.
19:41 JoeJulian Peanut: nope
19:41 Peanut JoeJulian: Ok, thanks.
19:42 JoeJulian The whole thing seems odd to me. I thought that they fixed that with the trusted.glusterfs.volume_id attribute.
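
    The layout that falls out of this discussion - brick directory one level below the mount, volume id stamped on it as an xattr - looks roughly like this (device, paths and volume name are placeholders):
        mount /dev/sdb1 /bricks/brick1            # the filesystem (or btrfs subvolume)
        mkdir -p /bricks/brick1/data              # brick root one level below the mount point
        gluster volume create myvol replica 2 \
            server1:/bricks/brick1/data server2:/bricks/brick1/data
        # the volume id attribute JoeJulian refers to:
        getfattr -n trusted.glusterfs.volume_id -e hex /bricks/brick1/data
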
19:43 ndevos kkeithley++ thanks for figuring out the rpm signing!
19:43 glusterbot ndevos: kkeithley's karma is now 21
19:44 Peanut Ooh, we have karma now?
19:44 Peanut I did a 'gluster volume rebalance start' today, and it caused all my gluster mountpoints to die, had to take down and restart all my VMs :-(
19:44 JoeJulian Yeah, apparently people are in to that sort of thing so by popular demand, I added it
19:45 JoeJulian Peanut: yep, that's what rebalance is really good for still.
19:47 Peanut JoeJulian: The rebalance did work, and 'only' one of the VMs seemed corrupted afterwards. Still, quite a scare and some sweaty palms for a while there. Fortunately the LDAP servers kept running as they had everything in memory, same for the nameservers.
19:55 davemc joined #gluster
20:08 chirino joined #gluster
20:17 zerick joined #gluster
20:21 deniszh joined #gluster
20:33 _dist joined #gluster
20:34 chirino joined #gluster
20:34 LebedevRI left #gluster
20:42 dberry joined #gluster
20:45 ricky-ticky joined #gluster
20:46 dberry I am seeing a lot of remote operation failed:stale nfs file handle errors in the logs
20:46 dberry the files themselves are out of sync on the clients(different size files)
20:47 dberry running 3.3.1
20:52 tdasilva joined #gluster
20:59 dgandhi I don't see anything in the wiki for Distributed-Striped-Replicate volumes, there is a redhat page that calls it a "Technology Preview" - as of 3.4 how alpha is this feature ?
21:01 Slashman joined #gluster
21:02 semiosis dgandhi: repstr - http://www.gluster.org/community/documentation/index.php/WhatsNew3.3
21:03 semiosis actually, this is better: https://github.com/gluster/glusterfs/blob/master/doc/admin-guide/en-US/markdown/admin_setting_volumes.md
21:04 semiosis https://github.com/gluster/glusterfs/blob/master/doc/admin-guide/en-US/markdown/admin_setting_volumes.md#creating-distributed-striped-replicated-volumes
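
    The example in the linked guide boils down to something like this (8 bricks: stripe 2 x replica 2 x distribute 2; server and export names are placeholders):
        gluster volume create test-volume stripe 2 replica 2 transport tcp \
            server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4 \
            server5:/exp5 server6:/exp6 server7:/exp7 server8:/exp8
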
21:06 chirino joined #gluster
21:08 side_control joined #gluster
21:19 julim joined #gluster
21:22 chirino joined #gluster
21:29 glusterbot News from resolvedglusterbugs: [Bug 1080296] The child directory created within parent directory ( on which the quota is set ) shows the entire volume size, when checked with "df" command. <https://bugzilla.redhat.com/show_bug.cgi?id=1080296>
21:36 virusuy joined #gluster
21:50 elyograg joined #gluster
21:51 elyograg I have a gluster install with two volumes.  Six of the brick servers are used for one volume, the other two brick servers are used for the other volume.  I'd like to turn off the second volume and power down those two servers, but I'd prefer if I didn't have to delete the volume.  Is it possible to achieve this?
21:52 elyograg it would be very good if I could power up the servers and re-enable the volume, just in case it turns out we need additional data from it.
21:53 elyograg I see that I can stop a volume ... is that all I'd need to do?
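
    For reference, the stop/start commands themselves look like this (volume name is a placeholder; whether stopping alone is sufficient is the open question above):
        gluster volume stop coldvol      # take it offline; the data stays on the bricks
        # ... power down the two brick servers ...
        # later, if the data is needed again:
        gluster volume start coldvol
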
21:54 B21956 left #gluster
22:02 chirino joined #gluster
22:05 jaank joined #gluster
22:13 badone joined #gluster
22:35 badone joined #gluster
22:36 chirino joined #gluster
22:42 bene joined #gluster
23:03 n-st joined #gluster
23:06 gildub joined #gluster
23:08 davemc joined #gluster
23:12 daMaestro joined #gluster
23:21 plarsen joined #gluster
23:33 jaank joined #gluster
23:35 elico joined #gluster
23:40 kmai007 can someone remind me
23:40 kmai007 patch the fuse clients first, then the storage, or the storage first, then the clients last?
23:41 JoeJulian @upgrade notes
23:41 glusterbot JoeJulian: I do not know about 'upgrade notes', but I do know about these similar topics: '3.3 upgrade notes', '3.4 upgrade notes'
23:41 JoeJulian @3.4 upgrade notes
23:41 glusterbot JoeJulian: http://vbellur.wordpress.com/2013/07/15/upgrading-to-glusterfs-3-4/
23:41 JoeJulian ... I think...
23:41 JoeJulian Yep, servers first according to hagarth
23:41 kmai007 oh man
23:42 kmai007 uh oh
23:42 kmai007 looks like i'm behind my patch team
23:42 kmai007 they've patched the clients before I could get to my storage
23:43 JoeJulian I don't think it's that big of a problem.
23:43 kmai007 you're right
23:43 kmai007 but it appears
23:43 kmai007 that the df -h on the fuse client
23:43 kmai007 that is on 3.5.3
23:44 kmai007 and the storage is still on 3.4.2 does not honor the quota
23:44 kmai007 or displays it correctly
23:44 JoeJulian The whole thing should operate at a rpc version that's compatible with the least installed. Once upgraded, you can set the op-version to something higher.
23:44 TrDS left #gluster
23:44 JoeJulian I wish it was just an election process instead of a manual update.
23:45 kmai007 it appears, that the case is true even on a 3.5.3 fuse client
23:45 JoeJulian Maybe I should file a bug report...
23:45 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
23:45 kmai007 yep, was gonna jump on that
23:45 kmai007 just wanted to make sure I didn't fub up the mount, but the df -h display is consistent, it's showing the entire brick size
23:46 JoeJulian Yay. Another user with a quota problem.
23:46 * JoeJulian hates quota.
23:46 kmai007 i hope its just b/c of the inflight version
23:46 JoeJulian me too
23:46 JoeJulian I think it might be.
23:47 JoeJulian someone else's quota problem was fixed in 3.5 (or was it 3.6...) by upping the op-version.
23:48 kmai007 i think it is
23:48 kmai007 how do you "up the op-version"
23:49 kmai007 this is the 1st ive heard of this
23:51 kmai007 JoeJulian: it's not a bug, I was able to mount the same volume on a fuse-client with the matching glusterfs --version of 3.4.2 and the quota was respected
23:51 semiosis https://botbot.me/freenode/gluster/2014-11-24/?msg=26136108&page=2
23:51 semiosis kmai007: ^
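
    For the record, the linked discussion is about raising the cluster-wide op-version once every node runs the newer release; if memory serves the command in 3.5+ is along these lines, and the current value can be read from the operating-version line in /var/lib/glusterd/glusterd.info:
        gluster volume set all cluster.op-version 30501
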
23:55 JoeJulian Which still bugs me because http://www.gluster.org/community/documentation/index.php/Features/Opversion claims it should be transparent.
23:57 kmai007 so, how does one conclude what op-version he is operating on?
23:58 kmai007 nevermind
23:59 kmai007 so if i read the table correctly, then 3.5.1 = 30501 =  family of 3.5's
23:59 kmai007 or is it 3.5.3 = 30503 ?
