
IRC log for #gluster, 2014-05-22


All times shown according to UTC.

Time Nick Message
00:00 MugginsO joined #gluster
00:00 MugginsO I love diagnosing urgent server problems over a flaky home VPN link :-/
00:04 MugginsO joined #gluster
00:12 MugginsM restarted both sets of server daemons (except NFS) and it looks like it's recovering
00:12 MugginsM NewRelic for the win :-/
00:34 verdurin joined #gluster
00:36 john3213 joined #gluster
00:41 john3213 left #gluster
00:46 DV joined #gluster
00:50 martinitime1975 joined #gluster
00:51 martinitime1975 hi all, I'm hoping someone can provide some guidance with a gluster problem
00:52 martinitime1975 I installed gluster 3.5 on CentOS 6.4 servers and clients.  Volume creates, clients can mount, read/write no problem.  The problem occurs when I turn on quotas for a particular folder
00:52 martinitime1975 clients cannot write to the folder with the quota
00:54 martinitime1975 I tried adding the features.limit-usage options, which returns "success", but it doesn't show up in the 'gluster volume info' output
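(For context: GlusterFS 3.5 manages directory quotas through the gluster volume quota subcommand rather than by setting features.limit-usage directly with volume set, which may be why the option never shows up in volume info. A minimal sketch, with volume and directory names as placeholders:

    # enable quota on the volume, then set a per-directory limit
    gluster volume quota myvol enable
    gluster volume quota myvol limit-usage /shared 10GB
    # list the configured limits to verify
    gluster volume quota myvol list
)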
01:01 vpshastry joined #gluster
01:08 hagarth joined #gluster
01:10 gmcwhistler joined #gluster
01:17 martinitime1975 left #gluster
01:17 mjsmith2 joined #gluster
01:30 haomaiwa_ joined #gluster
01:35 bala joined #gluster
01:48 chirino joined #gluster
01:58 sjm joined #gluster
02:00 n0de_ joined #gluster
02:09 vimal joined #gluster
02:24 msciciel joined #gluster
02:25 jvandewege_ joined #gluster
02:26 theron joined #gluster
02:35 bala joined #gluster
02:36 DV joined #gluster
02:37 ceiphas_ joined #gluster
02:43 bharata-rao joined #gluster
02:47 theron joined #gluster
02:52 theron joined #gluster
02:54 primechu_ joined #gluster
02:56 the-me_ joined #gluster
02:56 foobar__ joined #gluster
02:57 k3rmat_ joined #gluster
02:58 tryggvil_ joined #gluster
02:58 jcsp1 joined #gluster
02:58 sage___ joined #gluster
02:58 partner joined #gluster
02:58 txmoose_ joined #gluster
02:58 xymox_ joined #gluster
02:58 sauce_ joined #gluster
02:58 saltsa joined #gluster
02:59 chirino_m joined #gluster
03:03 sjm joined #gluster
03:03 sijis joined #gluster
03:03 sijis joined #gluster
03:04 sijis joined #gluster
03:04 sjm joined #gluster
03:04 jiffe98 joined #gluster
03:04 flowouffff joined #gluster
03:06 hchiramm_ joined #gluster
03:08 sulky joined #gluster
03:09 siel joined #gluster
03:17 kshlm joined #gluster
03:24 MugginsM joined #gluster
03:27 akay hi, does anyone have any experience with gluster rebalance failures?
03:27 MugginsM had them, haven't fixed them :)
03:27 akay haha
03:31 akay mine has turned folders into files... great fun :)
03:31 MugginsM ouch
03:31 raghug joined #gluster
03:33 akay but it's pretty strange, it only shows the folders as files from one node, another looks fine
03:33 RameshN joined #gluster
03:34 kanagaraj joined #gluster
03:36 marmalodak joined #gluster
03:44 shubhendu joined #gluster
03:48 rastar joined #gluster
03:52 badone joined #gluster
03:52 DV joined #gluster
04:00 nishanth joined #gluster
04:06 bala joined #gluster
04:07 akay joined #gluster
04:12 harish joined #gluster
04:15 vpshastry joined #gluster
04:19 ndarshan joined #gluster
04:21 davinder joined #gluster
04:28 kshlm joined #gluster
04:30 dusmant joined #gluster
04:35 haomaiwa_ joined #gluster
04:38 saurabh joined #gluster
04:40 XpineX_ joined #gluster
04:42 sahina joined #gluster
04:54 bharata-rao joined #gluster
04:54 bala joined #gluster
04:58 gdubreui joined #gluster
05:02 ppai joined #gluster
05:02 prasanthp joined #gluster
05:04 kshlm joined #gluster
05:05 ctria joined #gluster
05:09 kdhananjay joined #gluster
05:13 chirino joined #gluster
05:16 lalatenduM joined #gluster
05:19 shylesh__ joined #gluster
05:19 ctria joined #gluster
05:21 psharma joined #gluster
05:22 gtobon joined #gluster
05:24 gtobon I just upgraded from 3.3 to 3.5. All works fine except for geo-replication; I get the following error from: gluster volume geo-replication gv0_shares nfs1.prod1.whispir.com::/data start
05:24 gtobon One or more nodes do not support the required op version. geo-replication command failed
05:24 hagarth joined #gluster
05:31 Pupeno joined #gluster
05:32 kanagaraj joined #gluster
05:36 raghug joined #gluster
05:38 ravindran1 joined #gluster
05:48 lalatenduM gtobon, geo-rep implementation has changed in 3.5 and there are separate upgrade steps; check https://github.com/gluster/glusterfs/blob/release-3.5/doc/upgrade/geo-rep-upgrade-steps.md
05:48 glusterbot Title: glusterfs/doc/upgrade/geo-rep-upgrade-steps.md at release-3.5 · gluster/glusterfs · GitHub (at github.com)
05:49 lalatenduM gtobon, also check this https://github.com/gluster/glusterfs/blob/release-3.5/doc/admin-guide/en-US/markdown/admin_distributed_geo_rep.md
05:49 glusterbot Title: glusterfs/doc/admin-guide/en-US/markdown/admin_distributed_geo_rep.md at release-3.5 · gluster/glusterfs · GitHub (at github.com)
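(For reference: this op-version error usually means at least one peer in the pool is still running an older glusterd. A quick way to compare, assuming default paths; all peers must report a sufficiently new operating-version before 3.5 geo-replication will start:

    # run on every node in the trusted pool and compare the values
    grep operating-version /var/lib/glusterd/glusterd.info
)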
05:50 XpineX__ joined #gluster
05:58 vimal joined #gluster
06:00 kshlm joined #gluster
06:04 gtobon I did all the steps in these two guides.
06:04 gtobon but I'm still getting the op-version error
06:05 lalatenduM gtobon, msvbhat might help u
06:05 raghu joined #gluster
06:12 nshaikh joined #gluster
06:14 glusterbot New news from newglusterbugs: [Bug 1086774] Add documentation for the Feature: Access Control List - Version 3 support for Gluster NFS <https://bugzilla.redhat.com/show_bug.cgi?id=1086774>
06:15 meghanam joined #gluster
06:20 raghug joined #gluster
06:21 gdubreui joined #gluster
06:23 rahulcs joined #gluster
06:26 _Bryan_ joined #gluster
06:34 ctria joined #gluster
06:35 ricky-ti1 joined #gluster
06:37 kdhananjay1 joined #gluster
06:40 raghug joined #gluster
06:46 davinder2 joined #gluster
06:50 bharata-rao joined #gluster
06:50 ramteid joined #gluster
07:01 hagarth joined #gluster
07:02 hchiramm_ joined #gluster
07:09 DV joined #gluster
07:13 keytab joined #gluster
07:13 [o__o] joined #gluster
07:17 rgustafs joined #gluster
07:18 verdurin joined #gluster
07:23 hybrid512 joined #gluster
07:24 Pupeno joined #gluster
07:29 ppai joined #gluster
07:30 edward2 joined #gluster
07:31 ricky-ticky1 joined #gluster
07:34 itisravi joined #gluster
07:38 rgustafs joined #gluster
07:45 chirino_m joined #gluster
07:46 fsimonce joined #gluster
07:48 DV joined #gluster
07:52 kdhananjay joined #gluster
07:54 ppai joined #gluster
07:59 TvL2386 joined #gluster
08:00 edward2 joined #gluster
08:04 hchiramm_ joined #gluster
08:17 liquidat joined #gluster
08:20 social joined #gluster
08:27 ProT-0-TypE joined #gluster
08:32 ngoswami joined #gluster
08:42 haomaiwang joined #gluster
08:47 raghug joined #gluster
08:58 Slashman joined #gluster
09:04 andreask joined #gluster
09:04 andreask joined #gluster
09:05 tryggvil joined #gluster
09:09 davinder4 joined #gluster
09:15 glusterbot New news from newglusterbugs: [Bug 1086781] Add documentation for the Feature: Eager locking <https://bugzilla.redhat.com/show_bug.cgi?id=1086781> || [Bug 1100204] brick failure detection does not work for ext4 filesystems <https://bugzilla.redhat.com/show_bug.cgi?id=1100204>
09:19 mbukatov joined #gluster
09:22 Rikkol joined #gluster
09:28 Rikkol left #gluster
09:33 mdavidson joined #gluster
09:36 ppai joined #gluster
09:41 muhh left #gluster
09:42 dusmant joined #gluster
09:45 glusterbot New news from newglusterbugs: [Bug 1095595] Stick to IANA standard while allocating brick ports <https://bugzilla.redhat.com/show_bug.cgi?id=1095595>
09:56 ira joined #gluster
09:59 vpshastry1 joined #gluster
10:05 rwheeler joined #gluster
10:06 karimb joined #gluster
10:08 ravindran1 joined #gluster
10:13 rastar joined #gluster
10:14 yinyin joined #gluster
10:15 bala joined #gluster
10:17 ccha2 hello ndevos
10:18 ccha2 I saw your post about aux gids
10:18 chirino joined #gluster
10:19 ccha2 from the client, with last version, the limitation is 32 groups right ?
10:19 ccha2 I tested and I can't pass 32 groups
10:20 ndevos ccha2: yes, that is correct
10:21 ndevos ccha2: you can use NFS if you need more (but < 93) groups
10:21 ndevos ccha2: for nfs you would need to set nfs.server-aux-gids iirc, but maybe that was in my email too?
10:22 ccha2 but with NFS, you need to get groups from server side, right ?
10:22 ndevos yes
10:22 ccha2 that's not a good solution, because you might create a few volumes for different client groups with different users and groups
10:23 ccha2 how can you manage this on the server side ?
10:24 milu joined #gluster
10:24 ndevos with any of the current solutions the groups will be resolved server-side...
10:24 milu hi all
10:25 ndevos ccha2: you really have systems that mount a volume and depending on the client-system, certain users have different groups than on other client-systems?
10:25 ccha2 yesyes that's it
10:25 milu I read that iscsi can be used to gluster using the libgfapi
10:26 milu which iscsi implementation do I have to use?
10:26 milu is there any documentation?
10:26 ndevos ccha2: hmm, that is not a use-case that has been considered for now :-/
10:27 ccha2 for example, I have a volume for development and another volume for test, and mount from different clients which have the same users but not the same groups
10:31 ndevos milu: see https://www.gluster.org/2013/12/libgfapi-and-the-linux-target-driver/ -> https://forge.gluster.org/gfapi-module-for-linux-target-driver-
10:31 glusterbot Title: gfapi module for Linux Target Driver - Gluster Community Forge (at forge.gluster.org)
10:32 milu Yes, I'm there
10:32 ndevos ccha2: yeah, I understand how it can be used, but the more common usage seems to be to have a single infrastructure (LDAP, ..) for users and groups too
10:35 ccha2 we have sql to manage these groups, there are 2 databases
10:36 ccha2 so if managing gids were on the client side, that would be ok
10:41 ndevos well, the problem is that none of the network protocols support sending more groups: NFS itself is limited to 16 groups, GlusterFS to +/- 93 (but the FUSE limit of 32 is effective)
10:42 ndevos ccha2: you may get around it when you use Samba + vfs_glusterfs and mount over CIFS, but I'm not sure if samba restricts the number of groups somewhere
10:43 ndevos ccha2: that would give you +/- 93 groups, limited by the GlusterFS protocol (or a lower limit in Samba)
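(A minimal sketch of the NFS-side workaround ndevos describes, using a hypothetical volume name; note that clients have to remount the NFS export before it takes effect, as ccha2 discovers later in the log:

    # let the gluster NFS server resolve auxiliary groups server-side
    gluster volume set myvol nfs.server-aux-gids on
)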
10:43 milu regarding the iscsi...
10:43 milu do I have to create the file or create a logical volume?
10:44 ndevos milu: I have no idea, never tried it...
10:44 * ndevos will step out for a bit, he'll be back later
10:45 haomaiwa_ joined #gluster
10:45 milu ok
10:46 milu I got it
10:48 chirino_m joined #gluster
10:53 coredumb Hello folks
10:53 coredumb how does one manage rights on gluster NFS shares ?
11:01 kkeithley1 joined #gluster
11:02 Pavid7 joined #gluster
11:06 milu coredumb
11:06 milu this is related to groups/users
11:06 milu and posix rights
11:06 milu I guess you can also use posix acl's
11:06 vpshastry1 joined #gluster
11:07 coredumb milu: ok so I can't set up non-root squashing and the like
11:08 milu yes, of course
11:09 milu this is done at "gluster volume set"
11:09 coredumb http://www.gluster.org/community/documentation/index.php/Translators/features#Translator_features.2Ffilter < with this ?
11:09 glusterbot Title: Translators/features - GlusterDocumentation (at www.gluster.org)
11:09 coredumb maybe it's me lost on the website but i can't seem to find all possible options available
11:10 coredumb using volume set :)
11:10 milu http://gluster.org/community/documentation/index.php/Gluster_3.2:_Setting_Volume_Options
11:10 glusterbot Title: Gluster 3.2: Setting Volume Options - GlusterDocumentation (at gluster.org)
11:11 milu (thanks bot!) :)
11:12 coredumb yeah this is what i found
11:13 coredumb you tell me there's nothing new since 3.2 ? and there's no root squashing thing there
11:13 coredumb or am i blind ?
11:15 milu the docs say that you can enable it in the config file
11:15 milu I see a lot of references to a bug where the cli does not allow that configuration
11:15 coredumb ok so using the filter
11:18 chirino joined #gluster
11:20 coredumb milu: another question, http://download.gluster.org/pub/gluster/glusterfs/doc/HA%20and%20Load%20Balancing%20for%20NFS%20and%20SMB.html this states that it's not a good idea to run NFS and CIFS at the same time
11:20 coredumb is that still the case or is it safe ?
11:21 ricky-ticky joined #gluster
11:23 milu found: volume set dfs root-squash enable
11:24 milu dfs is my volume
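(A hedged note: in most releases the option milu found is namespaced under server.*, so against his volume name dfs the command would look roughly like:

    # enable root squashing for the volume (option name may vary by release)
    gluster volume set dfs server.root-squash on
)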
11:27 gdubreui joined #gluster
11:28 coredumb milu: oh thanks oi'll note that :D
11:29 coredumb any idea about CIFS ?
11:29 ctria joined #gluster
11:31 coredumb milu: also about the best practices - i know a lot of questions
11:32 coredumb is it advised to hot increase disk size of a brick to increase volume size ?
11:35 karimb joined #gluster
11:39 hchiramm_ joined #gluster
11:45 ricky-ticky1 joined #gluster
11:45 glusterbot New news from newglusterbugs: [Bug 1100262] info file missing from /var/lib/glusterd/vols/ . Causes crash <https://bugzilla.redhat.com/show_bug.cgi?id=1100262> || [Bug 1100251] With glusterfs update to the latest version the existing hook scripts are not saved as rpm save. <https://bugzilla.redhat.com/show_bug.cgi?id=1100251>
11:59 harish_ joined #gluster
12:02 ppai joined #gluster
12:08 andreask joined #gluster
12:09 rahulcs joined #gluster
12:10 itisravi joined #gluster
12:15 rahulcs joined #gluster
12:15 glusterbot New news from newglusterbugs: [Bug 1086749] Add documentation for the Feature: Exposing Volume Capabilities <https://bugzilla.redhat.com/show_bug.cgi?id=1086749>
12:17 yinyin_ joined #gluster
12:18 diegows joined #gluster
12:25 B21956 joined #gluster
12:26 cvdyoung Good morning, how does glusterfs manage the locking of files?  If a file lock is present, and a heal wants to heal that file, it can't right?  What happens if the lock cannot be released and that file never is healed?  Is there a service/procedure to clear the lock for the heal to continue?  Thanks in advance.
12:30 ccha2 ndevos: I added nfs.server-aux-gids: on , It seems nothing changed
12:30 ccha2 permissions for groups are on client side
12:31 ccha2 I tried to stop and start the volume and same thing
12:33 karimb joined #gluster
12:34 pdrakeweb joined #gluster
12:40 pdrakeweb joined #gluster
12:46 sroy joined #gluster
12:50 lpabon joined #gluster
12:54 theron joined #gluster
12:56 nshaikh joined #gluster
13:00 hagarth joined #gluster
13:01 tryggvil joined #gluster
13:03 bennyturns joined #gluster
13:05 Norky joined #gluster
13:11 jag3773 joined #gluster
13:13 raghug joined #gluster
13:15 ccha2 oops, it works
13:15 ccha2 I forgot to remount it as nfs :(
13:15 dusmantkp_ joined #gluster
13:17 tryggvil joined #gluster
13:21 cvdyoung Anyone know how to manually clear a lock on a file that's mounted over glusterfs?  I've had problems with a file lock that is set for healing, but the heal can't complete because the file's locked.  The lock never clears, so the heals start to stack up.  Restarting works, but that seems drastic to me.  "Everything looks like a nail, when you're the hammer"
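(For what it's worth, GlusterFS 3.3 and later ship a CLI for exactly this situation; a hedged sketch with placeholder names:

    # dump current lock state for inspection, then clear locks on one file
    gluster volume statedump myvol
    gluster volume clear-locks myvol /path/to/file kind all inode
)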
13:25 Ark joined #gluster
13:29 rahulcs joined #gluster
13:29 prasanth_ joined #gluster
13:30 japuzzo joined #gluster
13:37 kshlm joined #gluster
13:38 mjsmith2 joined #gluster
13:40 Pupeno joined #gluster
13:41 sjm joined #gluster
13:45 prasanthp joined #gluster
13:51 ndk joined #gluster
13:54 mmorsi1 joined #gluster
13:54 vpshastry joined #gluster
14:02 haomaiwa_ joined #gluster
14:02 vpshastry left #gluster
14:04 sauce joined #gluster
14:08 plarsen joined #gluster
14:09 DV joined #gluster
14:11 fullaware joined #gluster
14:14 rahulcs joined #gluster
14:16 wushudoin joined #gluster
14:20 sahina joined #gluster
14:21 tomased joined #gluster
14:21 rahulcs_ joined #gluster
14:22 karimb joined #gluster
14:23 fullaware left #gluster
14:25 rahulcs joined #gluster
14:27 ou812 joined #gluster
14:28 theron joined #gluster
14:29 olisch joined #gluster
14:31 ou812 anyone wanna help a n00b tune for small files?
14:31 olisch hi, is there a preferred upgrade path for updating from 3.2.6 to 3.5?
14:31 ou812 I'm reading tons of google hits and still confused where to focus efforts
14:32 lmickh joined #gluster
14:42 bennyturns joined #gluster
14:43 tryggvil joined #gluster
14:48 fishdaemon joined #gluster
14:49 fishdaemon Herrow
14:49 fishdaemon I need to change hostname on my two gluster machines
14:49 fishdaemon The article everyone is linking to is missing
14:49 fishdaemon http://community.gluster.org/q/a-replica-node-has-failed-completely-and-must-be-replaced-with-new-empty-hardware-how-do-i-add-the-new-hardware-and-bricks-back-into-the-replica-pair-and-begin-the-healing-process/
14:50 fishdaemon Can somone point me in the right direction ?
14:51 fishdaemon I was thinking first of just probing two new machines and then detaching the old ones, but that can cause data loss.  Is there some way of gracefully detaching a machine?
14:56 ramteid joined #gluster
14:57 ou812 left #gluster
14:59 shubhendu joined #gluster
15:02 coredumb where's the documentation for 3.5 new features like how to use network compression and client encryption ?
15:06 coredumb Ok seems in github
15:08 coredumb mmmmh can't find encryption though
15:12 lalatenduM joined #gluster
15:17 rahulcs joined #gluster
15:22 chirino_m joined #gluster
15:23 jiffe98 anyone exporting gluster to a windows machine?
15:23 bala joined #gluster
15:23 jiffe98 testing hosting web content and importing to iis via cifs but it seems to be really slow
15:24 jiffe98 takes about 30 seconds for a simple asp page to load
15:24 Pupeno joined #gluster
15:24 KennethWilke joined #gluster
15:25 lalatenduM jiffe98, it should not be so slow, are u using glusterfs vfs plugin for samba
15:25 lalatenduM @sambavfs
15:25 glusterbot lalatenduM: http://lalatendumohanty.wordpress.com/2014/02/11/using-glusterfs-with-samba-and-samba-vfs-plugin-for-glusterfs-on-fedora-20/
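(The smb.conf side of the vfs plugin in lalatenduM's link typically looks something like this; share and volume names are placeholders, and the linked post has the full walkthrough:

    [gluster-share]
        path = /
        vfs objects = glusterfs
        glusterfs:volume = myvol
        glusterfs:logfile = /var/log/samba/glusterfs-myvol.log
        glusterfs:loglevel = 7
)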
15:28 daMaestro joined #gluster
15:30 LoudNoises joined #gluster
15:30 vpshastry joined #gluster
15:33 ndevos jiffe98: small files are not really a sweet spot for gluster, but there are some things you can improve in case you use ,,(php) or the like
15:33 glusterbot jiffe98: (#1) php calls the stat() system call for every include. This triggers a self-heal check which makes most php software slow as they include hundreds of small files. See http://joejulian.name/blog/optimizing-web-performance-with-glusterfs/ for details., or (#2) It could also be worth mounting fuse with glusterfs --attribute-timeout=HIGH --entry-timeout=HIGH
15:33 glusterbot --negative-timeout=HIGH --fopen-keep-cache
15:33 jiffe98 I've got php running fine, takes a second or so to load
15:34 ndevos jiffe98: it is about .php files stored on gluster that can be an issue
15:35 ndevos well, and that probably counts for other scripting languages too
15:35 ceiphas joined #gluster
15:36 jiffe98 this is windows/asp so I imagine the problem would be worse
15:41 jiffe98 so just add attribute-timeout=HIGH,entry-timeout=HIGH,negative-timeout=HIGH,fopen-keep-cache to the options line in fstab?
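(For reference: HIGH in glusterbot's advice is a placeholder, not a literal value. A sample fstab entry with an arbitrary 600-second timeout; server, volume, and mountpoint are likewise placeholders:

    server1:/myvol /mnt/myvol glusterfs defaults,_netdev,attribute-timeout=600,entry-timeout=600,negative-timeout=600,fopen-keep-cache 0 0
)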
15:42 ceiphas joined #gluster
15:44 vpshastry left #gluster
15:47 olisch fishdaemon i had to rename my gluster bricks too
15:48 olisch i have been changing from ip address to hostname
15:48 olisch but it should be the same
15:48 olisch i stopped all gluster volumes and glusterd
15:48 olisch and sed the config files and renamed peer files from ipaddress to hostname
15:49 dblack joined #gluster
15:50 jiffe98 those options didn't appear to change anything
15:53 olisch fishdaemon: i used this script for renumbering: http://pastebin.com/VzDL98UW
15:53 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
16:01 sprachgenerator joined #gluster
16:02 ctria joined #gluster
16:03 ceiphas joined #gluster
16:07 Kainz joined #gluster
16:10 Kainz Hi all, I am trying to set up samba 4.1.6 with vfs glusterfs... all is working fine except I can't seem to get Windows ACLs working... it is working fine with unix users... There is Windows ACL support through glusterfs vfs, right ?? :)... thx for your time
16:11 asku left #gluster
16:11 jbd1 joined #gluster
16:15 jobewan joined #gluster
16:18 jiffe98 there an ubuntu version of the samba vfs?
16:19 kshlm joined #gluster
16:20 Kainz yeah
16:20 Kainz i am using it
16:21 Kainz https://launchpad.net/~monotek/+archive/samba-vfs-glusterfs  //ubntu
16:21 glusterbot Title: samba-vfs-glusterfs : André Bauer (at launchpad.net)
16:22 ceiphas joined #gluster
16:24 Mo___ joined #gluster
16:29 rahulcs joined #gluster
16:32 zaitcev joined #gluster
16:35 Kainz has anybody seen this problem before... and solved it ?? http://thr3ads.net/gluster-users/2010/08/477382-Gluster-Samba-ACL-issues
16:35 glusterbot Title: thr3ads.net - Gluster users - Gluster -> Samba ACL issues [Aug 2010] (at thr3ads.net)
16:48 sjusthome joined #gluster
16:55 chirino joined #gluster
16:58 chirino joined #gluster
17:06 JoeJulian Kainz: Have you followed https://forge.gluster.org/glusterfs-core/glusterfs/blobs/master/doc/admin-guide/en-US/markdown/admin_ACLs.md
17:06 glusterbot Title: doc/admin-guide/en-US/markdown/admin_ACLs.md - glusterfs in GlusterFS Core - Gluster Community Forge (at forge.gluster.org)
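(The short version of the guide JoeJulian links: both the brick filesystems and the client mounts need POSIX ACL support enabled. A minimal sketch with placeholder names:

    # server side: bricks mounted with the acl option (e.g. via fstab)
    mount -o remount,acl /bricks/brick1
    # client side: mount the volume with acl as well
    mount -t glusterfs -o acl server1:/myvol /mnt/myvol
)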
17:22 muhh joined #gluster
17:25 ndk joined #gluster
17:28 rahulcs joined #gluster
17:30 coredump joined #gluster
17:31 rahulcs joined #gluster
17:38 rahulcs joined #gluster
17:47 glusterbot New news from newglusterbugs: [Bug 1095596] doc: Stick to IANA standard while allocating brick ports <https://bugzilla.redhat.com/show_bug.cgi?id=1095596> || [Bug 1095594] Stick to IANA standard while allocating brick ports <https://bugzilla.redhat.com/show_bug.cgi?id=1095594>
17:54 skulker joined #gluster
17:56 rotbeard joined #gluster
17:56 sjoeboo joined #gluster
18:00 prasanthp joined #gluster
18:12 ricky-ticky joined #gluster
18:17 qdk_ joined #gluster
18:22 dblack joined #gluster
18:26 chirino_m joined #gluster
18:29 kmai007 joined #gluster
18:31 kmai007 what would be the best solution..... I created my gluster path as server1:/static, and now I want to change it to server1:/static/content? this is on a 2-node replicated setup....
18:31 kmai007 is it possible to do that while keeping the same volume name?
18:34 jiffe98 that looks like you are keeping the same volume name
18:35 jiffe98 you're just moving /* to /content/ ?
18:35 kmai007 it's not, I want the root place for file creation to be in /static/content instead of /static/
18:36 jiffe98 the volume name is "static" ?
18:36 kmai007 so essentially i want the .glusterfs to be inside /static/content/.glusterfs
18:36 jiffe98 oh I see
18:37 kmai007 i made my gluster root starting point the mount point
18:37 kmai007 which i believe is incorrect if i want to add more volumes
18:37 kmai007 to keep the data separated
18:37 coredumb why not ?
18:38 jbd1 kmai007: that's right.  Any chance you can just stop the cluster, change the volfiles, move the data, then start the cluster again?
18:38 jbd1 kmai007: I'd test that first in a lab :)
18:38 kmai007 jbd1: i agree, i can stop it, but if i stop it how can i modify the vol file?
18:39 jbd1 kmai007: edit files in /etc/glusterd?
18:39 kmai007 and then on the mount point /static just do: cd /static; mkdir content; mv * content/
18:40 jbd1 kmai007: and mv .glusterfs content/
18:40 leochill joined #gluster
18:40 kmai007 jbd1 I think it is in /var/lib/glusterd/vols
18:41 jbd1 kmai007: was just saying might also be worth perusing /var/lib/glusterd
18:41 jbd1 basically, just changing everything to say /static/content where it currently says /static
18:41 kmai007 jbd1 I think it is in /var/lib/glusterd/vols
18:41 kmai007 but there are tons of vol files for that volume
18:41 jbd1 kmai007: I have files in both locations
18:41 kmai007 gotcha
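(Pulling the thread together, a heavily hedged sketch of the sed-and-mv approach discussed above; untested, the volume name is a placeholder, and as jbd1 says, try it in a lab first:

    # with the volume and glusterd stopped on all nodes:
    sed -i 's|/static|/static/content|g' /var/lib/glusterd/vols/myvol/*.vol
    # brick paths are also recorded under /var/lib/glusterd/vols/myvol/bricks/
    # and in the info file, so check those too
    mkdir /static/content
    # move everything, including .glusterfs, except the new directory itself
    find /static -mindepth 1 -maxdepth 1 ! -name content -exec mv {} /static/content/ \;
)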
18:42 kmai007 oh man, if it were like 10GB i'd be all on it
18:42 kmai007 but its 1 TB,
18:42 coredumb question, is it advised to hot increase disk size of a brick to increase volume size ?
18:42 skulker left #gluster
18:42 kmai007 hot increase, like using LVM?
18:42 jbd1 coredumb: it shouldn't be any riskier than a normal hot disk grow
18:42 kmai007 lvextend
18:42 jbd1 coredumb: don't forget to increase the fs after you grow the vol
18:43 coredumb jbd1: indeed
18:43 semiosis coredumb: i do that on prod every ~6 months
18:43 semiosis coredumb: it's superb
18:43 jiffe98 I love windows error: 'A device attached to the system is not functioning'
18:43 coredumb semiosis: so no need to do some gluster voodoo after the resize2fs
18:43 coredumb or xfs remount
18:43 semiosis coredumb: though i'm not using lvm, i just swap out & grow the ebs vols
18:43 coredumb ?
18:43 semiosis no gluster voodoo
18:43 coredumb cool
18:44 coredumb and NFS shares see the size without any glitch
18:44 coredumb ?
18:44 semiosis after running out of inodes on ext4 recently i'll probably never use it for prod again
18:44 coredumb hehe
18:44 semiosis highly recommend xfs
18:44 coredumb yeah that's the plan
18:44 kmai007 fuse/nfs shares should see the growth,
18:44 kmai007 though today, on my clients
18:44 coredumb excellent
18:45 semiosis so it's xfs_growfs not resize2fs
18:45 kmai007 i saw a few that i had to unmount and remount to get the right df
18:45 kmai007 output
18:45 coredumb mmmh
18:45 coredumb semiosis: yeah hence my "xfs remount" cause i forgot the exact command :D
18:45 jiffe98 so using samba fuse, IIS seems to work fine albeit very slow.  Using samba vfs I can access the directory via windows explorer but IIS complains
18:46 jiffe98 and doesn't really give any indication as to why
18:47 semiosis coredumb: idk about nfs clients but fuse clients see the new space immeidately (they see the free space of the smallest replica) i assume nfs is the same
18:47 rahulcs joined #gluster
18:47 coredumb semiosis: ok
18:47 semiosis so if you have a 2x1 volume once you grow both bricks the new space will be available
18:47 coredumb semiosis: neat
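(The LVM-backed version of what jbd1 and semiosis describe, as a rough sketch; device, size, and mountpoint are placeholders:

    # grow the logical volume backing the brick, then the filesystem, online
    lvextend -L +100G /dev/vg0/brick1
    xfs_growfs /bricks/brick1    # for an ext4 brick: resize2fs /dev/vg0/brick1
)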
18:48 coredumb earlier today i read that NFS + CIFS is not advised for some cache issues
18:48 coredumb is that still true or is it old giberish ?
18:49 jiffe98 although I can copy the home directory of the website's path, stick it in ie, append default.html and it comes up fine
18:50 kmai007 i wish i could rename a volume like i rename an LV
18:58 sman joined #gluster
19:03 lpabon joined #gluster
19:08 gmcwhistler joined #gluster
19:12 SFLimey_ joined #gluster
19:18 Pupeno joined #gluster
19:22 gmcwhist_ joined #gluster
19:40 Igrsrolqak joined #gluster
19:44 ndk joined #gluster
19:44 jiqiren joined #gluster
19:45 Igrsrolqak left #gluster
19:46 XpineX joined #gluster
20:05 fyxim_ joined #gluster
20:09 XpineX_ joined #gluster
20:22 fishdaemon joined #gluster
20:22 JoeJulian kmai007: file a bug report
20:22 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
20:58 chirino joined #gluster
21:02 kmai007 JoeJulian: bug report can be used for enhancement requests?
21:03 JoeJulian kmai007: You bet!
21:03 kmai007 ok i will do that, thanks
21:03 kmai007 JoeJulian: do you have a chance to scroll up?
21:04 kmai007 i was asking what is the most efficient way to move my 'root' path of a gluster volume to a directory below without creating a new volume?
21:05 JoeJulian If you can have downtime, I like the sed and mv.
21:05 kmai007 do i need to modify gluster vols as well, on all storage nodes?
21:10 primechuck joined #gluster
21:21 siel joined #gluster
21:22 nueces joined #gluster
21:24 andreask joined #gluster
21:35 siel joined #gluster
21:48 siel joined #gluster
21:53 MugginsM joined #gluster
21:55 ira joined #gluster
22:03 tryggvil joined #gluster
22:25 ira joined #gluster
22:29 XpineX__ joined #gluster
22:29 sjm joined #gluster
23:01 chirino_m joined #gluster
23:44 rps joined #gluster
23:49 rps is there a best practice for what raid level (if any) to use on a glusterfs node? i see a lot of people going with raid 5 or 6?
