
IRC log for #gluster, 2014-12-29


All times shown according to UTC.

Time Nick Message
00:04 Pupeno joined #gluster
00:04 Pupeno joined #gluster
00:25 Pupeno joined #gluster
00:25 Pupeno joined #gluster
00:59 plarsen joined #gluster
01:07 diegows joined #gluster
01:23 fandi joined #gluster
01:45 fandi joined #gluster
01:58 fandi joined #gluster
02:01 harish_ joined #gluster
02:11 bala joined #gluster
02:26 lalatenduM joined #gluster
02:31 fandi joined #gluster
03:05 hagarth joined #gluster
03:14 anoopcs joined #gluster
03:27 suman_d_ joined #gluster
03:27 badone joined #gluster
03:33 fandi joined #gluster
03:35 glusterbot News from newglusterbugs: [Bug 1176011] Client sees duplicated files <https://bugzilla.redhat.com/show_bug.cgi?id=1176011>
03:38 kanagaraj joined #gluster
03:42 shubhendu joined #gluster
03:46 itisravi joined #gluster
03:49 elico joined #gluster
03:52 elico joined #gluster
04:06 atinmu joined #gluster
04:07 kdhananjay joined #gluster
04:14 nbalacha joined #gluster
04:21 RameshN joined #gluster
04:32 R0ok_ joined #gluster
04:36 glusterbot News from newglusterbugs: [Bug 1163543] Fix regression test spurious failures <https://bugzilla.redhat.com/show_bug.cgi?id=1163543>
04:36 glusterbot News from newglusterbugs: [Bug 851956] Geo Replication Supportability enhancements <https://bugzilla.redhat.com/show_bug.cgi?id=851956>
04:36 glusterbot News from newglusterbugs: [Bug 851957] Geo Replication Cgroups Policy <https://bugzilla.redhat.com/show_bug.cgi?id=851957>
04:45 kumar joined #gluster
04:51 msmith_ joined #gluster
04:52 spandit joined #gluster
05:03 badone joined #gluster
05:05 ndarshan joined #gluster
05:06 glusterbot News from newglusterbugs: [Bug 1065631] dist-geo-rep: gsyncd in one of the node crashed with "OSError: [Errno 2] No such file or directory" <https://bugzilla.redhat.com/show_bug.cgi?id=1065631>
05:06 glusterbot News from resolvedglusterbugs: [Bug 1036539] Distributed Geo-Replication enhancements <https://bugzilla.redhat.com/show_bug.cgi?id=1036539>
05:06 glusterbot News from resolvedglusterbugs: [Bug 1025951] slave's timestamp mark attr should be '*.stime' <https://bugzilla.redhat.com/show_bug.cgi?id=1025951>
05:06 glusterbot News from resolvedglusterbugs: [Bug 1025952] hybrid crawl does not sync symlinks (and error's out) <https://bugzilla.redhat.com/show_bug.cgi?id=1025952>
05:06 glusterbot News from resolvedglusterbugs: [Bug 1025966] RFE: use tar+ssh as an option for data synchronization <https://bugzilla.redhat.com/show_bug.cgi?id=1025966>
05:06 glusterbot News from resolvedglusterbugs: [Bug 1024467] Dist-geo-rep : Change in meta data of the files on master doesn't get propagated to slave. <https://bugzilla.redhat.com/show_bug.cgi?id=1024467>
05:17 aravindavk joined #gluster
05:20 Pupeno joined #gluster
05:20 bala joined #gluster
05:21 karnan joined #gluster
05:31 rafi1 joined #gluster
05:36 glusterbot News from newglusterbugs: [Bug 1117018] Geo-rep: No cleanup for files "XSYNC-CHANGELOG" at working_dir for master volume. <https://bugzilla.redhat.com/show_bug.cgi?id=1117018>
05:36 glusterbot News from newglusterbugs: [Bug 1083963] Dist-geo-rep : after renames on master, there are more number of files on slave than master. <https://bugzilla.redhat.com/show_bug.cgi?id=1083963>
05:36 glusterbot News from newglusterbugs: [Bug 1117010] Geo-rep: No cleanup for "CHANGELOG" files at bricks from master and slave volumes. <https://bugzilla.redhat.com/show_bug.cgi?id=1117010>
05:36 glusterbot News from newglusterbugs: [Bug 1105283] Failure to start geo-replication. <https://bugzilla.redhat.com/show_bug.cgi?id=1105283>
05:36 glusterbot News from newglusterbugs: [Bug 1108502] [RFE] Internalize master/slave verification (gverify) <https://bugzilla.redhat.com/show_bug.cgi?id=1108502>
05:36 glusterbot News from newglusterbugs: [Bug 1116168] RFE: Allow geo-replication to slave Volume in same trusted storage pool <https://bugzilla.redhat.com/show_bug.cgi?id=1116168>
05:36 glusterbot News from newglusterbugs: [Bug 1131447] [Dist-geo-rep] : Session folders does not sync after a peer probe to new node. <https://bugzilla.redhat.com/show_bug.cgi?id=1131447>
05:36 glusterbot News from newglusterbugs: [Bug 1136312] geo-rep mount broker setup has to be simplified. <https://bugzilla.redhat.com/show_bug.cgi?id=1136312>
05:36 glusterbot News from resolvedglusterbugs: [Bug 1099041] Dist-geo-rep : geo-rep create push-pem fails to push pem keys to slaves and consequently fails to setup geo-rep. <https://bugzilla.redhat.com/show_bug.cgi?id=1099041>
05:36 glusterbot News from resolvedglusterbugs: [Bug 1081337] geo-replication create doesn't take into account ssh identity file <https://bugzilla.redhat.com/show_bug.cgi?id=1081337>
05:36 glusterbot News from resolvedglusterbugs: [Bug 1121072] [Dist-geo-rep] : In a cascaded setup, after hardlink sync, slave level 2 volume has sticky bit files found on mount-point. <https://bugzilla.redhat.com/show_bug.cgi?id=1121072>
05:36 glusterbot News from resolvedglusterbugs: [Bug 1096025] Geo-replication helper script (gverify.sh) syntax errors <https://bugzilla.redhat.com/show_bug.cgi?id=1096025>
05:36 glusterbot News from resolvedglusterbugs: [Bug 1146263] Initial Georeplication fails to use correct GID on folders ONLY <https://bugzilla.redhat.com/show_bug.cgi?id=1146263>
05:46 haomaiwa_ joined #gluster
05:49 bala joined #gluster
05:51 anil joined #gluster
05:52 atalur joined #gluster
05:56 dusmant joined #gluster
05:57 badone joined #gluster
05:59 nshaikh joined #gluster
06:06 glusterbot News from newglusterbugs: [Bug 847842] [RFE] Active-Active geo-replication <https://bugzilla.redhat.com/show_bug.cgi?id=847842>
06:06 glusterbot News from newglusterbugs: [Bug 851960] [RFE] Use libgfapi with Geo Replication <https://bugzilla.redhat.com/show_bug.cgi?id=851960>
06:06 glusterbot News from newglusterbugs: [Bug 915996] [RFE] Cascading Geo-Replication Weighted Routes <https://bugzilla.redhat.com/show_bug.cgi?id=915996>
06:06 glusterbot News from newglusterbugs: [Bug 1165140] hardcoded gsyncd path causes geo-replication to fail on non-redhat systems <https://bugzilla.redhat.com/show_bug.cgi?id=1165140>
06:06 glusterbot News from newglusterbugs: [Bug 1165142] hardcoded gsyncd path causes geo-replication to fail on non-redhat systems <https://bugzilla.redhat.com/show_bug.cgi?id=1165142>
06:06 glusterbot News from newglusterbugs: [Bug 1171313] Failure to sync files to slave with 2 bricks. <https://bugzilla.redhat.com/show_bug.cgi?id=1171313>
06:06 glusterbot News from newglusterbugs: [Bug 1172058] push-pem does not distribute common_secret.pem.pub <https://bugzilla.redhat.com/show_bug.cgi?id=1172058>
06:06 glusterbot News from newglusterbugs: [Bug 1173732] Glusterd fails when script set_geo_rep_pem_keys.sh is executed on peer <https://bugzilla.redhat.com/show_bug.cgi?id=1173732>
06:06 glusterbot News from newglusterbugs: [Bug 1162905] hardcoded gsyncd path causes geo-replication to fail on non-redhat systems <https://bugzilla.redhat.com/show_bug.cgi?id=1162905>
06:07 soumya joined #gluster
06:09 nishanth joined #gluster
06:10 vikumar joined #gluster
06:15 hchiramm joined #gluster
06:20 bala joined #gluster
06:21 prasanth_ joined #gluster
06:34 nshaikh joined #gluster
06:46 atalur joined #gluster
06:52 LebedevRI joined #gluster
06:57 TvL2386 joined #gluster
07:06 glusterbot News from newglusterbugs: [Bug 1029597] Geo-replication not work, rsync command error. <https://bugzilla.redhat.com/show_bug.cgi?id=1029597>
07:06 glusterbot News from newglusterbugs: [Bug 1032172] Erroneous report of success report starting session <https://bugzilla.redhat.com/show_bug.cgi?id=1032172>
07:06 glusterbot News from newglusterbugs: [Bug 997206] [RFE] geo-replication to swift target <https://bugzilla.redhat.com/show_bug.cgi?id=997206>
07:20 Pupeno joined #gluster
07:20 raghu` joined #gluster
07:34 pcaruana joined #gluster
07:36 fandi_ joined #gluster
07:56 jvandewege_ joined #gluster
07:58 AaronGreen joined #gluster
07:58 suman_d_ joined #gluster
07:59 misko__ joined #gluster
08:00 xavih_ joined #gluster
08:00 Bardack joined #gluster
08:00 rafi joined #gluster
08:01 hchiramm__ joined #gluster
08:01 bala joined #gluster
08:02 frankS2 joined #gluster
08:02 dockbram joined #gluster
08:03 _br_ joined #gluster
08:06 glusterbot News from newglusterbugs: [Bug 1136769] AFR: Provide a gluster CLI for automated resolution of split-brains. <https://bugzilla.redhat.com/show_bug.cgi?id=1136769>
08:07 Philambdo joined #gluster
08:10 hchiramm joined #gluster
08:21 JonathanS joined #gluster
08:35 fsimonce joined #gluster
08:37 Philambdo joined #gluster
08:41 fandi_ joined #gluster
08:49 malevolent joined #gluster
08:55 saurabh joined #gluster
09:04 atalur joined #gluster
09:04 Leo__ joined #gluster
09:05 maveric_amitc_ joined #gluster
09:11 Pupeno joined #gluster
09:13 aravindavk joined #gluster
09:19 Leo__ Hello
09:19 glusterbot Leo__: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
09:20 hchiramm joined #gluster
09:21 Leo__ hello, guys. I'm using glusterfs 3.2 on my system; however, I have a problem with it.
09:22 Leo__ I removed files directly on all my glusterfs servers via "rm -rf". The servers are the ones running the glusterfs processes (not mounted clients, just the server processes).
09:22 Leo__ I know that it's my fault.. anyway, it has caused a problem with disk usage.
09:23 Leo__ https://www.irccloud.com/pastebin/yNgrcDoK
09:24 Leo__ I removed about 20TB of data, but it only freed about 2GB.
09:24 Leo__ I KNOW THAT THEY were LINKED FILES.
09:24 Leo__ how can I fix it? please let me know solutions if you guys know.
09:25 Leo__ somebody help me..
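What Leo__ describes is usually down to the GFID hardlinks each brick keeps under its .glusterfs directory: removing the named files directly on the bricks (rather than through a client mount) leaves those hardlinks behind, so almost no space is freed. A minimal sketch for spotting the leftovers, assuming a brick at /export/brick1 (the path is made up for illustration), run on each server:

    # regular files whose only remaining link is the .glusterfs GFID copy
    find /export/brick1/.glusterfs -type f -links 1

Anything matched should be checked carefully before it is removed from a brick.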
09:27 atalur joined #gluster
09:28 kovshenin joined #gluster
09:34 aravindavk joined #gluster
09:39 partner_ hmm gluster volume status myvol detail does not report details with 3.6.1 for volumes running on 3.4.5.. plain status works
09:40 jiffin joined #gluster
09:43 pcaruana joined #gluster
09:44 haomaiwa_ joined #gluster
09:44 deepakcs joined #gluster
09:46 lalatenduM_ joined #gluster
09:50 partner_ cli.log does not reveal any errors.. [cli-rpc-ops.c:6548:gf_cli_status_cbk] 0-cli: Received response to status cmd
09:50 rafi1 joined #gluster
09:50 partner_ but results empty output
09:51 Debloper joined #gluster
09:56 lalatenduM joined #gluster
10:20 rafi joined #gluster
10:37 ghenry joined #gluster
10:37 ghenry joined #gluster
10:40 ricky-ti1 joined #gluster
10:50 badface joined #gluster
10:50 badface Hi all
10:50 badface there is someone that uses glusterfs on Amazon AWS ?
10:51 ama Hi there.
10:51 badface I've a problem in the boot phase
10:51 badface version is 3.6.1
10:52 badface configuration is very easy, 2 amazon linux instance in different availability zone, with a replica volume
10:53 badface when I start the servers, the gluster volume doesn't mount at boot on both servers
10:53 badface but if i do a stop/start of the volume
10:54 badface and try again the mount works
10:54 badface is it a normal behaviour ?
11:08 partner_ i would guess it's related more to the order things come up during boot, perhaps gluster mounts are attempted before all the necessary components are up?
11:09 prasanth_ joined #gluster
11:09 partner_ most likely if you just try to mount them again they will work ie. no volume restart required
11:17 badface I'm not sure, because if i try without the stop and start, the mount command hangs until a time-out...
11:27 harish_ joined #gluster
11:40 partner_ any hints on the logs?
11:40 partner_ oh, left already, i've ignored all the millions of joins and parts..
11:42 bala joined #gluster
11:42 fandi joined #gluster
11:54 lalatenduM joined #gluster
11:59 diegows joined #gluster
12:11 itisravi_ joined #gluster
12:13 itisravi joined #gluster
12:18 calum_ joined #gluster
12:27 lalatenduM_ joined #gluster
12:30 lalatenduM_ joined #gluster
12:39 dusmant joined #gluster
12:43 pcaruana joined #gluster
12:43 psilvao1 joined #gluster
12:46 psilvao1 Hi guys! I need to know if it's possible to use a directory as a brick of a gluster volume, because all the documentation says we must use a disk partition or a dedicated disk.. thanks in advance
12:50 partner_ sure you can
12:54 psilvao1 Ok Partner_ thanks
12:59 edwardm61 joined #gluster
13:06 calisto joined #gluster
13:13 bala joined #gluster
13:29 fandi joined #gluster
13:35 fandi joined #gluster
13:44 plarsen joined #gluster
13:50 kumar joined #gluster
13:55 shubhendu joined #gluster
14:07 Fen1 joined #gluster
14:13 virusuy joined #gluster
14:18 psilvao joined #gluster
14:31 lmickh joined #gluster
14:33 NuxRo joined #gluster
14:41 msmith_ joined #gluster
14:41 NuxRo joined #gluster
14:47 Pupeno joined #gluster
14:49 lalatenduM joined #gluster
14:56 badface joined #gluster
15:05 badface partner: sorry for disconnect
15:09 RameshN joined #gluster
15:13 wushudoin joined #gluster
15:15 kovsheni_ joined #gluster
15:30 shubhendu joined #gluster
15:38 sysadmin-di2e badface: do you have glusterfs mounts labeled as _netdev in the fstab file?
16:00 nbalacha joined #gluster
16:07 badface sysadmin-di2e: yes, _netdev is on fstab
16:10 strata badface: not on AWS, but I am doing gluster at boot on another cloud and _netdev was hit and miss for me. I ended up using autofs instead. Something to think about if your /etc/fstabs don't work 100% of the time.
16:12 badface strata: what is strange is that if i do a stop/start of the gluster volume and try the mount command again, it works without a problem...
16:13 strata yeah but when you first boot up, it sometimes takes too long and systemd or init or whatever times out trying to mount it and moves on.
16:14 dberry joined #gluster
16:14 dberry joined #gluster
16:15 lalatenduM joined #gluster
16:16 strata autofs is a much cleaner way of doing things like nfs or gluster mounts anyway because it can handle unexpected detachments and unmount the volume, then remount automatically when it comes back.
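A minimal sketch of the two approaches discussed above, assuming a server named gfs1 and a volume named gv0 (both hypothetical):

    # /etc/fstab entry with _netdev, per sysadmin-di2e's question
    gfs1:/gv0  /mnt/gv0  glusterfs  defaults,_netdev  0 0

    # autofs alternative along the lines strata describes
    # /etc/auto.master:   /mnt/auto  /etc/auto.gluster
    # /etc/auto.gluster:  gv0  -fstype=glusterfs  gfs1:/gv0

Exact option support can vary by distribution and gluster version.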
16:21 kovshenin joined #gluster
16:22 hagarth joined #gluster
16:26 suman_d_ joined #gluster
16:26 calisto joined #gluster
16:26 Gilbs joined #gluster
16:31 lalatenduM_ joined #gluster
16:36 misko_ joined #gluster
16:37 lala__ joined #gluster
16:37 CP|AFK joined #gluster
16:39 lala__ joined #gluster
16:43 Gilbs I enabled geo-replication last week and noticed only a few folders were replicated to the slave server.  I started and stopped everything, but only new items are replicated not any current folders/files.  Any ideas?
16:44 badface joined #gluster
16:46 Gilbs3 joined #gluster
16:49 hagarth Gilbs3: better to send out a mail on gluster-users with as many details about the problem.
16:50 calisto1 joined #gluster
16:50 vimal joined #gluster
16:56 Gilbs joined #gluster
17:00 kovshenin joined #gluster
17:06 calisto joined #gluster
17:18 coredump joined #gluster
17:33 lmickh_ joined #gluster
17:40 Danishman joined #gluster
17:42 AGTT joined #gluster
17:42 Telsin joined #gluster
17:47 AGTT Hi. Would this be an appropriate place to discuss an issue?
17:49 Gilbs Yes, but it's pretty dead so far today.
17:49 soumya_ joined #gluster
17:49 semiosis hi
17:49 glusterbot semiosis: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
17:49 semiosis AGTT: ^^^
17:50 AGTT Should I try anyway? -- OK, sure; I understand.
18:05 JoeJulian Gilbs: What version?
18:06 coredump joined #gluster
18:15 edong23_ joined #gluster
18:15 calisto joined #gluster
18:16 elico joined #gluster
18:16 _br_- joined #gluster
18:17 nocturn00 joined #gluster
18:18 AGTT Setup: I have two (physical) machines that I had setup some time ago. They have just two replicated Gluster volumes. Issue: Recently, I have noticed that when one of the hosts is offline, both of the volumes become inaccessible to the already mounted mountpoints on the servers themselves: A listing command for any of the volumes, or `mountpoint <mountpoint>` hangs until some seconds after both hosts* comes online again. This happens on
18:20 JoeJulian Line was too long for IRC. Everything after "This happens on a" is not there.
18:20 JoeJulian The answer is usually iptables and/or hostnames.
18:21 AGTT pacman -Ss glibc : "core/glibc 2.20-6 (base) [installed]"
18:21 AGTT I can ping each machine from the other
18:21 AGTT ...sorry, fixing it now
18:22 JoeJulian Oooh, arch.
18:22 AGTT This happens on any of the servers. If I unmount a volume before I disconnect one of the hosts and remount it while disconnected, the volume mounts successfully and the data is accessible; but if I reconnect and re-disconnect, `ls` hangs again. If I mount the volume on a client, and disconnect a server, the client can access the volume. I have also noticed that during the rebooting of one host, the other can access the volume before glu
18:22 AGTT The logs for the volume say "[...] server <IP>:<Port> has not responded in the last 42 seconds, disconnecting.", which, something like this I would expect... There are also two message : about a missing entry: "[...] 0-data-dht: Entry /WWW/META-INF missing on subvol data-replicate-0" ; and another about a possible split-brain file, but `heal` commands show `0 entries`. I have just tried with the firewall off, but it did not change anyth
18:22 AGTT I am using Arch (x64) for all machines with Gluster 3.6.1 (Arch: 3.6.1-1) -- and another thing: gluster does not show what version it is from the CLI: `gluster --version | head -1` : "glusterfs  built on [...]" (notice the two consecutive spaces at the missing version). Thank you for your time reading all this.
18:23 JoeJulian That's the packager's fault.
18:23 AGTT For gluster not displaying its version?
18:23 AGTT I guess I could reinstall if I have to.
18:24 JoeJulian So what's not happening, afaict, is the clients are never initially connecting to each server, probably only connecting to one.
18:25 AGTT sorry, what does afaict mean? I'm not very much into acronyms... or is it just a typo?
18:25 JoeJulian Sergej isn't responsive. Since I'm no longer the only Arch user, I guess I'll step up my game and make our own official repos.
18:25 JoeJulian As far as I can tell.
18:26 AGTT I was thinking of that myself.
18:26 AGTT But I was planning it for later...
18:27 nocturn joined #gluster
18:27 hchiramm_ joined #gluster
18:27 AGTT So, the official gluster package would not be good enough?
18:27 wgao joined #gluster
18:28 JoeJulian Only in that Sergej doesn't set the version string. That's not a huge deal.
18:28 AGTT By reinstalling, I meant the OS, and not the package; I just realized that it's ambiguous.
18:29 AGTT If that's the only thing, it's OK for me.
18:29 JoeJulian I wouldn't bother reinstalling. I would check iptables, any hardware firewalls that may be in play, client logs, maybe even a tcpdump.
18:30 edong23 joined #gluster
18:36 hchiramm joined #gluster
18:58 calisto joined #gluster
18:58 badone joined #gluster
19:05 PeterA joined #gluster
19:06 PeterA i'm still having a GFS directory with a quota mismatch
19:06 PeterA how do i find out which file is hogging the quota?
19:10 AGTT I think that you can use `du`.
19:11 PeterA i can pin down a directory hogging the quota
19:11 PeterA but the sum of the file sizes is different from the directory quota usage
19:12 PeterA actually
19:12 PeterA the size of du is correct
19:12 PeterA but the quota for the gfs directory is wrong
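A quick way to compare the two figures PeterA mentions, assuming a volume named vol1 mounted at /mnt/vol1 and the directory in question at /data (all names hypothetical); the first number comes from gluster's quota accounting, the second from an ordinary crawl of the client mount:

    gluster volume quota vol1 list /data
    du -sh /mnt/vol1/data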
19:13 roost joined #gluster
19:14 roost Is it possible to update gluster from 3.5.2 to 3.6.1 in middle of rebalancing?
19:15 JoeJulian I wouldn't.
19:17 roost We've been trying to rebalance for a month now and at first we thought it was just going slow but upon further inspection it keeps getting stuck on one folder.
19:17 roost We do stop rebalancing during the day
19:17 roost because it makes everything slow
19:18 JoeJulian If rebalance is aborted, then sure. Just not while it's actively running.
19:19 JoeJulian I wouldn't put a lot of faith in that curing a rebalance though. If you try it and it does, *please* let me know.
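Aborting the rebalance first, as suggested, would look roughly like this (vol1 stands in for the real volume name):

    gluster volume rebalance vol1 status
    gluster volume rebalance vol1 stop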
19:20 roost do you have any suggestions then?
19:21 roost we ran a manual self heal on that folder and it didn't seem to fix anything
19:21 AGTT Is the folder that it is getting stuck on large?
19:21 JoeJulian I do a fix-layout and hope...
19:21 roost it's stuck on fix-layout
19:22 JoeJulian That's odd. That's usually really quick.
19:22 roost every night the rebalancing does nothing because it is still in the fix-layout stage but when we check the log it says fix-layout failed for the same folder
19:23 AGTT I was thinking, if possible, to move the folder to a temporary place, away from the volume, rebalance, and then move the folder back.
19:24 AGTT Are there any file conflicts or split-brain entries?
19:24 JoeJulian From a client, for the folder that's failing as $folder, "setfattr -n trusted.distribute.fix.layout -v 1 $folder". Assuming you get the same error, paste the error and the few seconds surrounding it to fpaste.org so I can take a look.
19:24 JoeJulian The error should be in your client log.
19:25 roost so if I read that correctly it's going to manually run fix layout for just that folder ?
19:27 jobewan joined #gluster
19:28 roost so I ran the command which log file is it supposed to update ?
19:28 roost or do I have to run the rebalance gluster command afterwards ?
19:29 AGTT PeterA: In the documentation, it mentions "quota-deem-statfs on". Do you have that on or off? This is where I found it: https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.1/html/Administration_Guide/chap-User_Guide-Dir_Quota-Display.html#idp15727656
19:29 PeterA yes i already have that turned on
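For reference, the setting AGTT points to is a per-volume option; a sketch, with vol1 standing in for the real volume name:

    gluster volume set vol1 features.quota-deem-statfs on
    gluster volume info vol1   # the option appears under 'Options Reconfigured' once set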
19:31 JoeJulian roost, yes. It will manually fix-layout for that one folder. The log is the client log file which is named based on where you mount it residing in /var/log/glusterfs.
19:31 PeterA http://pastie.org/9804037
19:32 PeterA the quota is showing using 920GB
19:32 PeterA but the actual du is just 116G
19:32 PeterA and the subdirs shows the du
19:32 JoeJulian open deleted file?
19:32 PeterA wonder how to recover the quota from gluster
19:33 roost JoeJulian, i looked at the log and there doesn't seem to be any entries with the path I put in
19:33 roost there are a bunch of disk layout missing entries in the log file for other paths but the folders it refers to are tiny and my guess is the fix layout stage hasn
19:33 roost 't even reached them yet
19:34 JoeJulian Well that's useless..
19:35 JoeJulian Maybe do a "find $clientmount -type d -print0 | xargs -0 setfattr -n trusted.distribute.fix.layout -v 1"
19:36 roost hmm we do have A LOT of directories
19:36 JoeJulian I would create a mount especially for that command so I don't have to worry about a production client having a problem.
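Putting JoeJulian's suggestions together, a sketch (server, volume, and mount point names are made up for illustration):

    mkdir -p /mnt/fixlayout
    mount -t glusterfs server1:/vol1 /mnt/fixlayout    # dedicated client mount, not the production one
    find /mnt/fixlayout -type d -print0 | xargs -0 setfattr -n trusted.distribute.fix.layout -v 1

Any errors should show up in the client log for that mount under /var/log/glusterfs.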
19:38 roost we have like 18,000 directories but i dont even remember the last count but probably way over 5 million files
19:43 roost though the interesting thing is it just constantly repeats the same disk layout missing messages in a loop every second
19:48 roost i think i am retarded lol im sure i figured it out
19:49 roost we will find out by tomorrow
19:49 PeterA anything else could cause the quota mismatch?
19:50 PeterA i looked into inodes of each files
19:50 PeterA in that gfs directory
19:50 PeterA no hard links
19:50 AGTT PeterA: Sorry, I don't really know what could be causing the difference. I don't use quotas, so I don't have experience using that. I'll keep trying to figure it out, however. For me, for some problems, it worked to restart the volume (along with the glusterd on all servers).
19:50 PeterA wonder why gluster is counting it wrong
19:51 PeterA i will need a downtime to try that
19:51 PeterA just wonder why only that one directory is happening on the mismatch
19:51 shaunm joined #gluster
19:52 AGTT Did you find anything in the logs about the mismatch? Any warnings, even?
19:53 AGTT Anything that you think might be related?
19:53 PeterA not at this point
19:55 PeterA i tried to remove path and limit-usage the path again …. no luck...
19:56 AGTT I read for an earlier version, something along the lines that if you exceed the (hard?) quota ... something happens, I forgot what that would be. But, you haven't. And I don't think that it is still valid for this version.
19:56 PeterA right
19:57 calisto joined #gluster
19:57 PeterA is there a way to ask gluster to rescan/recalculate quota of a path?
19:58 AGTT I don't know.
19:58 AGTT Maybe JoeJulian might know.
19:59 PeterA wonder how to kick off a quota crawl
20:01 JoeJulian No clue
20:01 JoeJulian Not even sure that it can be done.
20:01 JoeJulian I would need to dig through the source to see where it stores quota, and how.
20:04 PeterA if there is an xattr on file related to quota?
20:06 JoeJulian I'm guessing there's more than just an xattr, otherwise restarting a brick would take forever.
20:08 ricky-ticky joined #gluster
20:09 PeterA the volume is a dist only
20:09 PeterA and i wonder how the quota got calculated by crawling all the tiles
20:09 PeterA files
20:11 JoeJulian True, that does happen when you first enable quota, doesn't it...
20:11 PeterA ys
20:11 PeterA yes
20:12 PeterA wonder if there is a way to just ask gluster to crawl a particular path
20:15 JoeJulian In the comments, "all quota xattrs can be cleaned up by doing setxattr on special key." but I don't know what key...
20:15 PeterA trusted.glusterfs.quota.size ??
20:18 JoeJulian trusted.glusterfs.quota* is filtered out for all clients with a pid >= 0. That allows the self-heal, rebalance, etc, to use those keys as they present a negative pid.
20:21 jobewan joined #gluster
20:27 PeterA http://pastie.org/9804112
20:27 PeterA is there a way to translate the size?
20:28 cfeller joined #gluster
20:29 jobewan joined #gluster
20:31 calisto joined #gluster
20:31 Gilbs I started  geo-replication on a pair of established servers and noticed that nothing but new files/folders are being replicated.  Is there a way to tell geo-replication to replicate everything instead of anything new?
20:32 JoeJulian getfattr -n trusted.glusterfs.quota.size -e hex iqqa01/
20:32 PeterA trusted.glusterfs.quota.size=0x000000e637a1c000
20:33 JoeJulian So that would be 988775825408 bytes
20:35 PeterA that's exactly the quota displayed!
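The conversion JoeJulian did can be reproduced straight from the xattr's hex value, for example in bash:

    printf '%d\n' 0x000000e637a1c000
    # 988775825408 bytes, roughly the 920GB the quota output shows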
20:37 PeterA the getfattr only work against directory?
20:37 PeterA hmm…or the trusted.glusterfs.quota.size attr only exist on directory?
20:38 PeterA doesn't seems like exist for files
20:38 JoeJulian Would make sense
20:42 PeterA do u think setfattr on the trusted.glusterfs.quota.size would let gluster recalculate the size?
20:42 JoeJulian No, I think it would change it.
20:42 PeterA if put it to a negative number?
20:43 JoeJulian I don't think it would really hurt to try it. You're not replicated so you don't have to worry about split-brain.
20:53 PeterA or u think do a remove on the trusted.glusterfs.quota.size of that directory would trigger the quota to recrawl?
20:58 diegows joined #gluster
21:02 JoeJulian I just realized that directories exist on every brick. They would all need to be affected.
21:02 jobewan joined #gluster
21:05 PeterA meaning shouldn't setfattr on that directory.....
21:06 JoeJulian Not unless you do it to every brick.
21:06 PeterA ic
21:06 PeterA what if i do it on a gfs client
21:06 JoeJulian it won't pass through.
21:08 JoeJulian aha, marker-quota... now... what use is that. :/
21:20 JoeJulian Looks like the marker directory (under .glusterfs) works on timestamps
21:21 PeterA timestamps of the gfs directory ?
21:26 ttkg joined #gluster
21:30 JoeJulian I don't have a volume with quota enabled. Something under .glusterfs with the word "marker" or "quota" in it.
21:31 JoeJulian A directory, that is.
21:35 PeterA hmm…no such a dir...
21:37 PeterA http://pastie.org/9804222
21:40 systemonkey joined #gluster
21:54 Gilbs left #gluster
22:27 diegows joined #gluster
22:28 msmith_ joined #gluster
22:33 kovshenin joined #gluster
22:40 tessier_ I know many small files are a worst-case situation for gluster, but just how bad would a couple million files of 260k average size be? Availability and manageability are more important than performance in this case. The files are not frequently accessed, such as in a mail spool.
22:41 tessier_ I'm considering gluster or mogilefs. gluster would be easier to integrate into the web app which needs clustered access to these files since it is still accessible as a filesystem.
22:42 side_control joined #gluster
22:45 JoeJulian should be fine
22:53 Pupeno joined #gluster
23:01 squizzi joined #gluster
23:04 side_control joined #gluster
23:10 semiosis wow mogilefs is still around
23:10 side_control joined #gluster
23:18 hchiramm joined #gluster
23:27 roost joined #gluster
23:36 hchiramm joined #gluster
23:37 msmith_ joined #gluster
23:44 tessier_ semiosis: yeah, it is! But it has an http/ReST API which is good but in this case would require some significant code changes in our app to pull the files via HTTP rather than just read them off of a seemingly normal filesystem such as gluster presents.
23:44 tessier_ JoeJulian: Thanks! I have a feeling gluster is the way we're going to go with this.
23:44 semiosis srsly, would you rather a) run perl in production, or b) use S3, if you need http object storage
23:44 semiosis lol
23:45 semiosis the days of running perl in production are behind us, i hope
23:46 taea00 joined #gluster
23:48 taea00 Good evening.  If I set up glusterfs on two systems and have a primary with an IP for NFS, can I then set up glusterfs to do failover of the IP to keep NFS up?  Or do I need something like Congas through RH?
23:48 Pupeno joined #gluster
23:55 daMaestro joined #gluster
23:58 semiosis taea00: glusterfs doesnt do IP failover
23:58 semiosis taea00: it's a virtual filesystem
23:59 semiosis if you use the glusterfs native FUSE client (or libgfapi aware apps) then you get HA for free.  if you use NFS clients with the gluster-nfs server you'll probably want to set up a VIP
23:59 semiosis people often use ctdb, or corosync, for that, afaik
23:59 semiosis but i never have
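A sketch of the native FUSE mount semiosis refers to, assuming servers gfs1/gfs2 and a volume gv0 (all hypothetical); the backup volfile server only matters for the initial volfile fetch, after which the client talks to all bricks itself (option spelling varies by version: backupvolfile-server vs backup-volfile-servers):

    mount -t glusterfs -o backupvolfile-server=gfs2 gfs1:/gv0 /mnt/gv0

For NFS clients, the usual pattern is a floating VIP in front of the gluster-nfs servers managed by ctdb or corosync, as noted above.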
