
IRC log for #gluster, 2014-12-05


All times shown according to UTC.

Time Nick Message
00:12 georgeh-LT2 joined #gluster
00:38 jaank joined #gluster
00:45 elico left #gluster
00:45 B21956 left #gluster
00:53 plarsen joined #gluster
00:59 topshare joined #gluster
01:11 calisto joined #gluster
01:47 elico joined #gluster
01:50 cmtime I can't seem to solve "0-management: Initialization of volume 'management' failed, review your volfile again" after a reboot
01:59 haomaiwa_ joined #gluster
02:10 doekia joined #gluster
02:45 elyograg left #gluster
02:45 meghanam joined #gluster
02:45 meghanam_ joined #gluster
02:46 hagarth joined #gluster
03:02 topshare joined #gluster
03:26 soumya_ joined #gluster
03:34 kanagaraj joined #gluster
03:44 bharata-rao joined #gluster
03:51 kdhananjay joined #gluster
03:52 _Bryan_ joined #gluster
03:57 smohan joined #gluster
04:03 bala joined #gluster
04:11 ppai joined #gluster
04:11 eshy joined #gluster
04:14 eshy joined #gluster
04:16 nbalacha joined #gluster
04:27 bala joined #gluster
04:30 topshare joined #gluster
04:32 jiffin joined #gluster
04:34 anoopcs joined #gluster
04:40 RameshN joined #gluster
04:42 topshare joined #gluster
04:49 rafi1 joined #gluster
04:51 nishanth joined #gluster
04:52 y4m4 joined #gluster
04:54 bharata-rao joined #gluster
05:03 rjoseph joined #gluster
05:13 hagarth joined #gluster
05:24 kanagaraj joined #gluster
05:25 saurabh joined #gluster
05:29 meghanam joined #gluster
05:29 meghanam_ joined #gluster
05:29 poornimag joined #gluster
05:42 hagarth joined #gluster
05:45 overclk joined #gluster
05:49 ramteid joined #gluster
05:51 soumya_ joined #gluster
06:12 aravindavk joined #gluster
06:16 raghu` joined #gluster
06:23 topshare joined #gluster
06:28 atalur joined #gluster
06:38 edong23 joined #gluster
06:43 andreask left #gluster
06:43 hagarth joined #gluster
06:47 Philambdo joined #gluster
06:47 bala joined #gluster
07:03 ctria joined #gluster
07:08 topshare joined #gluster
07:15 SOLDIERz joined #gluster
07:19 SOLDIERz joined #gluster
07:20 nbalacha joined #gluster
07:24 rgustafs joined #gluster
07:25 faizan joined #gluster
07:26 topshare joined #gluster
07:29 kovshenin joined #gluster
07:33 atalur joined #gluster
07:45 atalur joined #gluster
07:45 glusterbot News from newglusterbugs: [Bug 1170942] More than redundancy bricks down, leads to the persistent write return IO error, then the whole file can not be read/write any longer, even all bricks going up <https://bugzilla.redhat.com/show_bug.cgi?id=1170942>
08:00 nbalacha joined #gluster
08:14 LebedevRI joined #gluster
08:18 jtux joined #gluster
08:20 ricky-ti1 joined #gluster
08:21 tvb joined #gluster
08:21 tvb guys I need urgent help
08:22 tvb mount -t glusterfs file1:/STATIC-DATA /home/sas/domains/domain.com/files
08:22 tvb Mount failed. Please check the log file for more details.
08:22 tvb I can ping file1
08:22 tvb status of gluster volume is started
08:22 tvb however showmount -e static-data
08:22 tvb clnt_create: RPC: Unknown host
08:22 tvb what is causing this?
08:24 jiffin just flush ur iptables
08:24 jiffin iptables -F
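
Flushing every rule works as a quick test, but a narrower option is to open only the ports Gluster needs. A sketch; the numbers below match 3.3-era defaults (24007 for glusterd, one brick port each from 24009 up, portmapper, and the Gluster NFS ports) and may differ on newer releases:

    iptables -A INPUT -p tcp --dport 24007:24008 -j ACCEPT    # glusterd management
    iptables -A INPUT -p tcp --dport 24009:24015 -j ACCEPT    # brick ports, one per brick (3.3 defaults)
    iptables -A INPUT -p tcp --dport 111 -j ACCEPT            # portmapper, needed for showmount/NFS
    iptables -A INPUT -p udp --dport 111 -j ACCEPT
    iptables -A INPUT -p tcp --dport 38465:38467 -j ACCEPT    # Gluster NFS
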
08:24 tvb oh sorry I read the guide wrong
08:24 tvb I should do
08:24 tvb root@app1:~# showmount -e file1
08:24 tvb Export list for file1:
08:24 tvb but I still don't see an export
08:25 tvb there are no iptables running
08:26 jiffin showmount -e <ip address>
08:26 ndevos tvb: you should have a log with some details, something like /var/log/glusterfs/home-sas-domains-domain.com-files.log
08:26 tvb jiffin: ok ill try
08:26 tvb ndevos: ill check
08:26 ndevos tvb: and, on the gluster server, you can check for the processes with: gluster volume status STATIC-DATA
08:26 tvb jiffin:
08:26 tvb root@app1:~# showmount -e 192.168.0.200
08:26 tvb Export list for 192.168.0.200:
08:27 tvb ndevos: https://gist.github.com/tvb/a1f59d26644ad37a9bb3
08:27 ndevos showmount is NFS specific, if you have disabled the nfs-server. the volume will not be listed in showmount
08:28 ndevos or, in case you do not have rpcbind running, the nfs-server can not register the MOUNT service
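
A quick way to check both conditions from a client; this assumes the server is reachable as file1 and that the volume's built-in NFS server has not been disabled:

    rpcinfo -p file1 | grep -E 'mountd|nfs'   # nothing listed means rpcbind has no MOUNT/NFS registration
    showmount -e file1                        # only shows the volume when the Gluster NFS server is up
    gluster volume status STATIC-DATA nfs     # run on a server; reports whether the NFS process is online
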
08:28 tvb ndevos: found the log
08:28 tvb [2014-12-05 09:13:25.085519] E [client-handshake.c:1717:client_query_portmap_cbk] 0-STATIC-DATA-client-0: failed to get the port number for remote subvolume
08:28 ndevos tvb: there should be brick processes in that output, they seem to be missing
08:29 tvb [2014-12-05 09:13:25.082448] E [name.c:245:af_inet_client_get_remote_sockaddr] 0-STATIC-DATA-client-1: DNS resolution failed on host file2.<domain>
08:29 tvb is this critical? It is the second glusterfs server
08:30 ndevos well, the client side needs to talk to the brick processes, if those are not runing, very little will work
08:30 ndevos s/runing/running/
08:30 glusterbot What ndevos meant to say was: An error has occurred and has been logged. Please contact this bot's administrator for more information.
08:30 ndevos uh, no, that is not what I wanted to day
08:30 tvb ndevos: https://gist.github.com/tvb/6438d6718c88033cfc67
08:30 ndevos s/day/say/
08:30 glusterbot What ndevos meant to say was: An error has occurred and has been logged. Please contact this bot's administrator for more information.
08:30 ndevos glusterbot: you're broken.
08:31 ndevos tvb: can you get 'gluster volume info STATIC-DATA' too?
08:31 tvb on app1?
08:32 ndevos doesnt matter where
08:32 tvb https://gist.github.com/tvb/cc69eecd92cf3c7182c2
08:33 ndevos on your file1, there should be a log /var/log/glusterfs/bricks/data-export1.log - that should contain some messages why the brick process failed to start (or got stopped)
08:34 tvb there is no such log
08:36 ndevos do you have any logs under /var/log/glusterfs/bricks/?
08:36 tvb no
08:37 ndevos hmm, okay, glusterd is the process that should start the processes, that log is /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
08:38 tvb yes ill gist it
08:38 ndevos is there a gist command, similar to ,,(paste)?
08:38 glusterbot For RPM based distros you can yum install fpaste, for debian, ubuntu, and arch it's pastebinit. Then you can easily pipe command output to [f] paste [binit] and it'll give you a URL.
08:39 tvb ndevos: first etc-glusterfs-glusterd.log is saying this https://gist.github.com/tvb/e2b1a419d9299d114c30
08:39 ghenry joined #gluster
08:40 tvb I fixed that by editing the hosts file
08:40 tvb now it is saying
08:40 tvb https://gist.github.com/tvb/5659e5d56bb8cdd9ee81
08:43 tvb glustershd.log is outputting
08:43 tvb [2014-12-05 09:42:51.345432] I [client.c:2090:client_rpc_notify] 0-STATIC-DATA-client-0: disconnected
08:43 ndevos glustershd.log is for the self-heal-daemon, it is an other glusterfs client, similar to the fuse mount
08:44 tvb ok so not important now
08:44 ndevos when you fixed the hostname resolution, did you restart glusterd?
08:44 tvb hmm
08:44 tvb yes
08:44 tvb did the mount again
08:44 tvb seems it is mounted now
08:44 ndevos ah, good :)
08:44 tvb file1:/STATIC-DATA on /home/sas/domains/domain.com/files type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
08:45 ndevos remember that all hostnames from the servers that host bricks need to get resolved on the clients too - the client receive the 'gluster volume info' details of the volume, and will contact the bricks directly
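
In practice that means every client needs name resolution for all brick hosts, via DNS or /etc/hosts; the addresses below are illustrative, not taken from this setup:

    # /etc/hosts on each client
    192.168.0.200   file1.domain.com  file1
    192.168.0.201   file2.domain.com  file2
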
08:46 glusterbot News from newglusterbugs: [Bug 1170958] Request removal of duplicate Gerrit account for ndevos <https://bugzilla.redhat.com/show_bug.cgi?id=1170958>
08:46 glusterbot News from newglusterbugs: [Bug 1170966] glusterbot has a broken regex s/substituion/replacing/ command <https://bugzilla.redhat.com/show_bug.cgi?id=1170966>
08:46 tvb ndevos: yeah. I have no idea why it was missing from the hosts file
08:47 ndevos tvb: okay, well, at least you found the issue, and I'm sure you'll figure it out next time much quicker if it happens again
08:48 tvb ndevos: Yeah im sure. Thanks
08:50 ndevos you're welcome!
08:50 rgustafs joined #gluster
08:51 vimal joined #gluster
08:52 tvb ndevos: is this intended behaviour?
08:52 tvb /var/log/glusterfs/home-sas-domains-sendasmile.com-files.log: [2014-12-05 09:49:31.903416] E [socket.c:1715:socket_connect_finish] 0-STATIC-DATA-client-0: connection to 192.168.0.200:24009 failed (Connection refused)
08:52 tvb this is app2
08:53 tvb app2 has the mount active
08:53 tvb so it is working
08:53 tvb just don't know what the log entry means
08:54 ndevos tvb: that means that app2 can not connect to the process on 192.168.0.200, tcp-port 24009 - I guess that is one of the brick processes
08:55 ndevos 'gluster volume status STATIC-DATA' should show the ports for each brick
08:55 tvb yes; Brick file2.domain.com:/data/export2  24009  Y  1662
08:55 ndevos do you have a firewall?
08:55 tvb no
08:57 ndevos maybe drop this restriction from the volume? auth.allow: 192.168.0.*
08:57 ndevos 'gluster volume reset STATIC-DATA auth.allow' would do that
08:58 tvb ok ill check into that
08:59 tvb it might be intentional but I have not configured it so I'm not sure
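
For reference, a sketch of how that restriction is set, checked and cleared on the volume from this conversation:

    gluster volume set STATIC-DATA auth.allow '192.168.0.*'   # restrict clients to this range
    gluster volume info STATIC-DATA | grep auth.allow         # confirm the current value
    gluster volume reset STATIC-DATA auth.allow               # back to the default (allow all clients)
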
08:59 sage_ joined #gluster
09:04 malevolent joined #gluster
09:05 tvb ndevos: got a question
09:05 tvb the mount is now set as file1:/STATIC-DATA on /home/sas/domains/sendasmile.com/files type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
09:06 tvb so it connects with file1
09:06 tvb what happens if file1 is MIA? I'm guessing glusterfs cannot connect and therefore the mount is broken even when we have a file2
09:07 mhi^ joined #gluster
09:07 ndevos it connects with file1 and received the volume definition (similar to 'gluster volume info'), after that, it connects to all the bricks directly (by hostname/ip in that definition)
09:08 tvb and if file1 is not around?
09:08 ndevos when one brick (or server) goes down, it will still be able to use the other server
09:08 ndevos the SPOF is when mounting, but you can pass a 2nd server as backup over a mount option
09:08 tvb even if the actual mount is "mount -t glusterfs file1:/STATIC-DATA /home/sas/domains/sendasmile.com/files"
09:09 ndevos in that case, file1 is only used to get the volume definition, there is no requirement for file1 to have any of the bricks for the STATIC-DATA volume
09:10 ndevos the volume definition looks like this: https://gist.github.com/tvb/6438d6718c88033cfc67#file-gistfile1-txt-L8
09:12 ndevos and /sbin/mount.glusterfs is a script, it should list the option to pass a backup-server on the mount/fstab line
09:13 tvb the mount line is available in the rc.local file
09:15 tvb ndevos: so I understand that file1 is used to get the volume definitions but when file1 is not around there is no backup available at this point
09:15 tvb there should be a backup-server present in the mount line itself?
09:16 glusterbot News from newglusterbugs: [Bug 1051992] Peer stuck on "accepted peer request" <https://bugzilla.redhat.com/show_bug.cgi?id=1051992>
09:18 ndevos tvb: yes, something like "mount -t glusterfs -o backup-server=file2 file1:/STATIC-DATA /mnt"
09:18 tvb mount -t glusterfs server1,server2,.. serverN:/<volname> \ <mount_point>
09:19 ndevos no, "backup-server" is really like a mount option
09:20 tvb if I read this page right http://blog.gluster.org/category/mount-glusterfs/
09:20 tvb I could do both
09:20 tvb before "Deprecation" chapter
09:22 liquidat joined #gluster
09:23 kumar joined #gluster
09:24 Fen2 joined #gluster
09:25 ndevos tvb: note that the backup-server mount option did change in some release, you need to check what the option in your version of /sbin/mount.glusterfs is
09:26 tvb hmm funny
09:26 tvb https://gist.github.com/tvb/a38983d0a8b5f57daef1
09:31 tvb So no way to tell
09:32 Norky joined #gluster
09:32 tvb Don't see it in the code either
09:33 monotek joined #gluster
09:38 hagarth joined #gluster
09:38 partner morning. can i do a targeted fix-layout somehow? i calculated it'll take 45 days for "normal" to finish up while logs are flooded with just a couple of dirs..
09:42 tvb ndevos: root@app1:~# glusterd --version = glusterfs 3.3.0
09:42 nbalacha joined #gluster
09:42 tvb same for glusterfs
09:44 necrogami joined #gluster
09:45 tvb ndevos: is backup-server option available in 3.3.0
09:46 ndevos tvb: I'm not sure, if there is no hit for 'grep backup /sbin/mount.glusterfs' then I guess not
09:46 ndevos also, 3.3 does not receive updates anymore, you should think about upgrading to 3.6, 3.5 or 3.4
09:47 tvb ok
09:49 tvb root@app1:~# grep backup /sbin/mount.glusterfs
09:49 tvb if [ -n "$backupvolfile_server" ]; then
09:49 tvb cmd_line1=$(echo "$cmd_line --volfile-server=$backupvolfile_server");
09:49 tvb "backupvolfile-server")
09:49 tvb backupvolfile_server=$value ;;
09:50 ndevos tvb: right, so you can "mount -t glusterfs -o backupvolfile-server=file2 file1:/STATIC-DATA /mnt'
09:51 tvb cool
09:51 tvb thanks
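
The same option works from /etc/fstab, which is usually preferable to an rc.local mount line; a sketch using the names from this conversation:

    # /etc/fstab
    file1:/STATIC-DATA  /home/sas/domains/domain.com/files  glusterfs  defaults,_netdev,backupvolfile-server=file2  0 0
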
09:57 [Enrico] joined #gluster
10:04 atalur joined #gluster
10:14 bala joined #gluster
10:16 glusterbot News from newglusterbugs: [Bug 1171048] Timestamp difference on replica <https://bugzilla.redhat.com/show_bug.cgi?id=1171048>
10:16 aravindavk joined #gluster
10:19 nbalacha joined #gluster
10:26 cmtime joined #gluster
10:31 SOLDIERz joined #gluster
10:33 bala joined #gluster
10:44 faiz joined #gluster
10:46 ricky-ticky1 joined #gluster
10:56 elico joined #gluster
11:27 hagarth joined #gluster
11:32 edward1 joined #gluster
11:43 soumya joined #gluster
11:44 deniszh joined #gluster
11:53 pcaruana joined #gluster
11:54 diegows joined #gluster
11:59 calisto joined #gluster
12:03 soumya__ joined #gluster
12:08 tetreis joined #gluster
12:13 hagarth joined #gluster
12:38 misko_ Folks
12:38 misko_ root@xfc2:~# mount| grep " /shared "
12:38 misko_ gfs0:/diskimg on /shared type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
12:38 misko_ root@xfc2:~# ls -l /shared
12:38 misko_ ls: cannot access /shared: Transport endpoint is not connected
12:38 misko_ is this a correct behavior?
12:38 misko_ how to correctly recover from that?
12:39 ndevos misko_: that is not correct behaviour, mostly this happens when the glusterfs/fuse mount process crashed or got some other severe issue
12:39 ndevos misko_: you need to unmount and mount again
12:39 misko_ real issue is this
12:39 misko_ [81052.843097] Out of memory: Kill process 4355 (glusterfs) score 116 or sacrifice child
12:39 misko_ [81052.843242] Killed process 4355 (glusterfs) total-vm:294788kB, anon-rss:44148kB, file-rss:0kB
12:40 calum_ joined #gluster
12:40 misko_ But the thing i need to solve is heartbeat RA is unable to detect this situation
12:40 ndevos yes, well a killed process is similar to a crashed one
12:40 misko_ So i need to decide whether to fix RA or do something with gluster
12:41 ndevos you can detect it by running a 'stat -f /shared/some/dir'
12:42 ndevos out of memory does not need to be a gluster issue, but it could point to a memory leak
12:43 misko_ no no it's of course problem of my server
12:43 misko_ pacemaker spawned more domUs than was desired
12:44 misko_ but it should deterministically detect this situation and apply countermeasures
12:44 misko_ (automagically)
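
A minimal sketch of the check ndevos suggests, as a resource agent or cron monitor might run it; the mount point, timeout and lazy-unmount recovery are assumptions:

    #!/bin/sh
    # exit non-zero (and try to remount) when the fuse mount is dead
    if ! timeout 10 stat -f /shared >/dev/null 2>&1; then
        umount -l /shared                          # lazy unmount clears the stale fuse endpoint
        mount -t glusterfs gfs0:/diskimg /shared   # server/volume taken from the mount output above
        exit 1
    fi
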
12:44 meghanam joined #gluster
12:44 meghanam_ joined #gluster
12:49 abyss^ I have a question about volumes:) What is better: I have 100 web-apps and the static files live on gluster... Is it better to make a volume for every app (so 100 volumes) or keep it on one big volume?
12:52 ndevos abyss^: it all depends... but managing fewer volumes is often easier than managing many
12:53 ndevos abyss^: also, if you want to use snapshots (lvm based), each volume needs its own dedicated bricks - with 100 volumes that gets you *many* bricks
12:53 ndevos bricks/logical-volumes
12:53 ndevos and maybe even volumegroups
12:57 Fen1 joined #gluster
13:00 anoopcs joined #gluster
13:01 tetreis guys, when I go to http://www.gluster.org/community/documentation/index.php/GlusterFS_Concepts (section "Plus and Minus"), the first Disadvantage states that "If you lose a single server, you lose access to all the files that are hosted on that server. This is why distribute is typically graphed to the replicate translator.". But then the next section ("Replicate") says "Replicate is used for providing redundancy to both storage and generally to availability."
13:01 tetreis I was thinking replication was default in Gluster, am I wrong? I'll have 2 Gluster nodes serving other machines. What happens if I lose one of them? Am I in trouble? (cc: partner)
13:03 partner tetreis: you will need to define that when creating the volume
13:03 partner its really part of the creation command. and even after the volume has been created you can still change it to one or another
13:03 tetreis hmm
13:03 tetreis got it
13:05 bene joined #gluster
13:10 tetreis partner, ok, after your pointer I think I found good stuff on the "how" on https://github.com/gluster/glusterfs/blob/master/doc/admin-guide/en-US/markdown/admin_setting_volumes.md - thanks again!
13:12 partner tetreis: i do like to refer to the redhat docs when figuring out the initial things, i pasted the picture from that manual for you yesterday, here is the link to the actual page (which unfortunately seems to be down for me right now): https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.1/html/Administration_Guide/sect-User_Guide-Setting_Volumes-Replicated.html
13:12 partner and some things aren't exactly as-is anymore for fresher versions of gluster but the basics are all there in a nice format
13:13 tetreis same here (down)
13:13 partner oh, the link you pasted looks quite nice, with those same images and stuff
13:13 tetreis that's great. I browsed those guides yesterday, but will keep an eye on reloading the page
13:13 partner keep reading that one then
13:13 tetreis heh alright
13:13 poornimag joined #gluster
13:14 partner its just more commercial-looking and a bit better structured imo than the old community docs but lots of improvement on that area i see (not reading manuals too often anymore..)
13:19 abyss^ ndevos: hmmm It would be easier to mount because I could mount a separate volume for each web app (so there will be order:)). I'm wondering if it has any impact on performance or so (many volumes). So you suggested to use one volume for each webapp and take care not to write to another webapp's directory?
13:20 ndevos abyss^: if you mount over nfs you are able to mount a subdir on a volume too
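
A hedged example of such a subdirectory mount against the Gluster NFS server (NFSv3 only); the volume and path names are made up, and depending on the version an nfs.export-dir setting may be needed first:

    mount -t nfs -o vers=3 server1:/webvol/app42 /var/www/app42
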
13:25 kovshenin joined #gluster
13:27 calisto1 joined #gluster
13:27 meghanam_ joined #gluster
13:27 meghanam joined #gluster
13:32 partner any means to do a targeted fix-layout?
13:36 d-fence joined #gluster
13:37 abyss^ ndevos: I will mount via glusterfs client:)
13:37 abyss^ why you mentioned nfs?;)
13:38 ndevos abyss^: well, mounting over glusterfs/fuse does not have the feature to mount a subdir :)
13:38 abyss^ ;)
13:39 Alphamax left #gluster
13:39 abyss^ yes, that's why I'm wondering about one volume for one webapp:) The NFS client hasn't got as many nice features as the gluster client:)
13:42 tetreis partner, one more question: may I run this command from server1 (or server2)? # gluster volume create test-volume replica 2 transport tcp server1:/exp1 server2:/exp2
13:42 tetreis or should it be ran from server3, for example?
13:43 SOLDIERz joined #gluster
13:47 partner any box that is part of the trusted peer
13:47 B21956 joined #gluster
13:47 abyss^ tetreis: you can do that from any server which is in storage pool
13:47 partner don't have to be on the actual box to make it happen
13:48 abyss^ oh, partner is a nick name, sorry:D
13:48 partner heh
13:48 partner everybody are welcome to answer :)
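
For completeness, a minimal replica-2 setup end to end, following the command already quoted above; hostnames and brick paths are the ones from that example:

    gluster peer probe server2        # run once from server1 to form the trusted pool
    gluster volume create test-volume replica 2 transport tcp server1:/exp1 server2:/exp2
    gluster volume start test-volume
    gluster volume info test-volume   # verify type Replicate and both bricks listed
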
13:50 abyss^ ndevos: I'm just wondering about the disadvantages and advantages of having many volumes or not ;) Like performance, CPU, RAM (how much, impact), management (easier or harder), making backups, accessing the data, etc. ;) Maybe someone has experience or good advice? As I understand it, you opt for not having a lot of volumes, yes?
13:52 partner from management point of view less volumes is way easier.. you would need to provide bricks for each volume, times 100, then create the volumes times 100, create 100 fstab entries and what not
13:55 ndevos abyss^: indeed, I would prefer to have fewer volumes, and have multiple apps on a volume
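
To make the "times 100" overhead concrete, a sketch of what one volume per app implies; the app names, replica count and brick paths are hypothetical:

    for app in app001 app002 app003; do   # ... up to app100
        gluster volume create vol-$app replica 2 file1:/bricks/$app file2:/bricks/$app
        gluster volume start vol-$app
    done
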
13:55 poornimag joined #gluster
13:58 chirino joined #gluster
13:59 abyss^ nice arguments:) So maybe I will do two volumes for ACC environment and for PRD environment...
14:00 abyss^ the great disadvantage will be that every webapp will see the others' files and it will be more prone to mistakes...
14:03 partner hmm well that would be almost the same as with separate volumes, its the same webserver anyways using all of those and everything is probably mounted in to same place, just individual subdirs..
14:04 abyss^ no no, webservers will be separate, there will be 50-100 webapps on 50-100 servers (VMs).
14:05 partner ah
14:05 partner thought they were vhosts or something :o
14:07 ndevos hmm, if you have each app in a vm, mounting the volume that contain all the other apps feels a little dirty, yes
14:08 partner yeah, was thinking that too now as i know there's different vms involved
14:09 abyss^ yes. The idea is that: haproxy [dynamic] -> some_of_webapp_server(apache), haproxy [static] -> varnish -> if not in varnish -> haproxy -> some_of_the_webapp(apache) (it will be more complicated, but simplifying it will be something like this;)
14:09 abyss^ ok, sorry my bad:)
14:10 abyss^ So we know more facts now:) Of course gluster will be mounted on the webapps (where apache will be). Now what are you thinking about that?
14:12 abyss^ one large storage, many vms (sometimes only I suppose some application will be on one webserver (VM), because not every application will have a lot of requests).
14:13 abyss^ Some of webapps will have a lot of requests and simultaneous users.
14:17 glusterbot News from newglusterbugs: [Bug 1171142] RDMA: iozone fails with fwrite: Input/output error when write-behind translator is enabled <https://bugzilla.redhat.com/show_bug.cgi?id=1171142>
14:20 partner abyss^: sorry, don't have out-of-shelf answer to that as i haven't done such..
14:21 SOLDIERz joined #gluster
14:23 abyss^ partner: thank you:)
14:24 partner no point guessing or trying to provide wrong answers, i can only gain blame for such :)
14:24 kshlm joined #gluster
14:26 abyss^ haha, I'm only looking for opinions, of course I will take the decision and responsibility;)
14:32 kshlm joined #gluster
14:32 nbalacha joined #gluster
14:33 plarsen joined #gluster
14:34 partner hmm well if you can manage 100 webapps with all the haproxy and whatnot it shouldn't be that much more effort to take care of gluster mounts either
14:34 partner if i was your customer i would very much like to have my data somewhere "safe" from others in case some of the sites would be taken over
14:38 partner that would allow you also to move services to new locations customer by customer and not everything in one go (ie. dedicated volume instead of one large)
14:40 abyss^ partner: yes, thank you, every opinion is good for me:) That's why I'm rather thinking about separate volumes, but with 30-100 volumes it can be hard to provide sufficient infrastructure (every volume takes its own cache (memory), CPU etc..). I'm not sure about everything;)
14:40 mikedep333 joined #gluster
14:41 dberry joined #gluster
14:43 dberry in remove-brick if it is stopped, can it be restarted and will continue with the scan or start over?
14:50 SOLDIERz joined #gluster
14:53 nishanth joined #gluster
15:01 drankis joined #gluster
15:04 bennyturns joined #gluster
15:07 bala joined #gluster
15:09 wushudoin joined #gluster
15:12 Fen1 joined #gluster
15:13 virusuy joined #gluster
15:27 and` joined #gluster
15:33 jobewan joined #gluster
15:55 failshell joined #gluster
15:56 failshell is it normal for the metadata of a volume to take as much space as the data itself?
15:58 failshell im running out of space on one of my clusters. and i was wondering if there's a way to tidy up the metadata to reduce its disk footprint
16:01 tetreis joined #gluster
16:05 coredump joined #gluster
16:10 anoopcs joined #gluster
16:11 soumya__ joined #gluster
16:18 bennyturns joined #gluster
16:47 bennyturns joined #gluster
16:49 JoeJulian failshell: metadata is in the inode table. Doesn't really take up any space at all.
16:50 JoeJulian failshell: I assume you're referring to the .glusterfs directory. They're all hardlinks so they also don't take up any additional space.
16:50 failshell JoeJulian: ah
16:50 failshell gotcha
16:50 JoeJulian but the actual stuff called metadata is the extended attributes.
16:51 failshell so i do need to find something else to clean up :)
16:51 failshell thanks for the intel
16:52 JoeJulian "setfattr -n meta -v 'yo dawg'" <- metametadata
17:16 kumar joined #gluster
17:25 anoopcs joined #gluster
17:50 bennyturns joined #gluster
17:54 feeshon joined #gluster
18:00 PeterA joined #gluster
18:07 lmickh joined #gluster
18:07 russoisraeli joined #gluster
18:19 partner whee http://www.meetup.com/RedHatFinland/events/218774694/
18:31 vimal joined #gluster
18:43 gothos JoeJulian: Hey, you remember the find from yesterday? I had to adapt it to use -mindepth instead of */*/* since bash couldn't handle the full replaced expression.
18:43 gothos the bash process was around 5GB RES
18:48 JoeJulian gothos: Good point.
18:51 gothos anyway, it hasn't found anything so far and it's still running. Errors are still being printed.
19:00 lpabon joined #gluster
19:01 diegows joined #gluster
19:01 pkoro joined #gluster
19:01 nshaikh joined #gluster
19:14 rotbeard joined #gluster
19:31 faiz joined #gluster
19:37 m0ellemeister joined #gluster
19:55 coredump joined #gluster
20:16 JoeJulian @force brick
20:16 glusterbot JoeJulian: I do not know about 'force brick', but I do know about these similar topics: 'former brick'
20:16 JoeJulian Darn. I hoped I saved that factoid somewhere.
20:22 tetreis Let's say I have 2 servers on a replication setup. One of them goes down, and the remaining one has writing activities. When the first one comes back up, will some sort of auto-healing start or do I need to take care of it manually (i.e., run a command manually)?
20:25 JoeJulian it will auto start
20:32 tetreis ok, thanks JoeJulian
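
If you want to watch the healing, it can be queried per volume; a sketch with a placeholder volume name, valid on 3.3/3.4-era releases:

    gluster volume heal myvol info               # files still queued for self-heal
    gluster volume heal myvol info heal-failed   # entries the self-heal daemon could not fix
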
20:32 tetreis I think firewall is still messing up
20:39 marcoceppi joined #gluster
20:41 tetreis yeah, firewall
20:42 eightyeight joined #gluster
20:50 calisto joined #gluster
20:55 elico joined #gluster
20:58 JoeJulian @which brick
20:58 glusterbot JoeJulian: To determine on which brick(s) a file resides, run getfattr -n trusted.glusterfs.pathinfo $file through the client mount.
21:02 JoeJulian @learn specify brick as You can create a file on a specific dht subvolume by naming it like {filename}@{dht_subvolume}:dht where dht_subvolume is defined in the client vol file in the cluster/distribute section like {volname}-[replicate-]{subvolume_offset}, ie. myvol-replicate-2 will put it on the 3rd pair of replica 2 bricks in a volume named myvol.
21:02 glusterbot JoeJulian: The operation succeeded.
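
Both factoids in one example; the mount point and file names are made up, and the @subvolume trick follows the wording of the factoid above, so support may vary by version:

    # which brick(s) hold an existing file (run through the client mount)
    getfattr -n trusted.glusterfs.pathinfo /mnt/myvol/some/file

    # create a file on a specific dht subvolume, here the 3rd replica pair of volume "myvol"
    touch /mnt/myvol/newfile@myvol-replicate-2:dht
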
21:34 PeterA1 joined #gluster
21:39 elico left #gluster
21:46 plarsen joined #gluster
21:48 glusterbot News from newglusterbugs: [Bug 1171313] Failure to sync files to slave with 2 bricks. <https://bugzilla.redhat.com/show_bug.cgi?id=1171313>
21:55 bennyturns joined #gluster
22:16 PeterA joined #gluster
22:20 calum_ joined #gluster
22:48 elico joined #gluster
