IRC log for #gluster, 2014-11-10

All times shown according to UTC.

Time Nick Message
00:07 harish joined #gluster
00:36 ctria joined #gluster
00:41 diegows joined #gluster
00:46 JustinClift joined #gluster
01:02 badone joined #gluster
01:15 topshare joined #gluster
01:22 haomaiwa_ joined #gluster
01:25 choe joined #gluster
01:27 pradeepto joined #gluster
01:28 russoisraeli joined #gluster
01:30 russoisraeli JoeJulian - thanks a lot for your answer. If I have a replica and I add 2 more bricks, does it automatically become a distributed replica? Or does it need rebalancing to create a replica of distributed files?
01:30 russoisraeli *does it
01:30 haomaiwang joined #gluster
01:37 haomaiw__ joined #gluster
01:44 pradeepto joined #gluster
01:47 russoisraeli In general, the admin guide I found online, which covers the different translators with pictures, is great, but I think it also needs to include information about adding various brick combinations... i.e. what's possible, and what's not
01:49 toecutter joined #gluster
01:50 pradeepto joined #gluster
01:52 haomaiwang joined #gluster
01:55 haomai___ joined #gluster
01:56 pradeepto joined #gluster
02:09 pradeepto joined #gluster
02:15 meghanam joined #gluster
02:17 meghanam_ joined #gluster
02:46 ilbot3 joined #gluster
02:46 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/ | GlusterFS 3.6 test week - https://public.pad.fsfe.org/p/GlusterFS-3.6-test-doc
02:59 n-st joined #gluster
03:00 n-st_ joined #gluster
03:14 plarsen joined #gluster
03:14 cfeller joined #gluster
03:27 kshlm joined #gluster
03:29 dusmant joined #gluster
03:34 badone joined #gluster
03:42 meghanam joined #gluster
03:43 meghanam_ joined #gluster
03:44 kanagaraj joined #gluster
03:44 RameshN joined #gluster
03:59 rejy joined #gluster
04:03 harish joined #gluster
04:13 nbalachandran joined #gluster
04:21 shubhendu joined #gluster
04:24 kdhananjay joined #gluster
04:29 atinmu joined #gluster
04:33 meghanam_ joined #gluster
04:34 meghanam joined #gluster
04:38 ppai joined #gluster
04:40 sahina joined #gluster
04:43 SOLDIERz joined #gluster
04:44 rafi1 joined #gluster
04:44 Rafi_kc joined #gluster
04:45 anoopcs joined #gluster
04:48 soumya__ joined #gluster
04:49 ndarshan joined #gluster
04:51 RameshN joined #gluster
05:09 hightower4 joined #gluster
05:09 prasanth_ joined #gluster
05:14 atalur joined #gluster
05:20 hagarth joined #gluster
05:20 anoopcs joined #gluster
05:22 spandit joined #gluster
05:22 jiffin joined #gluster
05:25 topshare joined #gluster
05:29 gehaxelt joined #gluster
05:30 kumar joined #gluster
05:31 nishanth joined #gluster
05:31 aravindavk joined #gluster
05:33 overclk joined #gluster
05:36 saurabh joined #gluster
05:37 gehaxelt joined #gluster
05:38 overclk joined #gluster
05:38 lalatenduM joined #gluster
05:41 javi404 joined #gluster
05:42 javi404 so, i setup a gluster test environment, I am preparing to stream a movie from a samba share mounted on a node that is a client of gluster, and just exporting that share. What can I expect with node failures?
05:42 dusmant joined #gluster
05:42 javi404 replicated 3 nodes
05:43 javi404 4th box mounting /mnt/gluster from the first gluster node
05:43 javi404 exporting that /mnt/gluster over smb
05:43 javi404 I will be streaming a movie and pulling the first node.
05:44 gehaxelt joined #gluster
05:48 ndarshan joined #gluster
05:48 shubhendu joined #gluster
05:48 nishanth joined #gluster
05:51 bala joined #gluster
05:55 gehaxelt joined #gluster
05:55 ramteid joined #gluster
06:00 javi404 and I like gluster
06:03 gehaxelt joined #gluster
06:13 nshaikh joined #gluster
06:20 shubhendu joined #gluster
06:21 nishanth joined #gluster
06:21 ndarshan joined #gluster
06:23 gehaxelt joined #gluster
06:23 dusmant joined #gluster
06:25 bala joined #gluster
06:29 SOLDIERz joined #gluster
06:30 gehaxelt joined #gluster
06:35 gehaxelt joined #gluster
06:44 soumya joined #gluster
06:50 ctria joined #gluster
06:51 gehaxelt joined #gluster
06:52 R0ok_ joined #gluster
06:56 gehaxelt joined #gluster
06:57 vimal joined #gluster
07:02 gehaxelt joined #gluster
07:04 soumya joined #gluster
07:05 bala joined #gluster
07:09 overclk joined #gluster
07:10 gehaxelt joined #gluster
07:15 aravindavk joined #gluster
07:16 ppai joined #gluster
07:25 ekuric joined #gluster
07:29 elico joined #gluster
07:30 gehaxelt joined #gluster
07:33 topshare joined #gluster
07:34 gehaxelt joined #gluster
07:35 Fen2 joined #gluster
07:35 [Enrico] joined #gluster
07:38 _Bryan_ joined #gluster
07:48 Philambdo joined #gluster
07:51 T0aD joined #gluster
07:53 glusterbot New news from newglusterbugs: [Bug 1162060] gstatus: capacity (raw used) field shows wrong data <https://bugzilla.redhat.com/show_bug.cgi?id=1162060>
07:55 ppai joined #gluster
07:58 ricky-ticky joined #gluster
08:06 mbukatov joined #gluster
08:06 ekuric joined #gluster
08:09 liquidat joined #gluster
08:30 ppai joined #gluster
08:33 fsimonce joined #gluster
08:36 ws2k3 hello, i'm pretty new to gluster. i am creating a volume that does not have much IO, but HA is really important. should i make 2 machines that each have a mirror (raid) and then create one gluster mirror between the 2 servers? or should i make a gluster replicated volume with 4 disks (and not use a raid controller)?
08:38 javi404 ws2k3
08:38 javi404 i am not that new to gluster but am using it in AWS
08:38 javi404 I ran some tests in my lab
08:38 javi404 the hell with raid.
08:38 javi404 you just need 2 replicated nodes for HA
08:39 T0aD raid0 ftw
08:39 javi404 once you mount the gluster with the native client, it will get data from either of the nodes if one fails.
08:40 ws2k3 yes i know but i have 2 ways to go with replicated
08:40 javi404 explain?
08:40 ws2k3 i can make a mirror (raid) on both servers and 1 gluster brick on top of each mirror
08:41 ws2k3 or i can choose not to use the raid controller and make a replicated volume with 4 bricks (glusterfs)
08:41 ws2k3 i was wondering what would be the best way to go
08:42 ws2k3 i have 2 SSD's in each server
08:42 javi404 well it depends
08:42 javi404 if you do raid0 in each server.
08:42 javi404 you will have a N+1 setup
08:42 javi404 if you do raid1 in each, you will have an N+4 setup for data.
08:43 javi404 I would raid0 the SSD in each for performance since you know gluster will be replicated
08:43 javi404 N+1
08:45 ws2k3 so you would say stripe the ssd's and then make a replicated volume over the 2 servers?
08:45 javi404 yes
08:45 javi404 if a single disk or server dies, you will be fine.
08:45 vimal joined #gluster
08:46 javi404 nice performance with stripe on ssd
08:46 ws2k3 yes we have a lot of IO but we dont have a lot of traffic going on
08:46 ws2k3 HA is really important, performance doesn't have to be that great
08:48 bluedusk joined #gluster
08:48 javi404 well if you have lots of IO, how is that not important with regards to performance?
08:49 ws2k3 the volume is used to read and write small images and i think maybe a new image gets written once an hour or something
08:49 ws2k3 i only have 60 servers checking every minute for new text files
08:50 ws2k3 so the IO will be a lot, but it does not read/write much data, so traffic will be pretty low
08:51 topshare joined #gluster
08:54 ws2k3 if i have a cronjob, a script that scans the data every minute, can i access the brick directory? or should i access it using a mount?
08:54 javi404 so i learned a few things
08:54 javi404 never access the brick directly
08:54 javi404 always write and read from the mount
08:54 javi404 you can get data from the brick, but save that only for serious emergency.
08:55 javi404 i learned this inheriting a broken gluster
08:55 ws2k3 and is it not stupid to have a glusterfs server mount its own brick ?
08:55 ws2k3 sounds a bit illogical to me but just wondering
08:57 javi404 no its ok
08:57 javi404 you want to mount its own brick
08:57 javi404 but mount -t glusterfs
08:57 javi404 like that
08:57 javi404 GLUSTER-01:/gv0 on /mnt/gluster type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
08:57 javi404 not write to the brick directly ever
08:57 javi404 bad things happen
08:58 javi404 fstab: GLUSTER-01:/gv0                         /mnt/gluster            glusterfs  defaults     1 1
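For reference, a minimal sketch of the native-client mount javi404 is describing, reusing the GLUSTER-01:/gv0 volume and /mnt/gluster mount point shown above (names come from the log, not a definitive layout):

    # mount the volume through the FUSE native client, never the brick path itself
    mount -t glusterfs GLUSTER-01:/gv0 /mnt/gluster
    # confirm it shows up as fuse.glusterfs rather than a plain local filesystem
    mount | grep /mnt/gluster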
08:59 ws2k3 i understand
08:59 ws2k3 and how long are you using glusterfs now ?
08:59 javi404 been working with it for about a year now.
09:00 ws2k3 and are you happy with it? or whats your experience with it
09:04 overclk joined #gluster
09:04 javi404 well because i am using it in amazon AWS, there is no other option for high available NAS type storage.
09:04 javi404 so learned it by force.
09:04 javi404 by need.
09:04 javi404 its not pretty when things go wrong, but if you treat it right, its fine.
09:04 javi404 just like a woman.
09:07 ws2k3 ah you mean that in aws the only HA filesystem available is glusterfs ?
09:08 javi404 only NAS options that is widely used.
09:13 [Enrico] ws2k3: if HA is really important™ I would go for both raid1 + gluster replica
09:17 javi404 enrico, why?
09:17 javi404 N+1 not good enough?
09:22 ws2k3 i just discovered that i dont have raid controllers in them
09:22 ws2k3 so its going to be a replicated volume with 4 bricks i think
09:24 glusterbot New news from newglusterbugs: [Bug 1162095] gstatus: gstatus fails if the glusterd on local machine is not running. <https://bugzilla.redhat.com/show_bug.cgi?id=1162095>
09:24 javi404 thats fine.
09:25 javi404 i'd sleep well with that.
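A hedged sketch of the 2-server, 4-brick layout being discussed, with hypothetical host and brick paths; with replica 2, gluster pairs bricks in the order they are listed, so alternating servers keeps each replica pair on different machines:

    # 2 servers x 2 disks, no RAID: a distributed-replicate (2x2) volume
    gluster volume create gv0 replica 2 \
      server1:/export/sda/brick server2:/export/sda/brick \
      server1:/export/sdb/brick server2:/export/sdb/brick
    gluster volume start gv0
    gluster volume info gv0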
09:26 Norky joined #gluster
09:29 ws2k3 i can go searching for some raid controllers if needed, but its hard for me to estimate how much better raid + repl is than only repl with 4 bricks
09:31 javi404 why bother if you don't need them
09:31 javi404 you can do software raid
09:33 ws2k3 javi404 do you mean software raid as in linux make a md raid? or do you mean glusterfs now ?
09:34 javi404 mdraid
09:34 javi404 you have gluster
09:34 javi404 so why worry about what lies under it.
09:34 javi404 whole purpose is to survive server failure.
09:35 ricky-ticky1 joined #gluster
09:40 Norky safety
09:40 Norky what happens if a server fails, then while you are fixing it a disk in another server fails?
09:41 javi404 norkey, then launch a third node, hardware is cheap.
09:42 javi404 I have 3 replicas and that is just in my lab.
09:43 ws2k3 javi404 what is the advantage of mdraid over a 4 brick gluster replicated volume ?
09:52 ws2k3 cause i know when one brick already has newly written data this doesnt get transferred automatically, right? it only checks up with the cluster when a gluster client asks for that file. in my eyes that means a raid controller with a mirror/stripe used as one brick is more efficient than not using raid and making every disk a brick, or am i wrong here?
09:54 Norky an extra 50% hardware cost for servers with 22TB of disk attached aint my idea of "cheap" ;)
09:55 vimal joined #gluster
10:00 ctria joined #gluster
10:08 soumya joined #gluster
10:13 suliba joined #gluster
10:17 soumya joined #gluster
10:18 ppai joined #gluster
10:18 ws2k3 javi404 what's the advantage of mdraid over 4 brick glusterfs replicated ?
10:19 javi404 ws2k3, about to go to sleep
10:19 javi404 last response.
10:19 javi404 the only advantage I can think of is at the block read / write level under gluster's logic
10:19 javi404 you will get 2x ssd performance on each host
10:20 javi404 if having 2 copies of the data is enough for you, then that is fine.
10:20 javi404 Norky, who the hell said anything about 22TB of storage?
10:20 javi404 did I miss something?
10:20 javi404 g'night
10:21 [Enrico] javi404: sorry for the delay, my boss popped in and we had to go in the server room. Well "good enough" is relative, it depends on how important is your service
10:24 glusterbot New news from newglusterbugs: [Bug 1162119] DHT + rebalance :- file permission got changed (sticky bit and setgid is removed) after file migration <https://bugzilla.redhat.com/show_bug.cgi?id=1162119>
10:37 haomaiwa_ joined #gluster
10:51 diegows joined #gluster
10:52 kshlm joined #gluster
10:54 glusterbot New news from newglusterbugs: [Bug 1162125] glusterd can't create /var/run/glusterd.socket when SELinux is in enforcing mode <https://bugzilla.redhat.com/show_bug.cgi?id=1162125>
10:57 ctria joined #gluster
11:02 haomai___ joined #gluster
11:10 1JTAAXYYR joined #gluster
11:11 haomaiwa_ joined #gluster
11:24 glusterbot New news from newglusterbugs: [Bug 1157223] nfs mount via symbolic link does not work <https://bugzilla.redhat.com/show_bug.cgi?id=1157223>
11:36 glusterbot New news from resolvedglusterbugs: [Bug 1020154] RC bug: Total size of master greater than slave <https://bugzilla.redhat.com/show_bug.cgi?id=1020154>
11:43 soumya__ joined #gluster
11:46 SOLDIERz joined #gluster
11:47 geaaru joined #gluster
11:53 geaaru hi, i'm trying to add a node to an active gluster pool but on the new node i see messages like these: Could not find peer on which brick x.x.x.x:/brick1 and then on the active node I see State: Probe Sent to Peer
11:54 geaaru When i try to restart the glusterd daemon on the new node, the service goes into an error state. I use gluster 3.5.1. Is there an issue on my side or a bug? thanks in advance
11:54 glusterbot New news from newglusterbugs: [Bug 1162150] AFR gives EROFS when fop fails on all subvolumes when client-quorum is enabled <https://bugzilla.redhat.com/show_bug.cgi?id=1162150>
11:56 geaaru when I try to see peer status from the new node I see an error like this: 0-glusterfs: transport.address-family not specified. Could not guess default value from (remote-host:(null) or transport.unix.connect-path:(null)) options
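For context, a rough sketch of the probe/status flow geaaru is working through, with a placeholder hostname; a peer stuck in "Probe Sent to Peer" usually means the new node never answered, so connectivity and the glusterd logs on both sides are worth checking:

    # run from a node that is already in the trusted pool
    gluster peer probe newnode.example.com
    gluster peer status
    # on both nodes, the glusterd log is the first place to look
    less /var/log/glusterfs/etc-glusterfs-glusterd.vol.log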
11:59 meghanam_ joined #gluster
11:59 meghanam joined #gluster
12:08 edward1 joined #gluster
12:11 Fen1 joined #gluster
12:17 soumya__ joined #gluster
12:21 Slydder joined #gluster
12:21 Slydder hey all
12:22 Slydder ndevos: you there?
12:23 edwardm61 joined #gluster
12:30 al joined #gluster
12:30 foster joined #gluster
12:35 ndevos Slydder: I'm just going for lunch, I'll be back later
12:36 rafi1 joined #gluster
12:38 mdavidson joined #gluster
12:40 shubhendu joined #gluster
12:40 dusmant joined #gluster
12:42 Slydder ndevos: just wanted you to know. I have completely given up on guster fuse mounts and an doing a remote gluster nfs mount which is still about 10 times faster than a local fuse mount. not optimal but at least I have performance.
12:43 Rafi_kc joined #gluster
12:43 Slydder s/an/am/
12:43 glusterbot What Slydder meant to say was: ndevos: just wamted you to know. I have completely given up on guster fuse mounts and an doing a remote gluster nfs mount which is still about 10 times faster than a local fuse mount. not optimal but at least I have performance.
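A minimal sketch of the kind of NFS mount Slydder is referring to, assuming the built-in Gluster NFS server (which speaks NFSv3) and placeholder names; exact mount options vary by distribution:

    # mount a gluster volume over its built-in NFS server instead of FUSE
    mount -t nfs -o vers=3,mountproto=tcp server1:/gv0 /mnt/gluster-nfs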
12:44 Bardack joined #gluster
12:50 ekuric joined #gluster
12:59 ctria joined #gluster
12:59 anoopcs joined #gluster
13:01 RameshN joined #gluster
13:02 B21956 joined #gluster
13:05 ekuric joined #gluster
13:06 ricky-ticky joined #gluster
13:12 Rafi_kc joined #gluster
13:18 calisto joined #gluster
13:19 LebedevRI joined #gluster
13:21 Thilam joined #gluster
13:37 ricky-ticky joined #gluster
13:41 hagarth joined #gluster
13:42 kanagaraj joined #gluster
13:47 ndevos Slydder: sorry, I need a little more context to remember what you were doing...
13:49 _dist joined #gluster
13:52 kumar joined #gluster
13:54 rafi1 joined #gluster
13:55 virusuy joined #gluster
13:56 aravindavk joined #gluster
13:57 ws2k3 is there a gluster 3.6 debian repository ? i am unable to find it
13:59 ricky-ticky joined #gluster
14:00 rjoseph joined #gluster
14:05 ekuric joined #gluster
14:05 ndevos lalatenduM, Humble, semiosis: ^ do you know the status of that?
14:09 nishanth joined #gluster
14:11 dusmant joined #gluster
14:12 Guest74253 joined #gluster
14:14 plarsen joined #gluster
14:17 lalatenduM ws2k3, ndevos looks like debian is not available for 3.6 yet
14:17 hagarth maybe we need to ping pmatthai for debian packages
14:17 ndevos or semiosis?
14:18 lalatenduM ndevos, I think semiosis only does it for Ubuntu
14:18 ndevos I think he *can* for Debian too
14:19 ndevos he is an "uploader", whatever that means - https://tracker.debian.org/pkg/glusterfs
14:19 glusterbot Title: Debian Package Tracker - glusterfs (at tracker.debian.org)
14:19 * ndevos doesn't need to know, no need to explain that ;-)
14:21 lalatenduM kkeithley also wanted to build debian pkgs.
14:21 ctria joined #gluster
14:22 maveric_amitc_ joined #gluster
14:23 lalatenduM hagarth, do you know if Patrick Matthäi comes to the gluster irc channels, or in which channel we can find him?
14:24 hagarth lalatenduM: I would expect him in #debian, but don't see him right now.
14:24 lalatenduM hagarth, ok
14:26 lalatenduM hagarth, in long term we (pkg maintainers) should take this up actively, but may not be feasible now (in terms of time vs work)
14:26 lalatenduM ndevos, what say?
14:27 hagarth lalatenduM: yes, a single automated script that produces packages for all distributions would be nirvana!
14:27 lalatenduM :)
14:27 ndevos lalatenduM: I think he's here as the-me?
14:28 julim joined #gluster
14:28 lalatenduM ndevos, yup, looks like that :)
14:29 ws2k3 semiosis created a ppa, so yes, that's for ubuntu only
14:29 coredump joined #gluster
14:29 ndevos lalatenduM: I'm not interested in building packages for Debian, to be really engaged in that, I should be a Debian user too - and I'm not anymore
14:30 ws2k3 well, a ppa can work on debian, only the repo contains just .debs for ubuntu releases
14:30 lalatenduM ndevos, I can understand
14:30 lalatenduM the-me, is that you https://qa.debian.org/developer.php?email=pmatthaei%40debian.org?
14:33 theron joined #gluster
14:33 nbalachandran joined #gluster
14:37 partner +1 for debian :)
14:45 plarsen joined #gluster
14:48 topshare joined #gluster
14:48 topshare joined #gluster
14:49 topshare joined #gluster
14:53 nshaikh joined #gluster
14:54 kshlm joined #gluster
14:56 Durzo joined #gluster
14:57 Durzo Hi all, just upgraded from 3.4 to 3.5 and I notice the part of geo-replication.. it is not clear though whether the new geo-repl still supports local file:// based replication ?
14:59 topshare joined #gluster
15:05 topshare joined #gluster
15:06 julim joined #gluster
15:08 wushudoin joined #gluster
15:11 bene joined #gluster
15:27 DougBishop joined #gluster
15:31 topshare joined #gluster
15:33 _Bryan_ joined #gluster
15:34 topshare joined #gluster
15:37 glusterbot New news from resolvedglusterbugs: [Bug 1147486] AFR : Self-heal daemon incorrectly retries healing <https://bugzilla.redhat.com/show_bug.cgi?id=1147486> || [Bug 1104714] [SNAPSHOT]: before the snap is marked to be deleted if the node goes down than the snaps are propagated on other nodes and glusterd hungs <https://bugzilla.redhat.com/show_bug.cgi?id=1104714> || [Bug 1146200] The memories are exhausted quickly when handle
15:42 tdasilva joined #gluster
15:43 lmickh joined #gluster
15:50 harish joined #gluster
15:54 rotbeard joined #gluster
15:55 glusterbot New news from newglusterbugs: [Bug 1162230] quota xattrs are exposed in lookup and getxattr <https://bugzilla.redhat.com/show_bug.cgi?id=1162230>
16:02 lpabon joined #gluster
16:08 nbalachandran joined #gluster
16:13 verdurin joined #gluster
16:15 rwheeler joined #gluster
16:20 bennyturns joined #gluster
16:23 soumya__ joined #gluster
16:24 lpabon joined #gluster
16:31 kumar joined #gluster
16:34 RameshN joined #gluster
16:38 cfeller_ joined #gluster
16:43 dusmant joined #gluster
16:45 bala joined #gluster
16:45 ^rcaskey joined #gluster
16:46 aravindavk joined #gluster
16:59 toecutter joined #gluster
17:02 kanagaraj joined #gluster
17:05 _Bryan_ joined #gluster
17:15 RameshN joined #gluster
17:17 PeterA joined #gluster
17:18 ProT-0-TypE joined #gluster
17:19 davemc JoeJulian, would you be up for a quick video chat on your new role as Chair for Gluster?
17:19 ricky-ticky joined #gluster
17:21 virusuy hi everyone
17:22 elico hey virusuy
17:22 elico how are you?
17:22 virusuy elico: good, fighting with a strange issue on our gluster servers
17:22 elico what version? what OS?
17:22 virusuy in certain folders, I see a duplicated sub-folder
17:23 elico virusuy: ...?? what OS ? what version?
17:24 virusuy ubuntu 12.04  | glusterfs 3.4.0
17:25 elico really weird..
17:25 virusuy those sub-folder (duplicated) are using the same inode number
17:25 elico hard links?
17:25 virusuy indeed
17:25 elico so these are hardlinks... what is the question?
17:25 virusuy they shouldn't
17:26 virusuy i mean, no one created those hard-links
17:26 elico does glusterfs use hard links in any way?
17:26 virusuy i don't know
17:26 virusuy but, this is really weird
17:27 virusuy i'm trying to reproduce this issue
17:27 virusuy in order to find "a path" to follow
17:27 elico as far as I know, basic glusterfs should not be using hardlinks.
17:28 DoctorWedgeworth joined #gluster
17:30 DoctorWedgeworth I have a two brick replicated volume on two servers and I wanted to move it to two new servers. Adding bricks doesn't seem to work with replica 2. If I move one of the bricks to a new server, will clients connected via NFS to the old brick location still be able to access the data?
17:30 jobewan joined #gluster
17:35 harish joined #gluster
17:40 PeterA How to deal with the heal-failed with only gfid?
17:40 PeterA tried the gfid-resolver.sh and it links to no file… :(
17:43 hagarth PeterA: you can follow the gfid from .glusterfs in the brick directory
17:43 hagarth a gfid of form 0xabcdef.. would be located in  .glusterfs/ab/cd/ directory
17:44 PeterA say 262e4810-b493-40b9-8bb3-74e81dd74044
17:45 hagarth get into .glusterfs/26/2e/
17:46 PeterA getting this: -rw-r--r-- 1 dtran webqa 107750 Oct  1  2012 262e4810-b493-40b9-8bb3-74e81dd74044
17:46 glusterbot PeterA: -rw-r--r's karma is now -12
17:47 plarsen joined #gluster
17:47 hagarth hmm, you can probably cat the file to identify it from contents
17:47 PeterA yes i can!
17:47 hagarth the hardlink count seems to be 1 .. is this an unlinked file by any chance?
17:48 PeterA but how can I tell which directory it's located in on the gfs?
17:48 PeterA this is a replicate 2 gfs
17:48 PeterA if a brick is down, would that lost the link?
17:49 hagarth PeterA: unlikely for the link to be lost if the brick is down
17:49 PeterA hmm....
17:50 lalatenduM joined #gluster
17:50 hagarth is the link count same in both bricks?
17:51 PeterA oh…..i think the replica is on the brick that is down
17:52 daMaestro joined #gluster
17:53 PeterA I am trying to look for the same gfid in the cluster and it seems like it's on the brick of the node that is down...
17:53 hagarth PeterA: is the gfid present only on the brick present on the node that is down?
17:54 PeterA no
17:54 PeterA at least i can see that on one of the nodes that is up :)
17:54 PeterA i have a 6 node gluster
17:54 PeterA one is down :(
17:55 PeterA and getting tons of heal-failed
17:55 PeterA but only shown as gfid
17:56 spoxaka joined #gluster
17:57 hagarth PeterA: can you perform a ls -ld on .glusterfs/26/2e
17:57 PeterA drwx------ 2 root root 4096 Nov  9 09:07 2e
17:58 glusterbot PeterA: drwx----'s karma is now -1
18:02 PeterA seems like it has 2
18:04 hagarth PeterA: can you not figure out the directory from the file's contents?
18:05 PeterA ?
18:05 PeterA have millions of files….with that user and group....
18:05 PeterA and having 1024 heal-failed entries...
18:09 hagarth PeterA: and similar content across files?
18:10 PeterA that will be super hard but if that's the only way....
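A hedged sketch of the gfid-to-path lookup hagarth describes above, using the gfid from this conversation and a placeholder brick path; for regular files whose link count is greater than 1, find -samefile on the brick reveals the real name:

    BRICK=/export/brick1   # placeholder, use the actual brick path
    GFID=262e4810-b493-40b9-8bb3-74e81dd74044
    ls -l $BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID
    # if the link count is >1, find the hard-linked path outside .glusterfs
    find $BRICK -samefile $BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID -not -path '*/.glusterfs/*'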
18:10 and` left #gluster
18:11 and` joined #gluster
18:14 DoctorWedgeworth I'm trying to migrate a brick to a new server (that's only just been installed). I've used gluster peer probe and it's connected, but if I try to use the same path for the new brick it says the brick is already in use. If I try to use another path it says brick does not exist. I assume I'm missing a simple step?
18:24 jackdpeterson joined #gluster
18:26 jackdpeterson Hello all, last week I added a brick to my glusterFS instance on Ubuntu 12.04 - Gluster 3.5.2-ubuntu1~precise1. Set-up: 2x nodes, 1 brick. Adding an additional brick I encountered what I believe is https://bugzilla.redhat.com/show_bug.cgi?id=1113050
18:26 glusterbot Bug 1113050: high, unspecified, ---, kdhananj, ASSIGNED , Transient failures immediately after add-brick to a mounted volume
18:26 jackdpeterson Is this bug something that is being fixed as of 3.6?
18:29 jackdpeterson and/or has someone found a safe way to add a brick and get permissions fixed without a storage availability outage
18:31 justinmburrous joined #gluster
18:33 DoctorWedgeworth can I not use replace-brick to move one brick from a replica-2 volume to somewhere else? Do I need to delete a brick first?
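A hedged sketch of the replace-brick approach being asked about, with placeholder names; behaviour changed across 3.x releases, so this is only the general shape, not a recipe for the exact version in use:

    # swap one replica member for a brick on the new server, then let self-heal repopulate it
    gluster volume replace-brick VOLNAME oldserver:/export/brick newserver:/export/brick commit force
    gluster volume heal VOLNAME full
    gluster volume heal VOLNAME info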
18:46 diegows joined #gluster
18:52 zerick joined #gluster
18:54 ricky-ticky joined #gluster
19:02 _Bryan_ joined #gluster
19:05 daMaestro joined #gluster
19:05 rasinhas joined #gluster
19:10 theron joined #gluster
19:11 ws2k3 i have 2 servers both the servers have 3 disks(1 os 2 for data) should i do a software raid and make it 2 bricks ? or should i do no software raid and make the glusterfs volume 4 bricks ?
19:11 diegows joined #gluster
19:15 Maitre RAID0 two bricks.  ;)
19:20 ws2k3 Maitre so you would say make a software raid with 2 bricks ?
19:21 calisto joined #gluster
19:21 ws2k3 Maitre i am a little scared that if a disk fails it will break the original data too, or is that not possible?
19:26 thermo44 joined #gluster
19:26 SOLDIERz joined #gluster
19:27 thermo44 Hello guys!! What's the most stable version of glusterfs for centos?
19:27 thermo44 by default gives me 3.6 in epel
19:27 lalatenduM joined #gluster
19:27 thermo44 glusterfs-3.6.1-1.el6.x86_64.rpm
19:45 JoeJulian thermo44: 3.6 was just released last week. Personally, I'm recommending the 3.5 series for production.
19:46 thermo44 Joe, I tried to install the 3.5.2 and it says I don't meet the dependencies,
19:47 thermo44 what packages should I download to install manually?
19:47 thermo44 JoeJulian, 3.5 series it is...
19:51 JoeJulian jackdpeterson: It's a bug that is being fixed, but apparently did not make the 3.6.[01] release. I'm not sure what you mean about "permissions fixed". I didn't see anything in that bug about that.
19:51 jackdpeterson ah, permissions -- came up as part of NFS stale handlers
19:52 JoeJulian ws2k3: Totally depends on your use case. Engineer to meet your needs, don't define needs based on expectations.
19:52 jackdpeterson JoeJulian: so performing a chmod or chown during that transient failure would cause issues. I was able to reproduce the error once outside of production on Friday. After rebooting the servers I've so far been unable to reproduce it. I'll keep playing around though
19:53 JoeJulian That's only using NFS mounts?
19:53 elico left #gluster
19:54 jackdpeterson JoeJulian: Yeah, I'm using NFS only mounts due to performance on the small files (PHP shop). Each datacenter mounts its respective gluster server
19:55 jackdpeterson and by DC I really mean Availability zone in AWS .. so it's actually in the same datacenter ... latencies are reasonably low
19:58 MacWinner joined #gluster
19:59 JoeJulian The only way I know of expiring fscache is to remount.
20:00 JoeJulian But that's what you're trying to do, expire fscache entries. Maybe google will have some better answers than that once you know where the problem probably is, I'm just not an nfs expert.
20:06 Mexolotl joined #gluster
20:06 Mexolotl hi all :)
20:14 cultav1x joined #gluster
20:17 elico joined #gluster
20:27 SOLDIERz joined #gluster
20:46 elico joined #gluster
20:51 Mexolotl hey I got a question, I want to build a distributed owncloud running on banana pi's with mysql cluster and glusterfs. Actually I've got just one banana pi. So my question is: can I set up the geo replication later and install/run my owncloud until I get the second pi?
20:55 Mexolotl nobody here?
20:56 Mexolotl ...
20:59 chirino joined #gluster
21:06 Telsin you can certainly enable the geo-rep later, Mexolotl
21:06 julim joined #gluster
21:06 Telsin if you mean: can you start with a 1 node glusterfs "cluster" to be able to expand later, the answer is yes
21:09 Mexolotl okay thanks, so the best way is to start with a geo rep 1 node glusterfs cluster :)
21:09 Telsin you going to have the banana pi's in different physical locations?
21:10 Mexolotl yepp thats the plan
21:10 Telsin geo-rep is one way, that may not work the way you're thinking it should
21:11 Mexolotl okay what would be the best way? The idea is to get two synched ownclouds at two (or more) different locations
21:12 Mexolotl *two
21:12 mucahit joined #gluster
21:13 Telsin well, you can build a gluster cluster across a wan, but your write latencies are going to suck ;)
21:13 Mexolotl hmm
21:13 Telsin I'd think about rsync, or a write master and read only distributed nodes
21:14 Telsin or do more research, those are just what popped into my head right now
21:14 mucahit Hello everybody, I installed and configured glusterfs. It works wonderfully and I made some benchmarks with dd and the results are crazy. Then I tried to install some packages and it's very very very slow; deleting a 700mb directory takes 10mins.
21:16 Mexolotl I don't need a real time sync, it would be fine to have a 24 hours sync...
21:16 Telsin rsync is probably the simplest solution for you then, especially if you aren't worried about simultaneous writes
21:16 Mexolotl but also the performance of my banana pi would go down for the normal cloud usage right?
21:17 anneb joined #gluster
21:17 Telsin if you wanted to do a write master, geo-rep could work for that
21:17 Telsin how so? you're still reading local files
21:18 Mexolotl hmm okay, I wil ltake a look to rsync
21:20 anneb Hi glusterfs, I am having boot-problems mounting glusterfs-client on glusterfs at the localhost on Ubuntu 14.04, glusterfs 3.4.2 version. Googled and found others that have the problem, but no real solutions yet.
21:21 anneb Any good pointers to docs I could check?
21:21 calisto joined #gluster
21:27 Mexolotl okay thank you telsin, I will give rsync a try
21:30 Telsin mucahit: 700mb directory of many small files? try it with an NFS mount, or something that's libgfapi aware, instead of over a fuse mount.
21:31 Telsin it has to confirm each file change on all your nodes, so you take a lot of latency hits on tons of small files
21:32 elico joined #gluster
21:33 Mexolotl ahh okay sry my mistake. Yes I want to do a write master, as I want to synchronize two clouds (two servers at different locations with different main users - the clouds will have the same users but different main users)
21:34 Mexolotl so geo rep will fit for me, hope that the performance won't be that poor ...
21:34 mucahit Telsin: I am using gluster for my Openstack. I am connecting it over cinder driver
21:35 Telsin that method probably won't be so bad
21:35 Telsin I use geo-rep to backup my hosted nodes to a remote loc behind a cable modem, ~70meg throughput capable, and once it got synced, it seemed to keep up pretty well
21:39 ctria joined #gluster
21:39 Mexolotl is there a howto to set up a one node geo rep cluster? I'm totally new to cluster at all
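For reference, a rough sketch of the 3.5-era geo-replication CLI with placeholder volume and host names; the prerequisites (a slave volume, passwordless SSH, pem setup) are covered in the upstream geo-rep docs and are assumed here:

    # run on the master side, once mastervol and slavevol exist
    gluster volume geo-replication mastervol slavehost::slavevol create push-pem
    gluster volume geo-replication mastervol slavehost::slavevol start
    gluster volume geo-replication mastervol slavehost::slavevol status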
21:41 JoeJulian Mexolotl: If I were doing what you're attempting, I would probably set my Pi's up as swift nodes and use that for my replication with the owncloud front end.
21:43 Mexolotl JoeJulian: what's the difference between glusterfs and swift?
21:43 JoeJulian mucahit: That delay is probably openstack zeroing out the image before it deletes it.
21:45 mucahit Thanks joejulian I will check it but its really wierd
21:46 mucahit I was thinking to move my mail server to glusterfs but i think I cant do it with openstack+gluster
21:47 _Bryan_ joined #gluster
21:47 samsaffron___ joined #gluster
21:54 JoeJulian mucahit: I would just make sure that you've configured openstack to use libgfapi.
21:57 _Bryan_ joined #gluster
21:59 mucahit JoeJulian: I followed this document http://www.gluster.org/community/documentation/index.php/GlusterFS_Cinder
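A hedged sketch of the cinder-side settings that document covers, as they looked in that era; option names and paths here are assumptions and should be checked against the OpenStack release in use:

    # /etc/cinder/cinder.conf
    volume_driver = cinder.volume.drivers.glusterfs.GlusterfsDriver
    glusterfs_shares_config = /etc/cinder/glusterfs_shares

    # /etc/cinder/glusterfs_shares - one gluster volume per line
    server1:/gv0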
21:59 _BryanHM_ joined #gluster
22:00 elico joined #gluster
22:03 sijis left #gluster
22:05 ctria joined #gluster
22:07 anneb ubuntu 14.04 boot problem, continued: when mounting the remote glusterfs in fstab, there are some [FAIL] messages during boot, but at least the system starts and the client is mounted
22:08 ws2k3 JoeJulian i have 2 servers with 3 disks each (1 OS, 2 data). what would you recommend: raid1 (linux) and 2 bricks, raid0 (linux) and 2 bricks, or 4 bricks in glusterfs? (i want to make a replicated volume) IO is not really that important but its very critical data
22:09 Maitre ws2k3: if your gluster replication is working correctly, you won't lose any data.  :P
22:09 Maitre But yeah the downside is, if you lose any single drive, you have to rebuild the whole array from nothing.
22:10 Maitre Not much different from rebuilding a raid10 array really, in terms of data transfer.
22:10 Maitre (But with gluster you gotta rebuild manually, afaik.)
22:10 theron joined #gluster
22:10 Maitre (Note: I've only been using gluster for like 2 weeks, here.)  XD
22:11 ws2k3 Maitre yes but if i want to go for safety should i do raid1 in linux or just 4 bricks in gluster ?
22:11 ws2k3 raid1 with 2 bricks or just 4 bricks and no raid
22:12 Maitre RAID1 would be more convenient.
22:12 Maitre Possibly more efficient, I'm not sure.
22:13 Maitre I would guess data writes would be about the same either way.
22:29 anneb ubuntu local mount as described here https://github.com/gluster/glusterfs/tree/master/extras/Ubuntu doesn't work for me.
22:29 glusterbot Title: glusterfs/extras/Ubuntu at master · gluster/glusterfs · GitHub (at github.com)
22:30 anneb Anyone have glusterfs on ubuntu 14.04 locally mounted and boot working?
22:30 _dist joined #gluster
22:31 Maitre Yeah I do.
22:31 Maitre But I just put the mount commands in rc.local.
22:32 Maitre Of course it doesn't work from mountall, the service isn't running yet at that point in the boot process.
22:33 anneb I suppose the ubuntu upstart system somehow supports service dependencies?
22:34 calisto1 joined #gluster
22:35 anneb I will have a look at the rc.local options (or use @reboot in crontab?)
22:40 badone joined #gluster
22:49 Maitre What is it you want to do?  Just mount automatically on bootup?
22:50 Maitre I'm not sure what @reboot is.  rc.local is just a shell script that runs at the end of boot sequence.
22:54 coredump joined #gluster
23:17 longshot902 joined #gluster
23:40 anneb Thanks, I used rc.local (sort of autoexec.bat?), mounting now works after boot.
23:41 anneb I will add a comment in /etc/fstab so that future admins understand what is going on
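A minimal sketch of the rc.local workaround anneb settled on, assuming the localhost volume and mount point names used earlier in the log; the other common approach on Ubuntu of that vintage is an fstab entry with the nobootwait/_netdev options so a failed mount does not block boot:

    #!/bin/sh -e
    # /etc/rc.local - mount the local gluster volume once glusterd is up
    mount -t glusterfs localhost:/gv0 /mnt/gluster || true
    exit 0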
23:42 Durzo Hi all, just upgraded from 3.4 to 3.5 and I notice the part of geo-replication.. it is not clear though whether the new geo-repl still supports local file:// based replication ?
23:46 _Bryan_ joined #gluster
